GitHub Copilot is a popular AI pair programmer. By routing Copilot through Portkey, you gain enterprise-grade controls: unified model access, spend governance, observability, guardrails, and reliability features — without changing your editors or workflows.
This is an experimental integration. Only GitHub Copilot Chat works with Portkey for now; advanced Copilot features are not yet available.

1. Set up Portkey

Portkey lets you use 1600+ LLMs with GitHub Copilot via a simple OpenAI-compatible endpoint. We’ll create a model routing config and attach it to a Portkey API key.
Step 1: Create or verify your Integration

Go to Integrations and connect your provider (e.g., OpenAI, Anthropic).
  1. Click Connect on your provider
  2. Enter a Name and Slug
  3. Provide provider credentials (API key and other required details)
  4. Finish model provisioning
On the provisioning screen, you can keep default model selections or customize them.
Step 2: Copy the model slug

  1. Open Model Catalog → Models
  2. Click your desired model (example: OpenAI’s GPT‑4o)
  3. Copy its slug (e.g., @openai-dev/gpt-4o)
You can click Run Test Request here to validate your integration. If you see a permissions error, create a User API Key first under API Keys.
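If you prefer validating from code instead of the dashboard, a minimal sketch with the openai Python package should work against Portkey's OpenAI-compatible endpoint (the slug below is the example from above; substitute your own):

from openai import OpenAI

client = OpenAI(
    api_key="PORTKEY_API_KEY",             # your Portkey User API Key
    base_url="https://api.portkey.ai/v1",  # Portkey's OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="@openai-dev/gpt-4o",  # the slug copied in this step
    messages=[{"role": "user", "content": "Say hello"}],
)
print(response.choices[0].message.content)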
Step 3: Create a Config in Portkey

Create a routing config that pins your Copilot traffic to the model from the previous step.
  1. Go to Configs
  2. Create a new config with:
{
	"override_params": {
		"model": "@YOUR_SLUG_FROM_PREVIOUS_STEP" 
	}
}
  3. Give it a Name and Save
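Configs can carry more than override_params. As a hedged sketch, the same config with automatic retries added (the retry field follows Portkey's config schema; adjust to your setup):

{
	"retry": { "attempts": 3 },
	"override_params": {
		"model": "@YOUR_SLUG_FROM_PREVIOUS_STEP"
	}
}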
Step 4: Attach the Config to a Portkey API Key

Create an API key and attach your default config.
  1. Go to API Keys
  2. Click Create
  3. Choose the Config you created above
  4. Save your API key securely
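To verify the default config is being applied, send a request with this new key. Because the config's override_params pins the model, the placeholder model name below should be rewritten to your chosen slug (a sketch, assuming the openai Python package):

from openai import OpenAI

client = OpenAI(
    api_key="PORTKEY_API_KEY_WITH_CONFIG",  # the key created in this step
    base_url="https://api.portkey.ai/v1",
)

response = client.chat.completions.create(
    model="placeholder",  # expected to be overridden by the attached config
    messages=[{"role": "user", "content": "Which model are you?"}],
)
print(response.model)  # should report the pinned model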

Enforce Default Configs

Learn how to enforce the attached config and optionally disable overrides.
🎉 Step 1 complete! You now have a Portkey API key with a default config that selects your model.

2. Integrate Portkey with GitHub Copilot

Copilot lets you manage models by provider. We’ll configure it via the Azure provider option and point it to Portkey’s OpenAI-compatible endpoint.
Step 1: Open Manage Models in Copilot

  1. In the Copilot chat view, click the current model dropdown
  2. Click Manage Models…
You’ll see a list of providers.
Step 2: Select Azure and configure a new model

  1. Choose Azure
  2. Click the gear icon next to Azure
  3. Click Configure models → Add a new model
Fill in the details:
  • Identifier: a unique key for this model, e.g., portkey-model
  • Display name: e.g., Custom Portkey Model
  • API endpoint URL: https://api.portkey.ai/v1/chat/completions
  • Capabilities: enable Tools, Vision, and Thinking (as needed for your use)
  • Maximum context tokens: use your provider’s documented limit; keep defaults if unsure
  • Maximum output tokens: set per your usage; adjust later if needed
After saving, you should see your Custom Portkey Model in Copilot’s model list.
Step 3: Provide your Portkey API key

  1. From Manage Models…, select Azure
  2. Pick the model you just created
  3. In the API Keys section, paste your Portkey Workspace API Key (the one with the default config from Step 1)
  4. Save
You can now use your Portkey-routed model in Copilot chat.
✅ Copilot is now integrated with Portkey. Your requests will go through Portkey with the configured routing, guardrails, and analytics.

Portkey Features

Now that you have an enterprise-grade GitHub Copilot Chat setup, let’s explore the features Portkey provides for secure, efficient, and cost-effective AI operations.

1. Comprehensive Metrics

Using Portkey, you can track 40+ key metrics, including cost, token usage, response time, and performance, across all your LLM providers in real time. You can also filter these metrics based on custom metadata. Learn more about metadata here.

2. Advanced Logs

Portkey’s logging dashboard provides detailed logs for every request made by GitHub Copilot Chat. These logs include:
  • Complete request and response tracking for debugging
  • Metadata tags for filtering by team or project
  • Cost attribution per task
  • Complete conversation history with Copilot Chat

3. Unified Access to 1600+ LLMs

You can easily switch between 1600+ LLMs: call Anthropic, Gemini, Mistral, Azure OpenAI, Google Vertex AI, AWS Bedrock, and many more by simply changing the model slug in your default config.
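For example, moving Copilot from OpenAI to Anthropic can be a one-line edit to the config attached to your API key (the slug below is illustrative; use a slug from your own Model Catalog):

{
	"override_params": {
		"model": "@anthropic-dev/claude-sonnet-4"
	}
}

No Copilot-side changes are needed; the same Portkey API key keeps working.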

4. Advanced Metadata Tracking

Using Portkey, you can add custom metadata to your LLM requests for detailed tracking and analytics. Use metadata tags to filter logs, track usage, and attribute costs across engineering teams and projects.
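Copilot itself cannot attach custom headers, but any script or service that calls Portkey’s endpoint directly can send metadata as a JSON string in the x-portkey-metadata header. A rough sketch, assuming the openai Python package (metadata keys below are illustrative):

import json
from openai import OpenAI

client = OpenAI(
    api_key="PORTKEY_API_KEY",
    base_url="https://api.portkey.ai/v1",
)

response = client.chat.completions.create(
    model="@openai-dev/gpt-4o",  # illustrative slug from your Model Catalog
    messages=[{"role": "user", "content": "Refactor this function"}],
    # Metadata rides along as a header and becomes filterable in logs/analytics
    extra_headers={"x-portkey-metadata": json.dumps({"_user": "alice", "team": "platform"})},
)
print(response.choices[0].message.content)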


5. Enterprise Access Management

Portkey centralizes access management for your Copilot deployment: provider credentials live in one place, and workspace API keys can carry budget and rate limits, giving platform teams spend governance across the organization.

6. Reliability Features
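Portkey can make Copilot traffic more resilient with automatic retries, fallbacks across providers, load balancing, and request timeouts, configured in the same config attached to your API key. A hedged sketch of a fallback-with-retries config (model slugs are illustrative; field names follow Portkey’s config schema):

{
	"strategy": { "mode": "fallback" },
	"retry": { "attempts": 2 },
	"targets": [
		{ "override_params": { "model": "@openai-dev/gpt-4o" } },
		{ "override_params": { "model": "@anthropic-dev/claude-sonnet-4" } }
	]
}

If the first target fails, Portkey retries and then falls over to the second target, so Copilot keeps working through provider incidents.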

7. Advanced Guardrails

Protect your codebase and enhance reliability with real-time checks on LLM inputs and outputs. Leverage guardrails to:
  • Prevent sensitive code or API key leaks
  • Enforce compliance with coding standards
  • Detect and mask PII in generated code
  • Filter inappropriate code generation
  • Apply custom security rules for your organization
  • Run compliance checks against internal coding policies
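Guardrails are created in the Portkey app and then referenced from your config. A minimal sketch of attaching them (guardrail IDs are illustrative; key names follow Portkey’s guardrails docs):

{
	"input_guardrails": ["my-input-guardrail"],
	"output_guardrails": ["my-output-guardrail"],
	"override_params": {
		"model": "@YOUR_SLUG_FROM_PREVIOUS_STEP"
	}
}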

Guardrails

Implement real-time protection for your Copilot interactions with automatic detection and filtering of sensitive content, PII, and custom security rules, while maintaining compliance with organizational policies.

FAQs

Which endpoint URL should I use?
Use https://api.portkey.ai/v1/chat/completions for OpenAI-compatible chat completions.

Which capabilities should I enable for the custom model?
Enable the ones your workflows need (Tools/Function Calling, Vision, Thinking). You can change them later.

How do I make sure Copilot always uses my chosen routing?
Attach a default config to the API key and optionally disable overrides. See Enforcing Default Configs.

How do I track usage across teams?
Issue separate API keys per team or use metadata tags, and monitor everything in the analytics dashboard.

Next Steps

Join our Community
For enterprise support or custom features, contact our enterprise team.