This is an experimental integration. Only GitHub Copilot Chat works with Portkey; advanced Copilot features are not available yet.
1. Set up Portkey
Portkey lets you use 1600+ LLMs with GitHub Copilot via a simple OpenAI-compatible endpoint. We’ll create a model routing config and attach it to a Portkey API key.

Step 1: Create or verify your Integration
Go to Integrations and connect your provider (e.g., OpenAI or Anthropic).
- Click Connect on your provider
- Enter a Name and Slug
- Provide provider credentials (API key and other required details)
- Finish model provisioning

On the provisioning screen, you can keep default model selections or customize them.
Step 2: Copy the model slug
- Open Model Catalog → Models
- Click your desired model (example: OpenAI’s GPT‑4o)
- Copy its slug (e.g., `@openai-dev/gpt-4o`)

You can click Run Test Request here to validate your integration. If you see a permissions error, create a User API Key first under API Keys.
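If you prefer to verify outside the dashboard, the same check can be made with any OpenAI-compatible HTTP client. A minimal sketch of the request shape, assuming the `x-portkey-api-key` header and the example slug above (substitute your own key and slug):

```python
import json

# Placeholder values -- substitute your own Portkey API key and model slug.
PORTKEY_API_KEY = "YOUR_PORTKEY_API_KEY"
MODEL_SLUG = "@openai-dev/gpt-4o"  # slug copied from the Model Catalog

# Portkey exposes an OpenAI-compatible endpoint, so the body is a standard
# chat-completions payload; the model field carries the catalog slug.
url = "https://api.portkey.ai/v1/chat/completions"
headers = {
    "Content-Type": "application/json",
    "x-portkey-api-key": PORTKEY_API_KEY,
}
payload = {
    "model": MODEL_SLUG,
    "messages": [{"role": "user", "content": "Say hello"}],
}

# Send with any HTTP client, e.g.:
#   requests.post(url, headers=headers, data=json.dumps(payload))
print(json.dumps(payload, indent=2))
```

A 200 response here confirms the integration before you touch Copilot at all.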
Step 3: Create a Config in Portkey
Create a routing config that pins your Copilot traffic to the model from the previous step.
- Go to Configs
- Create a new config that pins traffic to your model slug
- Give it a Name and Save
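As a sketch, a config that pins all traffic to one model can use `override_params` (the slug below is the example from the previous step; substitute your own):

```json
{
  "override_params": {
    "model": "@openai-dev/gpt-4o"
  }
}
```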

Step 4: Attach the Config to a Portkey API Key
Create an API key and attach your default config.
- Go to API Keys
- Click Create
- Choose the Config you created above
- Save your API key securely

Enforce Default Configs
Learn how to enforce the attached config and optionally disable overrides.
🎉 Step 1 complete! You now have a Portkey API key with a default config that selects your model.
2. Integrate Portkey with GitHub Copilot
Copilot lets you manage models by provider. We’ll configure it via the Azure provider option and point it to Portkey’s OpenAI-compatible endpoint.

Step 1: Open Manage Models in Copilot
- In the Copilot chat view, click the current model dropdown
- Click Manage Models…


Step 2: Select Azure and configure a new model
- Choose Azure
- Click the gear icon next to Azure
- Click Configure models → Add a new model
- Identifier: a unique key for this model, e.g., `portkey-model`
- Display name: e.g., `Custom Portkey Model`
- API endpoint URL: `https://api.portkey.ai/v1/chat/completions`
- Capabilities: enable Tools, Vision, and Thinking (as needed for your use)
- Maximum context tokens: use your provider’s documented limit; keep defaults if unsure
- Maximum output tokens: set per your usage; adjust later if needed

Step 3: Provide your Portkey API key
- From Manage Models…, select Azure
- Pick the model you just created
- In the API Keys section, paste your Portkey Workspace API Key (the one with the default config from Step 1)
- Save


✅ Copilot is now integrated with Portkey. Your requests will go through Portkey with the configured routing, guardrails, and analytics.
Portkey Features
Now that you have an enterprise-grade GitHub Copilot Chat setup, let’s explore the comprehensive features Portkey provides to ensure secure, efficient, and cost-effective AI agent operations.

1. Comprehensive Metrics
Using Portkey you can track 40+ key metrics including cost, token usage, response time, and performance across all your LLM providers in real time. You can also filter these metrics based on custom metadata that you can set in your configs. Learn more about metadata here.
2. Advanced Logs
Portkey’s logging dashboard provides detailed logs for every request made by GitHub Copilot Chat. These logs include:
- Complete request and response tracking for debugging
- Metadata tags for filtering by team or project
- Cost attribution per task
- Complete conversation history with the AI agent

3. Unified Access to 1600+ LLMs
You can easily switch between 1600+ LLMs. Call various LLMs such as Anthropic, Gemini, Mistral, Azure OpenAI, Google Vertex AI, AWS Bedrock, and many more by simply changing the virtual key in your default config object.
4. Advanced Metadata Tracking
Using Portkey, you can add custom metadata to your LLM requests for detailed tracking and analytics. Use metadata tags to filter logs, track usage, and attribute costs across engineering teams and projects.
Custom Metadata
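As a sketch, metadata is typically passed as a JSON-encoded `x-portkey-metadata` header alongside the request; the tag keys below are hypothetical examples, not required names:

```python
import json

# Hypothetical tags -- use whatever keys your teams filter on in analytics.
metadata = {"_user": "alice", "team": "platform", "project": "copilot-rollout"}

headers = {
    "x-portkey-api-key": "YOUR_PORTKEY_API_KEY",  # placeholder key
    # JSON-encoded metadata, attached to every log entry for this request
    # and filterable in the Portkey dashboard.
    "x-portkey-metadata": json.dumps(metadata),
}
print(headers["x-portkey-metadata"])
```

Separate API keys per team work too; metadata is simply finer-grained.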
5. Enterprise Access Management
Budget Controls
Set and manage spending limits across teams and departments. Control costs with granular budget limits and usage tracking.
Single Sign-On (SSO)
Enterprise-grade SSO integration with support for SAML 2.0, Okta, Azure AD, and custom providers for secure authentication.
Organization Management
Hierarchical organization structure with workspaces, teams, and role-based access control for enterprise-scale deployments.
Access Rules & Audit Logs
Comprehensive access control rules and detailed audit logging for security compliance and usage tracking.
6. Reliability Features
Fallbacks
Automatically switch to backup targets if the primary target fails.
Conditional Routing
Route requests to different targets based on specified conditions.
Load Balancing
Distribute requests across multiple targets based on defined weights.
Caching
Enable caching of responses to improve performance and reduce costs.
Smart Retries
Automatic retry handling with exponential backoff for failed requests.
Budget Limits
Set and manage budget limits across teams and departments. Control costs with granular budget limits and usage tracking.
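Several of these reliability features are driven by the same config object attached to your API key. A minimal fallback sketch, assuming two hypothetical model slugs (primary first, backup second):

```json
{
  "strategy": { "mode": "fallback" },
  "targets": [
    { "override_params": { "model": "@openai-dev/gpt-4o" } },
    { "override_params": { "model": "@anthropic-dev/claude-3-5-sonnet" } }
  ]
}
```

Because the config lives in Portkey rather than in Copilot, you can change routing, retries, or caching without touching the editor setup.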
7. Advanced Guardrails
Protect your codebase and enhance reliability with real-time checks on LLM inputs and outputs. Leverage guardrails to:
- Prevent sensitive code or API key leaks
- Enforce compliance with coding standards
- Detect and mask PII in generated code
- Filter inappropriate code generation
- Apply custom security rules for your organization
- Run compliance checks against internal coding policies
Guardrails
Implement real-time protection for your AI agent interactions with automatic detection and filtering of sensitive content, PII, and custom security rules. Enable comprehensive data protection while maintaining compliance with organizational policies.
FAQs
What endpoint should I use in Copilot?
Use `https://api.portkey.ai/v1/chat/completions` for OpenAI-compatible chat completions.
Which capabilities should I enable?
Enable the ones your workflows need (Tools/Function Calling, Vision, Thinking). You can change them later.
How do I enforce a fixed model or routing policy?
Attach a default config to the API key and optionally disable overrides. See Enforcing Default Configs.
How do I track costs per team?
Issue separate API keys or use metadata. Monitor in the analytics dashboard.
Next Steps
Join our Community
For enterprise support or custom features, contact our enterprise team.