OpenTelemetry (OTel) is a Cloud Native Computing Foundation (CNCF) open-source framework. It provides a standardized way to collect, process, and export telemetry data (traces, metrics, and logs) from your applications. This is vital for monitoring performance, debugging issues, and understanding complex system behavior. Many popular AI development tools and SDKs, like the Vercel AI SDK, LlamaIndex, OpenLLMetry, and Logfire, utilize OpenTelemetry for observability. Portkey now embraces OTel, allowing you to send telemetry data from any OTel-compatible source directly into Portkey’s observability platform.

The Portkey Advantage: Gateway Intelligence Meets Full-Stack Observability

Portkey’s strength lies in its unique combination of an intelligent LLM Gateway and a powerful Observability backend.
  • Enriched Data from the Gateway: Your LLM calls routed through the Portkey Gateway are automatically enriched with deep contextual information—virtual keys, caching status, retry attempts, prompt versions, and more. This data flows seamlessly into Portkey Observability.
  • Holistic View with OpenTelemetry: By adding an OTel endpoint, Portkey now ingests traces and logs from your entire application stack, not just the LLM calls. Instrument your frontend, backend services, databases, and any other component with OTel, and send that data to Portkey.
This combination provides an unparalleled, end-to-end view of your LLM application’s performance, cost, and behavior. You can correlate application-level events with specific LLM interactions managed by the Portkey Gateway.

How OpenTelemetry Data Flows to Portkey

The following flow shows how telemetry data from your instrumented applications and from the Portkey Gateway itself is consolidated within Portkey Observability:
  1. Your Application Code is instrumented using OTel Instrumentation Libraries.
  2. This telemetry data (traces, logs) can be sent to the Portkey OTel Backend Endpoint.
  3. Simultaneously, LLM calls made via the Portkey Gateway generate their own rich, structured telemetry.
  4. All this data is consolidated in the Portkey Observability Stack, giving you a unified view.
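To make the gateway path concrete, here is a minimal sketch using the Portkey Python SDK; the API key and virtual key values are placeholders. Calls routed this way are captured with enriched metadata automatically, with no OTel instrumentation required:

from portkey_ai import Portkey

# LLM call routed through the Portkey Gateway; the Gateway records
# enriched telemetry (virtual key, cache status, retries) on its own.
client = Portkey(
    api_key="YOUR_PORTKEY_API_KEY",    # placeholder
    virtual_key="YOUR_VIRTUAL_KEY",    # placeholder: provider credentials stored in Portkey
)

completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}],
)
print(completion.choices[0].message.content)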

Setting Up Portkey as an OpenTelemetry Backend

To send your OpenTelemetry data to Portkey, configure your OTel exporter to point to Portkey's OTLP endpoint and provide your Portkey API Key for authentication.

Key Environment Variables:
# Portkey's OTLP HTTP Endpoint for traces and logs
OTEL_EXPORTER_OTLP_ENDPOINT="https://api.portkey.ai/v1/otel"
# Your Portkey API Key (ensure it's a Server Key)
OTEL_EXPORTER_OTLP_HEADERS="x-portkey-api-key=YOUR_PORTKEY_API_KEY"
Replace YOUR_PORTKEY_API_KEY with your actual Portkey API Key found in your Portkey Dashboard.
Signal-Specific Endpoints: If your OTel collector or SDK strictly requires signal-specific endpoints, use:
  • For Traces: OTEL_EXPORTER_OTLP_TRACES_ENDPOINT="https://api.portkey.ai/v1/otel/v1/traces"
  • For Logs: OTEL_EXPORTER_OTLP_LOGS_ENDPOINT="https://api.portkey.ai/v1/otel/v1/logs"
Remember to include the OTEL_EXPORTER_OTLP_HEADERS with your API key for these as well.
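If you prefer to configure the exporter in code rather than through environment variables, here is a minimal sketch using the OpenTelemetry Python SDK; it assumes the opentelemetry-sdk and opentelemetry-exporter-otlp-proto-http packages are installed, and the API key is a placeholder:

from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

# Export spans over OTLP/HTTP to Portkey's signal-specific traces endpoint.
exporter = OTLPSpanExporter(
    endpoint="https://api.portkey.ai/v1/otel/v1/traces",
    headers={"x-portkey-api-key": "YOUR_PORTKEY_API_KEY"},  # placeholder
)

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(provider)

# Every span created from here on is batched and sent to Portkey.
tracer = trace.get_tracer("my-app")
with tracer.start_as_current_span("smoke-test"):
    pass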

Viewing Traces

Once configured, your OpenTelemetry traces appear in the Portkey dashboard with full visibility for your AI application:
OpenTelemetry traces in Portkey

GenAI Semantic Conventions Support

Portkey automatically enriches OpenTelemetry traces with cost and token metrics following the GenAI Semantic Conventions. When you send traces with GenAI attributes, Portkey automatically:
  • Extracts Token Counts: Reads gen_ai.usage.input_tokens and gen_ai.usage.output_tokens from trace attributes
  • Calculates Costs: Automatically computes costs based on token usage and model pricing
  • Identifies Models & Providers: Extracts model information from gen_ai.request.model or gen_ai.response.model and provider from gen_ai.system
  • Enriches Analytics: Makes all trace data available for cost attribution and usage analysis
This means traces sent from OpenTelemetry-instrumented applications automatically get the same cost tracking and analytics as requests made directly through the Portkey Gateway.
This feature is particularly powerful for applications using frameworks like LangChain, LlamaIndex, or other tools with built-in OpenTelemetry instrumentation that follow GenAI semantic conventions.
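As an illustration, here is a hand-instrumented sketch of a span carrying those GenAI attributes; it reuses the tracer from the setup example above, and the model and token counts are made-up values:

# Attach GenAI semantic-convention attributes so Portkey can derive
# provider, model, token usage, and cost for this span.
with tracer.start_as_current_span("chat gpt-4o") as span:
    span.set_attribute("gen_ai.system", "openai")           # provider
    span.set_attribute("gen_ai.request.model", "gpt-4o")    # requested model
    span.set_attribute("gen_ai.usage.input_tokens", 42)     # example value
    span.set_attribute("gen_ai.usage.output_tokens", 128)   # example value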

Why Use OpenTelemetry with Portkey?

Portkey's OTel backend is compatible with any OTel-compliant library, for GenAI and general application observability alike. This gives you:

  • Language Agnostic: Works with any programming language that supports OpenTelemetry, including Python, JavaScript, Java, Go, and more.
  • Framework Support: Compatible with all major LLM frameworks through their OTel instrumentation.
  • Zero Code Changes: Many libraries offer auto-instrumentation that requires no changes to your application code (see the sketch after this list).
  • Standards-Based: Built on industry-standard protocols, ensuring long-term compatibility.
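As one example of the zero-code path, the OpenTelemetry Python distro can auto-instrument an application from the command line. This sketch assumes a Python app in app.py; the packages and commands are from the standard opentelemetry-distro tooling, and the API key is a placeholder:

pip install opentelemetry-distro opentelemetry-exporter-otlp
opentelemetry-bootstrap -a install

OTEL_EXPORTER_OTLP_ENDPOINT="https://api.portkey.ai/v1/otel" \
OTEL_EXPORTER_OTLP_HEADERS="x-portkey-api-key=YOUR_PORTKEY_API_KEY" \
opentelemetry-instrument python app.py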
Navigate to the Logs page to view your traces, filter by various attributes, and drill down into specific requests.

Supported OTel Libraries

Portkey works as a backend for any OTel-compliant instrumentation library. Popular options for GenAI and general application observability include the Vercel AI SDK, LlamaIndex, OpenLLMetry, and Logfire.

Getting Started

  1. Get your Portkey API key: Sign up for Portkey and grab your API key from the settings page.
  2. Choose an instrumentation library: Pick from the supported integrations based on your stack.
  3. Configure the endpoint: Point your OTel exporter to https://api.portkey.ai/v1/otel with your API key.
  4. Start tracing: Run your application and view traces in the Portkey dashboard.

Experimental Features

Push Logs to an OpenTelemetry-Compatible Endpoint [Enterprise/Self-Hosted]

OpenTelemetry conventions for GenAI traces are still under development and have not been widely adopted, so this feature remains experimental.
When enabled, Portkey Gateway pushes logs to an OpenTelemetry-compatible endpoint following the experimental semantic conventions for GenAI. This includes rich attributes such as:
  • Request attributes: gen_ai.request.model, gen_ai.request.max_tokens, gen_ai.request.temperature, etc.
  • Response attributes: gen_ai.response.id, gen_ai.response.model, gen_ai.response.input_tokens, gen_ai.response.output_tokens, etc.
  • Provider information: gen_ai.provider.name
  • Prompt and output messages: gen_ai.prompt.*, gen_ai.output.messages
  • Tool definitions: gen_ai.tool.definitions
To push logs to an OpenTelemetry-compatible endpoint, set the following environment variables in your deployment configuration. The example below targets LangSmith, but any OpenTelemetry-compatible endpoint is supported.
EXPERIMENTAL_OTEL_TRACES_ENABLED: true
EXPERIMENTAL_OTEL_EXPORTER_OTLP_ENDPOINT: https://api.smith.langchain.com/otel
EXPERIMENTAL_OTEL_EXPORTER_OTLP_HEADERS: x-api-key=langsmith-api-key
Environment Variables:
  • EXPERIMENTAL_OTEL_TRACES_ENABLED: Set to true to enable pushing logs to an OpenTelemetry endpoint
  • EXPERIMENTAL_OTEL_EXPORTER_OTLP_ENDPOINT: The OpenTelemetry OTLP endpoint URL (e.g., https://api.smith.langchain.com/otel)
  • EXPERIMENTAL_OTEL_EXPORTER_OTLP_HEADERS: Comma-separated list of headers in the format key=value (e.g., x-api-key=langsmith-api-key)
The logs are pushed to the endpoint specified in EXPERIMENTAL_OTEL_EXPORTER_OTLP_ENDPOINT at the /v1/traces path.