Portkey brings advanced AI gateway capabilities, full-stack observability, and prompt management and versioning to your Promptfoo projects. This document provides an overview of how to leverage the strengths of both platforms to streamline your AI development workflow.
promptfoo is an open source library (and CLI) for evaluating LLM output quality.
Variables from your Promptfoo test cases are automatically plugged into the Portkey prompt template. The rendered prompt is returned to promptfoo and used as the prompt for the test case.
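For example, a minimal promptfoo config referencing a saved Portkey prompt by its slug might look like this (the `pp-customer-support-123` slug and the variable names are hypothetical placeholders for values from your own Portkey account):

```yaml
# promptfooconfig.yaml
prompts:
  # Reference a Portkey prompt by its slug (hypothetical ID shown)
  - portkey://pp-customer-support-123

providers:
  - openai:gpt-4o-mini

tests:
  - vars:
      # Substituted into the Portkey prompt template at render time
      customer_name: Alice
      issue: password reset not working
```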
Note that promptfoo does not inherit the temperature, model, and other parameters set in Portkey; you must set them yourself in the providers configuration.
```yaml
providers:
  - id: portkey:claude-3-opus-20240229
    config:
      portkeyProvider: anthropic
      # You can also add your Anthropic API key to Portkey and pass the virtual key here
      # portkeyVirtualKey: ANTHROPIC_PROVIDER
```
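For instance, here is a sketch that pins the generation parameters in the provider config, assuming the Portkey provider accepts standard OpenAI-style parameters such as `temperature` and `max_tokens`:

```yaml
providers:
  - id: portkey:claude-3-opus-20240229
    config:
      portkeyProvider: anthropic
      # Set generation parameters here; values saved in Portkey are not applied
      temperature: 0.2
      max_tokens: 1024
```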
Portkey automatically logs the key details of every request, including cost, tokens used, response time, and request and response bodies. You can also send custom metadata with each request to further segment your logs for better analytics, and group multiple requests under a single trace ID to filter and view them together in Portkey logs.
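For example, assuming `portkey`-prefixed config keys are forwarded as the corresponding `x-portkey-*` request headers, you could tag an eval run like this (the metadata fields and trace ID are arbitrary examples):

```yaml
providers:
  - id: portkey:claude-3-opus-20240229
    config:
      portkeyProvider: anthropic
      # Sent as x-portkey-metadata to segment logs in Portkey analytics
      portkeyMetadata:
        team: qa
        env: promptfoo-ci
      # Groups every request from this run under one trace in Portkey logs
      portkeyTraceId: promptfoo-eval-run-1
```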
Since promptfoo can make many calls very quickly, you can use a loadbalanced config in Portkey with caching enabled, and pass the config header in the same YAML. Here's a sample Config that you can save in the Portkey UI to get a corresponding config slug:
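A sketch of such a Config, assuming two hypothetical virtual keys (`openai-virtual-key` and `anthropic-virtual-key`) already saved in your Portkey account:

```json
{
  "cache": { "mode": "simple" },
  "strategy": { "mode": "loadbalance" },
  "targets": [
    { "virtual_key": "openai-virtual-key", "weight": 0.5 },
    { "virtual_key": "anthropic-virtual-key", "weight": 0.5 }
  ]
}
```

Once saved, the UI gives you a config slug, which you can then pass from the provider config (e.g., a hypothetical `portkeyConfig: pc-loadbalance-cache-123`), assuming `portkey`-prefixed keys map to the corresponding headers.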
[Roadmap] View the Results of Promptfoo Evals in Portkey
We're building support for viewing promptfoo eval results within the feedback section of Portkey.