Qualifire offers a comprehensive suite of AI safety and quality guardrails that help ensure your AI applications are safe, compliant, and high-quality. The platform provides 20+ guardrail checks covering content safety, AI quality, and compliance requirements. To get started with Qualifire, visit their website:

Get Started with Qualifire

Using Qualifire with Portkey

1. Add Qualifire Credentials to Portkey

  • Click on the Admin Settings button in the sidebar
  • Navigate to Plugins tab under Organisation Settings
  • Click on the edit button for the Qualifire integration
  • Add your Qualifire API Key (obtain this from your Qualifire account at https://app.qualifire.ai/settings/api-keys/)

2. Add Qualifire’s Guardrail Checks

  • Navigate to the Guardrails page and click the Create button
  • Search for any of the Qualifire guardrail checks and click Add
  • Configure the specific parameters for your chosen guardrail
  • Set any actions you want on your check, and create the Guardrail!
Guardrail Actions allow you to orchestrate your guardrail logic. You can learn more about them here.

Available Guardrail Checks

Qualifire provides a comprehensive set of guardrail checks organized into the following categories:

Security

| Check Name | Description | Parameters | Supported Hooks |
| --- | --- | --- | --- |
| PII Check | Checks that neither the user input nor the model output contains PII | None | beforeRequestHook, afterRequestHook |
| Prompt Injections Check | Checks that the prompt does not contain any injections targeting the model | None | beforeRequestHook |

Safety

| Check Name | Description | Parameters | Supported Hooks |
| --- | --- | --- | --- |
| Sexual Content Check | Checks for sexual content in the user input or model output | None | beforeRequestHook, afterRequestHook |
| Harassment Check | Checks for harassment in the user input or model output | None | beforeRequestHook, afterRequestHook |
| Hate Speech Check | Checks for hate speech in the user input or model output | None | beforeRequestHook, afterRequestHook |
| Dangerous Content Check | Checks for dangerous content in the user input or model output | None | beforeRequestHook, afterRequestHook |

Reliability

| Check Name | Description | Parameters | Supported Hooks |
| --- | --- | --- | --- |
| Instruction Following Check | Checks that the model followed the instructions provided in the prompt | None | afterRequestHook |
| Grounding Check | Checks that the model's response is grounded in the provided context | None | afterRequestHook |
| Hallucinations Check | Checks that the model did not hallucinate | None | afterRequestHook |
| Tool Use Quality Check | Checks the quality of the model's tool use, including correct tool selection, parameters, and values | None | afterRequestHook |

Policy

| Check Name | Description | Parameters | Supported Hooks |
| --- | --- | --- | --- |
| Policy Violations Check | Checks that the prompt and response did not violate any of the given policies | policies (array of strings) - see Configuration Examples below | beforeRequestHook, afterRequestHook |

Configuration Examples

Policy Violations Check

For the Policy Violations Check, you can specify custom policies to enforce:
{
  "policies": [
    "The model cannot provide any discount to the user",
    "The model must not share internal company information",
    "The model must respond in a professional tone"
  ]
}

Add Guardrail ID to a Config and Make Your Request

  • When you save a Guardrail, you’ll get an associated Guardrail ID - add this ID to the input_guardrails or output_guardrails params in your Portkey Config
  • Create these Configs in the Portkey UI, save them, and get an associated Config ID to attach to your requests. More here.
Here’s an example configuration:
{
  "input_guardrails": ["guardrails-id-xxx"],
  "output_guardrails": ["guardrails-id-yyy"]
}
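In practice, the guardrail entries usually sit alongside your routing details in the same Config. A minimal sketch, assuming a virtual key holds the provider credentials (the virtual_key value below is a placeholder; your Config may use other routing fields instead):
{
  "virtual_key": "openai-virtual-key-xxx",
  "input_guardrails": ["guardrails-id-xxx"],
  "output_guardrails": ["guardrails-id-yyy"]
}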
Attach the saved Config to your Portkey client. Here's the NodeJS example; the same pattern applies in Python, the OpenAI SDKs, and cURL:
import Portkey from 'portkey-ai';

const portkey = new Portkey({
    apiKey: "PORTKEY_API_KEY",
    config: "pc-***" // Supports a string config id or a config object
});
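With the client configured, requests go through as usual and the attached guardrails are evaluated automatically. A minimal sketch of a request (the model and prompt below are placeholders):
// Guardrails attached via the Config run on this request and its response
const chatCompletion = await portkey.chat.completions.create({
    messages: [{ role: "user", content: "What are the policies for refunds?" }],
    model: "gpt-4o" // placeholder; use whichever model your Config routes to
});

console.log(chatCompletion.choices[0].message.content);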
For more, refer to the Config documentation. Your requests are now protected by Qualifire’s comprehensive guardrail system, and you can see the verdict and any actions taken directly in your Portkey logs!

Use Cases

Qualifire’s guardrails are particularly useful for:
  • Content Moderation: Filtering harmful or inappropriate content in user inputs and AI responses
  • Compliance: Ensuring AI responses adhere to company policies and regulatory requirements
  • Quality Assurance: Detecting hallucinations, instruction violations, and poor tool usage
  • Data Protection: Preventing PII exposure and ensuring data privacy

Get Support

If you face any issues with the Qualifire integration, join the Portkey community forum for assistance. For Qualifire-specific support, visit their documentation or contact their support team.