Run this playbook on-demand to move Open WebUI feedback into the Portkey feedback API with zero infrastructure or product changes.
## Why this cookbook
Portkey is your control plane for AI observability, governance, and feedback. Open WebUI already captures thumbs-up/thumbs-down signals at the message level. When you need a fast, enterprise-friendly way to push those ratings into the Portkey feedback API, this guide hands you a no-infrastructure approach. For additional automation options, see the Open WebUI integration overview.

## What you’ll build (at a glance)
- One-file script: Choose Python or Node to handle the entire fetch-map-post flow.
- Manual cadence: Trigger the sync whenever leadership wants a fresh pulse.
- Governed ingestion: Enforce consistent `trace_id`, `value`, `weight`, and `metadata` across every payload.
- Zero changes to Open WebUI: Leverage existing export endpoints—no patches, no servers.
## Architecture overview

A single script pulls rated messages from Open WebUI’s existing export endpoints, maps each rating into a Portkey feedback payload, and posts it to the Portkey feedback API: no servers, no queues, no product changes.

## Data mapping for the Portkey feedback API
| Portkey field | Source in Open WebUI | Notes |
|---|---|---|
| `trace_id` | `"{chat_id}:{message_id}"` | Composite identifier that stays unique per message. |
| `value` | `data.rating` | Map 👍 to `+1`, 👎 to `-1`. Other ratings are ignored. |
| `weight` | Constant `1` | Keeps scoring aligned for executive dashboards. |
| `metadata` | `data` ∪ `meta` (plus extras) | Merge native metadata and append `snapshot_chat_id`, `message_index`, `user_id`, etc. |
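Concretely, a 👍 on a message `m7` in chat `c1` would map to a payload shaped like this (all field values are illustrative):

```python
# One mapped payload for the Portkey feedback API (illustrative values).
payload = {
    "trace_id": "c1:m7",  # "{chat_id}:{message_id}", unique per message
    "value": 1,           # 👍 → +1 (a 👎 would map to -1)
    "weight": 1,          # constant weight keeps scoring comparable
    "metadata": {
        "model_id": "llama3:8b",   # merged from the message's data/meta
        "snapshot_chat_id": "c1",  # extras appended by the script
        "message_index": 4,
        "user_id": "u-123",
    },
}
```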
## Prerequisites
- Open WebUI access: Base URL plus a token with permission to read feedback.
- Portkey workspace: API key for the feedback API and optional custom base URL if you self-host Portkey.
## Environment variables
On Windows PowerShell, set variables with:
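A sketch of the required variables (the `OPENWEBUI_*` names here are illustrative assumptions; `PORTKEY_API_KEY` is the name referenced later in this cookbook — match both against the script you use):

```powershell
# Open WebUI connection: base URL plus a token that can read feedback.
$env:OPENWEBUI_URL = "https://openwebui.example.com"
$env:OPENWEBUI_API_KEY = "owui-read-only-token"

# Portkey feedback API key.
$env:PORTKEY_API_KEY = "pk-..."

# Optional: custom base URL if you self-host Portkey.
$env:PORTKEY_BASE_URL = "https://portkey.internal.example.com"
```

On macOS/Linux, use `export NAME=value` instead.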
## Quick start: manual sync without infra

The flow is identical in both runtimes; choose one:

- Python: `owui_to_portkey.py`
- Node.js: `owui_to_portkey.mjs`

Dry-run first to review the payloads before sending them into the Portkey feedback API.
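A typical progression with the Python variant looks like this (the flags are the ones this cookbook’s scripts accept; the epoch value is an example):

```shell
# 1. Preview the mapped payloads without sending anything to Portkey.
python owui_to_portkey.py --dry-run

# 2. Limit the sync to ratings created after a Unix epoch timestamp.
python owui_to_portkey.py --dry-run --since 1735689600

# 3. Drop --dry-run to actually post the feedback to Portkey.
python owui_to_portkey.py --since 1735689600

# 4. If your token can only read your own feedback, add --user-scope.
python owui_to_portkey.py --user-scope --dry-run
```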
## Single-file scripts (copy-paste ready)
- Python: `owui_to_portkey.py`
- Node.js: `owui_to_portkey.mjs`
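The full scripts are longer, but the core fetch-map-post flow can be sketched in Python as follows. The export endpoint path, the environment variable names, and the Portkey `/v1/feedback` route shown here are assumptions to verify against your Open WebUI and Portkey deployments:

```python
import json
import os
import urllib.request

# Assumed variable names; defaults keep the module importable unconfigured.
OWUI_URL = os.environ.get("OPENWEBUI_URL", "")
OWUI_KEY = os.environ.get("OPENWEBUI_API_KEY", "")
PORTKEY_KEY = os.environ.get("PORTKEY_API_KEY", "")
PORTKEY_BASE = os.environ.get("PORTKEY_BASE_URL", "https://api.portkey.ai")


def fetch_feedback():
    """Fetch all feedback records from a hypothetical Open WebUI export endpoint."""
    req = urllib.request.Request(
        f"{OWUI_URL}/api/v1/evaluations/feedbacks/all",
        headers={"Authorization": f"Bearer {OWUI_KEY}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def to_payload(item):
    """Map one Open WebUI feedback record to a Portkey feedback payload.

    Assumes the rating lives at item["data"]["rating"] as +1 / -1.
    """
    rating = item.get("data", {}).get("rating")
    if rating not in (1, -1):
        return None  # ignore anything that is not thumbs up / thumbs down
    return {
        "trace_id": f'{item.get("chat_id")}:{item.get("message_id")}',
        "value": rating,
        "weight": 1,
        "metadata": {
            **item.get("data", {}),
            **item.get("meta", {}),
            "snapshot_chat_id": item.get("chat_id"),
        },
    }


def post_feedback(payload):
    """POST one payload to the Portkey feedback API."""
    req = urllib.request.Request(
        f"{PORTKEY_BASE}/v1/feedback",
        data=json.dumps(payload).encode(),
        method="POST",
        headers={"x-portkey-api-key": PORTKEY_KEY,
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status


def sync(dry_run=True):
    """Fetch, map, and (optionally) post every rated message."""
    for item in fetch_feedback():
        payload = to_payload(item)
        if payload is None:
            continue
        if dry_run:
            print(json.dumps(payload, indent=2))
        else:
            post_feedback(payload)

# Usage after setting the environment variables:
#   sync(dry_run=True)   # preview payloads
#   sync(dry_run=False)  # ship to Portkey
```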
## How to run (dry-run to production)

1. **Dry-run first.** Preview mapped payloads before ingesting them into Portkey.
2. **Filter by timeframe.** Limit ingestion to recent ratings by passing a Unix epoch to `--since`.
3. **Ship to the Portkey feedback API.** Remove `--dry-run` to push ratings into Portkey whenever executives need updated insights.
4. **Switch to user scope when needed.** If your token is limited to personal feedback, add `--user-scope`. The mapping logic stays the same.

## Security & guardrails
- Least privilege: Issue read-only Open WebUI tokens and rotate them regularly.
- Secret hygiene: Set keys via environment variables—never commit them into source control.
- Right-sized metadata: The scripts include light snapshot details (`snapshot_chat_id`). Trim fields if your policies require tighter scoping.
- Idempotent reruns: The `--since` filter keeps repeated executions from flooding Portkey with duplicates.
## Troubleshooting

**401 from Open WebUI**: The scripts automatically retry with `Cookie: token=...`; validate the token scope if issues persist.

**Empty exports**: Confirm that ratings exist and that your Open WebUI user has evaluation access.

**Portkey 4xx/5xx**: Double-check `PORTKEY_API_KEY` and verify the `trace_id`, `value`, `weight`, and `metadata` formats.

**High volume**: Add additional filters (for example by `model_id` or `user_id`) before posting to Portkey.

## FAQ
**Can I expand scoring later?**
Yes. Portkey accepts `value` ranges from `-10` to `10`. Adjust the script’s mapping once you adopt richer scales.

**Can I attach more metadata?**
Absolutely. Append any key/value pairs to `metadata` before the payload posts to Portkey.

**What if I want real-time sync?**
This cookbook is optimized for on-demand runs. Contact the Portkey team for the managed connector when you’re ready for fully automated synchronization.