Overview
Call a single endpoint to generate summaries, drafts, translations, or tool outputs with usage metering.
Problem
Integrating multiple LLM SDKs complicates deployment, billing, and policy enforcement.
Solution
Use AI Hub’s provider-agnostic Query API to call OpenAI, Anthropic, Gemini, or Mistral with one payload and unified usage tracking.
How it works
POST the provider, model, and prompt (plus optional images) to the Query API in a single request. Each response includes the result type and content, plus usage data for dashboards and quota enforcement.
Who is this for
- Engineering teams shipping LLM-powered features who want to avoid maintaining multiple provider SDKs
Expected outcomes
- Faster time to ship AI features
- Lower maintenance and vendor lock-in risk
Key metrics
- Integration time: baseline 24 hours, target 2 hours
- AI incidents due to SDK changes: baseline 5 per quarter, target 0 per quarter
Case studies
B2B portal ships AI summaries in a sprint
One endpoint replaced three SDKs; deployed to production within 48 hours.
Security impact
- Prompt payloads and outputs are stored in logs per the retention policy; no PII is collected or retained.
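The retention note above can be sketched as log records that carry their own expiry. The retention window, field names, and helper functions here are hypothetical, illustrating one way a retention policy could be enforced.

```python
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 30  # assumed retention window; set per your policy


def log_entry(prompt: str, output: str) -> dict:
    """Build a log record stamped with its retention expiry."""
    now = datetime.now(timezone.utc)
    return {
        "prompt": prompt,
        "output": output,
        "logged_at": now.isoformat(),
        "expires_at": (now + timedelta(days=RETENTION_DAYS)).isoformat(),
    }


def purge_expired(entries: list, now: datetime) -> list:
    """Drop records whose retention window has elapsed."""
    return [e for e in entries if datetime.fromisoformat(e["expires_at"]) > now]
```

Embedding the expiry in each record keeps the purge job a simple filter, with no lookup against a separate policy table.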
Compliance
- SOC2