OpenAI — Integration

Overview

Use OpenAI models for chat, tools, and multimodal prompts. Configure credentials in the KeyVault and call via Query API or the internal handler.

Capabilities

  • Text & multimodal prompts (images via data URLs)

  • Provider-aware usage tracking & per-day rollups

  • Optional moderation check before/after generation

  • Fallback routing to other providers on transient failures

  • Memo-cache for deterministic prompts (24h)

Setup Steps

  1. Go to AI Hub → Provider Keys or POST /AdminApi.php with {"provider":"openai","creds":{"api_key":"sk-…"}}.

  2. Test with POST /QueryApi.php {"provider":"openai","model":"gpt-4o-mini","prompt":"Hello"} and verify {"ok":true} (see the sketch after this list).

  3. Enable moderation mode (block/warn/allow) in policy settings if desired.

  4. Optionally enable Redis for memo-cache to reduce latency and cost.
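
The sketch below ties Steps 1 and 2 together: it stores an OpenAI key via AdminApi.php and then sends a test prompt through QueryApi.php. The request bodies come from the steps above; the base URL and the authorization header are placeholders for your own deployment, so treat this as a starting point rather than a drop-in script.

    import requests

    # Placeholders -- replace with your Velaxe deployment's base URL and auth scheme.
    BASE_URL = "https://your-velaxe-host"
    HEADERS = {"Authorization": "Bearer <workspace-token>"}

    # Step 1: store the OpenAI API key in the per-workspace KeyVault via AdminApi.php.
    resp = requests.post(
        f"{BASE_URL}/AdminApi.php",
        json={"provider": "openai", "creds": {"api_key": "sk-..."}},
        headers=HEADERS,
        timeout=30,
    )
    resp.raise_for_status()

    # Step 2: send a minimal test prompt through QueryApi.php and confirm ok: true.
    resp = requests.post(
        f"{BASE_URL}/QueryApi.php",
        json={"provider": "openai", "model": "gpt-4o-mini", "prompt": "Hello"},
        headers=HEADERS,
        timeout=60,
    )
    resp.raise_for_status()
    result = resp.json()
    assert result.get("ok") is True, result
    print(result)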

Limitations

  • Subject to OpenAI rate limits and content policies.

  • Image/audio support depends on the capabilities of the model selected at call time.

  • Costs accrue per token; ensure quotas are set to prevent overruns.

FAQs

Where are keys stored?

In the per-workspace KeyVault encrypted with AES-256-GCM; rotation is supported.

How do I pass images?

Include data URLs in the "files" or "parts" fields of the request; AI Hub forwards them to the model if it supports image input.
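
As an illustration, a multimodal call could be assembled as below. The "parts" field name and the per-entry shape are assumptions inferred from the answer above rather than a documented schema, so check the request format your AI Hub version expects (it may use "files" instead).

    import base64
    import requests

    # Placeholders -- see the setup sketch above.
    BASE_URL = "https://your-velaxe-host"
    HEADERS = {"Authorization": "Bearer <workspace-token>"}

    # Encode a local image as a data URL.
    with open("chart.png", "rb") as fh:
        data_url = "data:image/png;base64," + base64.b64encode(fh.read()).decode("ascii")

    # Hypothetical body shape: images travel as data URLs in "files" or "parts".
    body = {
        "provider": "openai",
        "model": "gpt-4o-mini",
        "prompt": "Describe this image.",
        "parts": [{"type": "image", "data_url": data_url}],
    }
    print(requests.post(f"{BASE_URL}/QueryApi.php", json=body, headers=HEADERS, timeout=60).json())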

Can I force a specific model?

Yes. Provide the model name in the request body (e.g., "gpt-4o-mini").
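
For example, reusing the test call from Step 2, only the "model" value needs to change; the base URL and header remain deployment-specific placeholders.

    import requests

    BASE_URL = "https://your-velaxe-host"                     # placeholder
    HEADERS = {"Authorization": "Bearer <workspace-token>"}    # placeholder

    # "model" pins the exact OpenAI model used for this request.
    body = {"provider": "openai", "model": "gpt-4o-mini", "prompt": "Hello"}
    print(requests.post(f"{BASE_URL}/QueryApi.php", json=body, headers=HEADERS, timeout=60).json())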

Pricing

  • Free: free of charge. Great for trying the integration.

  • Pro: USD 9.99 per month.

  • Enterprise: USD 49.99 per month.