Chat Core — Orchestration Engine for Sessions, Routing, Handoff & Governed AI

Chat Core

OpenAI — Integration


Overview

Chat Core's OpenAI integration is a bring-your-own-key model connector. It lets you use OpenAI models for intent classification, summarization, tool use, and reply drafting, all under Chat Core's guardrails and budget controls.

Capabilities

  • Model allowlists & per-team budgets

  • Prompt firewall and redaction policies

  • Tool/function calling support with schema contracts

  • Cost & latency metrics exported to InsightLake
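Tool/function calling with schema contracts follows the OpenAI tool-definition format, where each tool's parameters are declared as JSON Schema. The sketch below shows one such definition plus a minimal required-field check; the `lookup_order` tool and the `validate_args` helper are illustrative examples, not part of Chat Core's API.

```python
# An OpenAI-style tool definition (parameters declared as JSON Schema).
# The tool name and fields are illustrative, not part of Chat Core.
lookup_order_tool = {
    "type": "function",
    "function": {
        "name": "lookup_order",
        "description": "Fetch order status by order ID.",
        "parameters": {
            "type": "object",
            "properties": {
                "order_id": {"type": "string", "description": "Order identifier."},
            },
            "required": ["order_id"],
        },
    },
}

def validate_args(tool: dict, args: dict) -> bool:
    """Check a proposed tool call's arguments against the schema's required fields."""
    schema = tool["function"]["parameters"]
    return all(key in args for key in schema.get("required", []))
```

A schema contract like this lets the pipeline reject malformed tool calls before they reach your backend.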

Setup Steps

  1. In Settings → Models, add an OpenAI provider with your API key.

  2. Select allowed models and set monthly budgets and alerts.

  3. Attach the provider to a pipeline (e.g., intent → tools → reply).

  4. Run evals with replay sets and compare outcomes.
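Replay evals amount to running pipeline variants over recorded conversations and comparing a metric. The sketch below assumes a simple replay format (input text plus expected intent) and scores two toy classifiers; none of this is Chat Core's eval API.

```python
# Hypothetical replay-eval sketch: score two pipeline variants on a recorded
# set of turns. The replay format and pipelines are assumptions.
def run_eval(pipeline, replay_set):
    """Fraction of replayed turns where the pipeline matched the expected intent."""
    hits = sum(1 for turn in replay_set
               if pipeline(turn["input"]) == turn["expected_intent"])
    return hits / len(replay_set)

replay_set = [
    {"input": "where is my package", "expected_intent": "order_status"},
    {"input": "cancel my plan", "expected_intent": "cancellation"},
]

baseline = lambda text: "order_status" if "package" in text else "other"
candidate = lambda text: "order_status" if "package" in text else "cancellation"

print(run_eval(baseline, replay_set), run_eval(candidate, replay_set))  # 0.5 1.0
```

Comparing scores side by side like this is what "compare outcomes" means in step 4.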

Limitations

  • Data residency depends on the provider region; configure it according to your policies.

  • Throughput is subject to provider rate limits and quotas.

FAQs

Can we keep data out of training?

Yes. Use enterprise settings/BYOK and disable data logging where offered.

Can we use multiple providers?

Supported. Route by task or experiment across providers.
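Routing by task can be as simple as a table mapping pipeline stages to provider/model pairs. The table below is an assumption for illustration (the model names are real OpenAI models, but the routing mechanism is not Chat Core's API):

```python
# Hypothetical route-by-task table mapping pipeline stages to
# (provider, model) pairs. Purely illustrative.
ROUTES = {
    "intent": ("openai", "gpt-4o-mini"),
    "summarize": ("openai", "gpt-4o-mini"),
    "reply": ("openai", "gpt-4o"),
}

def route(task: str):
    """Return the (provider, model) pair for a task, with a safe default."""
    return ROUTES.get(task, ("openai", "gpt-4o-mini"))
```

Swapping one entry of the table is also a cheap way to run an A/B experiment across providers for a single stage.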

Pricing

  • Free: free; great for trying the integration.

  • Team: USD 99.00 / month

  • Business: USD 499.00 / month

  • Enterprise: USD 2,500.00 / month