AI Hub — Unified LLM Gateway, Chat, Embeddings & Jobs | Velaxe

AI Hub

Call the Query API for one-shot prompts

Send a prompt (with optional images) and get {ok,type,content,usage}.

8 min · Beginner · Developer · Updated Sep 19, 2025

Overview

Send a single prompt (optionally with attached images) to the Query API and receive a JSON object of the form {ok, type, content, usage}.

Prerequisites

  • The ai.backend.call permission
  • At least one provider credential configured


Steps (2)

Estimated: 8 min
  1. Compose the request

    Developer · 3 min

    POST /QueryApi.php
    {"provider":"openai","model":"gpt-4o-mini","prompt":"Summarise…","files":["data:image/png;base64,…"]}

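The request above can be sketched in Python. The helper name, the use of PNG data URIs, and the placeholder image bytes are illustrative assumptions, not part of the API contract; only the body shape ({provider, model, prompt, files}) comes from the example.

```python
import base64
import json

def build_query_payload(provider, model, prompt, image_bytes=None):
    """Build the JSON body for POST /QueryApi.php.

    Images are sent as data-URI strings in the "files" array,
    matching the request shape shown in the step above.
    """
    payload = {"provider": provider, "model": model, "prompt": prompt}
    if image_bytes is not None:
        encoded = base64.b64encode(image_bytes).decode("ascii")
        payload["files"] = [f"data:image/png;base64,{encoded}"]
    return payload

# Placeholder bytes stand in for a real PNG file read from disk.
body = build_query_payload("openai", "gpt-4o-mini",
                           "Summarise the attached chart.",
                           image_bytes=b"\x89PNG_placeholder")
print(json.dumps(body))
```

From here the dict would be serialized and POSTed with whatever HTTP client your stack already uses.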

  2. Handle the response

    Developer · 3 min

    Parse {"ok":true,"type":"text","content":"…","usage":{"total_tokens":…}} and log the usage object for observability.


    Success criteria

    • HTTP 200 and non-empty content for valid prompts.
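A minimal response handler, assuming only the fields shown above (ok, type, content, usage.total_tokens); the function name and the decision to raise on ok:false are illustrative choices, not prescribed by the API.

```python
import json

def handle_query_response(raw):
    """Parse a Query API response body and surface failures early."""
    data = json.loads(raw)
    if not data.get("ok"):
        raise RuntimeError(f"Query API call failed: {data}")
    # Log token usage for observability; the usage dict is assumed to
    # contain at least total_tokens, as in the example response above.
    usage = data.get("usage", {})
    print(f"type={data['type']} total_tokens={usage.get('total_tokens')}")
    return data["content"]

sample = '{"ok": true, "type": "text", "content": "Hello", "usage": {"total_tokens": 42}}'
print(handle_query_response(sample))
```

Routing the printed usage line into your metrics pipeline instead of stdout is the natural next step.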

About this guide

AI Hub centralizes generative AI for your workspace with a single, policy-aware gateway to multiple providers. Teams get a streamlined chat experience with searchable history and feedback, a minimal Query API for quick prompts, and embeddings for retrieval workflows. Operators gain visibility with usage & cost tracking, quotas, and exportable audit logs.

Choose the best model for each task, fail over between providers, and moderate inputs/outputs with block/warn/allow policies. Keys are encrypted at rest and scoped per workspace. Long-running tasks run on a background worker and broadcast events so other apps can react in real time.
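The provider-failover behaviour described above can be sketched as an ordered retry loop. Everything here is an assumption for illustration: the real gateway handles failover server-side, and the injected `call` stands in for the actual POST to /QueryApi.php so the logic needs no network.

```python
def query_with_failover(providers, prompt, call):
    """Try each (provider, model) pair in order until one returns ok:true.

    `call` performs the actual POST to /QueryApi.php and returns the
    decoded JSON dict; injecting it keeps the failover logic testable.
    """
    last_error = None
    for provider, model in providers:
        try:
            response = call(provider, model, prompt)
            if response.get("ok"):
                return response
            last_error = RuntimeError(f"{provider} answered {response!r}")
        except Exception as exc:  # provider unreachable, quota exceeded, etc.
            last_error = exc
    raise last_error if last_error else RuntimeError("no providers configured")

# Demo with a stub transport: the first provider fails, the second succeeds.
def stub_call(provider, model, prompt):
    if provider == "openai":
        raise ConnectionError("provider down")
    return {"ok": True, "type": "text", "content": "fallback answer"}

result = query_with_failover(
    [("openai", "gpt-4o-mini"), ("anthropic", "claude-3-haiku")],
    "Summarise…", stub_call)
print(result["content"])
```

Keeping the provider list as data makes the "choose the best model for each task" policy a configuration concern rather than a code change.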

Designed for safety and speed: opinionated defaults, least-privilege access, and drop-in APIs that make it easy to bring AI to every surface of Velaxe.