AI Hub

Run long-running AI tasks via the queue worker

Enqueue prompts and consume results; events broadcast progress/done/error.

15 min · Intermediate · Developer, Ops · Updated Sep 19, 2025
[Screenshot: Queue dashboard showing FIFO jobs with status and results]

Overview

Enqueue prompts from your module, let the background worker process them, and consume the results; the worker broadcasts AI.job.progress, AI.job.done, and AI.job.error events so your code and other apps can react as each job runs.

Prerequisites

  • Cron enabled for backend/cli/ai_worker.php, scheduled every minute (* * * * *)
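
    For example, a crontab entry along these lines; the php binary and install path are assumptions for illustration:

        * * * * * php /var/www/velaxe/backend/cli/ai_worker.php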

Permissions required

ai.backend.call

Steps (3)

Estimated: 15 min
  1. Enqueue a job

    Developer · 5 min

    Use JobQueue::push(wsId, provider, model, prompt, files) from your module to enqueue a prompt for background processing; see the sketch below.

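    A minimal sketch of enqueueing a job. The Velaxe\AiHub namespace, the example argument values, and the returned job ID are assumptions; adapt them to your module:

    <?php
    use Velaxe\AiHub\JobQueue; // namespace is an assumption

    // $wsId and $fileId come from elsewhere in your module.
    $jobId = JobQueue::push(
        $wsId,          // workspace the job is scoped to
        'openai',       // provider name (illustrative value)
        'gpt-4o-mini',  // model identifier (illustrative value)
        'Summarize the attached report in five bullet points.',
        [$fileId]       // optional file references (shape assumed)
    );
    // The job now waits in the FIFO queue until the worker picks it up.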

  2. Worker processes jobs

    Ops · 5 min

    On each run, the worker takes queued jobs in FIFO order, calls AiService::handle(), and publishes AI.job.progress / AI.job.done / AI.job.error as each job advances; see the sketch below.

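    Conceptually, one pass of ai_worker.php looks like this; claimNextJob() and publish() are hypothetical stand-ins for the worker's real internals, shown only to illustrate the event flow:

    <?php
    // Hypothetical sketch: claimNextJob() and publish() are illustrative names.
    while ($job = claimNextJob()) {          // oldest queued job first (FIFO)
        publish('AI.job.progress', ['id' => $job->id]);
        try {
            AiService::handle($job);         // argument shape assumed
            publish('AI.job.done', ['id' => $job->id]);
        } catch (\Throwable $e) {
            publish('AI.job.error', ['id' => $job->id, 'error' => $e->getMessage()]);
        }
    }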

  3. Fetch results

    Developer · 5 min

    Read result_type and content for the job in AiJobs; display the output in your UI or trigger follow-on steps, as sketched below.
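
    One way to read a finished job; AiJobs::find() and the status field are assumed accessors, while result_type, content, and the done state come from this guide:

    <?php
    $job = AiJobs::find($jobId);       // accessor name is an assumption
    if ($job !== null && $job->status === 'done') {
        if ($job->result_type === 'text') {
            echo $job->content;        // render in your UI
        }
        // ...or pass $job->content to a follow-on step in your workflow.
    }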

    Success criteria

    • Jobs transition to done with stored usage metadata.

About this guide

AI Hub centralizes generative AI for your workspace with a single, policy-aware gateway to multiple providers. Teams get a streamlined chat experience with searchable history and feedback, a minimal Query API for quick prompts, and embeddings for retrieval workflows. Operators gain visibility with usage & cost tracking, quotas, and exportable audit logs.

Choose the best model for each task, fail over between providers, and moderate inputs/outputs with block/warn/allow policies. Keys are encrypted at rest and scoped per workspace. Long-running tasks run on a background worker and broadcast events so other apps can react in real time.

Designed for safety and speed: opinionated defaults, least-privilege access, and drop-in APIs that make it easy to bring AI to every surface of Velaxe.