Overview
Enqueue prompts as background jobs and consume their results; broadcast events signal progress, done, and error states.
Prerequisites
- Cron enabled for backend/cli/ai_worker.php, running every minute (* * * * *); a sample crontab entry follows.
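A sample crontab entry for this prerequisite; the php binary location, project path, and log destination are assumptions to adjust for your install.

```
* * * * * php /var/www/velaxe/backend/cli/ai_worker.php >> /var/log/velaxe/ai_worker.log 2>&1
```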
Permissions required
Steps (3)
1. Enqueue a job
Use JobQueue::push(wsId, provider, model, prompt, files) from your module; a minimal sketch follows.
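A minimal sketch of enqueuing a job from a module. The JobQueue class and the argument order come from this step; the namespace, the example provider/model values, the files shape, and the returned job id are assumptions that may differ in your install.

```php
<?php
// Sketch: enqueue a prompt for background processing.
// Namespace, provider/model values, and the returned job id are assumptions.
use Velaxe\AiHub\JobQueue;

$wsId = 42; // current workspace id (illustrative value)

$jobId = JobQueue::push(
    $wsId,
    'openai',                                                // provider key (assumption)
    'gpt-4o-mini',                                           // model name (assumption)
    'Summarize the attached notes in three bullet points.',  // prompt
    ['/uploads/release-notes.md']                            // optional file references (shape is an assumption)
);

// Keep $jobId so you can look the job up in AiJobs later (step 3).
```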
2. Worker processes jobs
The cron-driven worker calls AiService::handle() and publishes AI.job.progress / AI.job.done / AI.job.error; a sketch of reacting to these events from another app follows.
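One way another app might react to these broadcasts. Only the event names come from this guide; the Events::listen() helper and the payload fields are assumptions standing in for whatever event bus your workspace exposes.

```php
<?php
// Sketch: react to AI Hub broadcasts from another app.
// Events::listen() and the payload keys are assumptions; substitute your
// install's actual event-bus API. Only the event names come from this guide.
use Velaxe\Events\Events;

Events::listen('AI.job.progress', function (array $payload): void {
    // e.g. update a progress indicator for $payload['jobId']
});

Events::listen('AI.job.done', function (array $payload): void {
    // fetch the finished job (step 3) and render its result
});

Events::listen('AI.job.error', function (array $payload): void {
    // surface the failure to the user or retry with another provider
});
```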
3. Fetch results
Read result_type and content from AiJobs, then display the output in your UI or trigger follow-on steps (see the sketch after this step).
Success criteria
- Jobs transition to done with stored usage metadata.
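A minimal sketch of reading a finished job. AiJobs, result_type, content, and the done status come from this guide; the find() accessor, the property access style, and the usage field name are assumptions about your model layer.

```php
<?php
// Sketch: read a completed job's output.
// find(), the property access style, and the usage field name are assumptions.
use Velaxe\AiHub\AiJobs;

$jobId = 123; // id returned by JobQueue::push() in step 1 (illustrative value)

$job = AiJobs::find($jobId);

if ($job !== null && $job->status === 'done') {
    if ($job->result_type === 'text') {
        echo $job->content; // render the model's answer in your UI
    }
    // Usage metadata is stored with the job (see success criteria above);
    // the exact field name (e.g. $job->usage) is an assumption.
}
```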
About this guide
AI Hub centralizes generative AI for your workspace with a single, policy-aware gateway to multiple providers. Teams get a streamlined chat experience with searchable history and feedback, a minimal Query API for quick prompts, and embeddings for retrieval workflows. Operators gain visibility with usage & cost tracking, quotas, and exportable audit logs.
Choose the best model for each task, fail over between providers, and moderate inputs/outputs with block/warn/allow policies. Keys are encrypted at rest and scoped per workspace. Long-running tasks run on a background worker and broadcast events so other apps can react in real time.
Designed for safety and speed: opinionated defaults, least-privilege access, and drop-in APIs that make it easy to bring AI to every surface of Velaxe.