Overview
Choose an enforcement mode for AI moderation and audit the outcomes.
Prerequisites
- OpenAI moderation key present (optional but recommended)
Steps (2)
Step 1: Pick enforcement mode
Set AiPolicy.moderation_mode to block, warn, or allow (via the admin tool or PolicyStore::set()).
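The mode field only accepts the three values named above. A minimal sketch of validating a mode before applying it; the helper name and normalization logic are illustrative assumptions, not part of the product's API (the guide's own AiPolicy.moderation_mode / PolicyStore::set() would do the actual write):

```python
# Illustrative sketch: validate a moderation mode before applying it.
# The values "block", "warn", "allow" come from this guide; the helper
# itself is hypothetical.

VALID_MODES = ("block", "warn", "allow")

def normalize_moderation_mode(mode: str) -> str:
    """Return the normalized mode, or raise if it is not a supported value."""
    normalized = mode.strip().lower()
    if normalized not in VALID_MODES:
        raise ValueError(f"moderation_mode must be one of {VALID_MODES}, got {mode!r}")
    # In a real deployment, this is the point where you would hand the
    # normalized value to the admin tool or a PolicyStore::set() equivalent.
    return normalized

print(normalize_moderation_mode("Block"))  # prints "block"
```

Validating before the write means a typo surfaces as an immediate error instead of silently leaving the policy unchanged.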
Step 2: Review audit logs
Moderation outcomes are recorded in AiAudit; use the CSV export to review them.
Success criteria
- New entries appear when flagged content is processed.
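Once exported, the CSV can be summarized with a short script. A sketch that tallies flagged entries per outcome; the column names ("outcome", "flagged") and the sample rows are assumptions here, so match them to the headers in your actual AiAudit export:

```python
import csv
import io

# Illustrative sketch: tally moderation outcomes from an exported audit CSV.
# Column names and sample data are hypothetical; adjust to your real export.
SAMPLE_EXPORT = """timestamp,outcome,flagged
2024-01-01T10:00:00Z,block,true
2024-01-01T10:05:00Z,allow,false
2024-01-01T10:07:00Z,warn,true
"""

def count_flagged(csv_text: str) -> dict:
    """Count rows per moderation outcome where the flagged column is true."""
    counts: dict = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        if row.get("flagged", "").strip().lower() == "true":
            outcome = row.get("outcome", "unknown")
            counts[outcome] = counts.get(outcome, 0) + 1
    return counts

print(count_flagged(SAMPLE_EXPORT))  # prints {'block': 1, 'warn': 1}
```

A rising count of block/warn entries after processing flagged content is exactly the success criterion above: new entries appear when flagged content is processed.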
About this guide
AI Hub centralizes generative AI for your workspace with a single, policy-aware gateway to multiple providers. Teams get a streamlined chat experience with searchable history and feedback, a minimal Query API for quick prompts, and embeddings for retrieval workflows. Operators gain visibility with usage & cost tracking, quotas, and exportable audit logs.
Choose the best model for each task, fail over between providers, and moderate inputs/outputs with block/warn/allow policies. Keys are encrypted at rest and scoped per workspace. Long-running tasks run on a background worker and broadcast events so other apps can react in real time.
Designed for safety and speed: opinionated defaults, least-privilege access, and drop-in APIs that make it easy to bring AI to every surface of Velaxe.