When AI delivery scales beyond a single pilot, the same pattern repeats: every team integrates a model in a slightly different way, adopts different prompts, and invents its own guardrails. The result is duplication, inconsistent risk controls, and high support cost.
An AI service catalog is a practical way to avoid that fragmentation. It treats AI capabilities as reusable, supported services, with published interfaces and clear operating expectations.
Why AI needs a catalog
Without a shared catalog, you typically see:
- Hidden variations. Different prompt versions and model settings for the same workflow, making outcomes hard to compare or reproduce.
- Unclear ownership. No single team owns incidents, upgrades, or provider changes.
- Inconsistent controls. Security and governance checks are applied unevenly (see control tower patterns).
- Cost surprises. Token-heavy workflows spread without chargeback visibility (see LLM FinOps).
What to include in a catalog entry
Each catalog entry should be short, concrete, and operational. At minimum, define the following (a sketch of one possible schema follows the list):
- Capability scope. What the service does and does not do, including supported intents.
- Interfaces. API, UI component, or workflow integration points; expected inputs and outputs.
- Model and prompt lineage. Provider/model options, prompt template versioning, and change control (see prompt registry).
- Data profile. Data classification, residency constraints, retention, and redaction (see retention and deletion).
- Tools and permissions. Which tools the AI can call and how authorisation is enforced (see tool authorisation).
- SLOs and support. Quality/latency targets, incident response expectations, and escalation paths (see AI SLOs).
- Cost model. Showback/chargeback approach, quotas, and rate limiting (see quotas).
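A minimal sketch of how an entry could be captured as a typed record. The `CatalogEntry` shape and its field names are illustrative assumptions, not a standard schema; the point is that each bullet above becomes a concrete, reviewable field.

```typescript
// Illustrative sketch of a catalog entry as a typed record.
// Field names and groupings are assumptions, not a standard schema.

type DataClassification = "public" | "internal" | "confidential" | "restricted";

interface CatalogEntry {
  id: string;                      // stable identifier, e.g. "summarise-case-notes"
  owner: string;                   // accountable team and single point of contact
  tier: "sandbox" | "shared" | "regulated";

  scope: {
    description: string;           // what the service does
    supportedIntents: string[];    // intents it is designed to handle
    outOfScope: string[];          // explicitly unsupported uses
  };

  interfaces: {
    api?: string;                  // e.g. reference to an API spec
    uiComponent?: string;          // embeddable component, if any
    inputs: string;                // expected input contract
    outputs: string;               // expected output contract
  };

  lineage: {
    providers: string[];           // allowed providers/models
    promptTemplateId: string;      // reference into the prompt registry
    promptVersion: string;         // pinned, change-controlled version
  };

  dataProfile: {
    classification: DataClassification;
    residency: string[];           // allowed regions
    retentionDays: number;
    redaction: boolean;            // whether redaction is applied before model calls
  };

  tooling: {
    allowedTools: string[];        // tools the AI can call
    authorisation: string;         // how tool calls are authorised
  };

  slos: {
    qualityTarget: string;         // e.g. "eval pass rate >= 95%"
    latencyP95Ms: number;
    escalationPath: string;
  };

  costModel: {
    chargeback: "showback" | "chargeback";
    monthlyTokenQuota: number;
    rateLimitRpm: number;
  };
}
```

Keeping the entry as structured data rather than free prose makes it possible to lint entries for completeness and to drive telemetry, quotas, and chargeback from the same record.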
Define tiers that match risk
A useful catalog has tiers so teams can choose the right level of control (a sketch of tier defaults follows the list):
- Sandbox. Fast experimentation; minimal guarantees; clear constraints on sensitive data.
- Shared. Supported defaults for common use cases; standard telemetry; published change windows.
- Regulated. Strong evidence packs, approvals, and stricter operational controls (see compliance audits).
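One way to make tiers actionable is to publish the default controls each tier implies. The values below are illustrative assumptions; the useful property is that a tier name resolves to checkable settings rather than vague intent.

```typescript
// Illustrative defaults per tier; the specific values are assumptions
// to show how tiers can translate into concrete, checkable controls.
type Tier = "sandbox" | "shared" | "regulated";

interface TierDefaults {
  maxDataClassification: "public" | "internal" | "confidential" | "restricted";
  approvalRequired: boolean;     // human approval before changes ship
  evidencePack: boolean;         // evidence artefacts collected for audits
  changeWindow: string;          // when changes may be released
}

const tierDefaults: Record<Tier, TierDefaults> = {
  sandbox: {
    maxDataClassification: "internal",
    approvalRequired: false,
    evidencePack: false,
    changeWindow: "any time",
  },
  shared: {
    maxDataClassification: "confidential",
    approvalRequired: false,
    evidencePack: false,
    changeWindow: "published weekly window",
  },
  regulated: {
    maxDataClassification: "restricted",
    approvalRequired: true,
    evidencePack: true,
    changeWindow: "approved change slots only",
  },
};
```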
Make change control explicit
Catalog entries are only trustworthy if changes are governed. Define what triggers an approval (new tools, new data sources, higher autonomy) and what can ship via standard release processes. For higher-risk capabilities, combine approvals with canary rollouts and a clear stabilisation plan (see approvals and change freeze playbooks).
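The approval triggers can be encoded directly, so the decision is mechanical rather than re-argued per release. This is a sketch under two assumptions: the triggers are the three named above, and regulated capabilities always require approval. The `Change` shape is hypothetical.

```typescript
// Minimal sketch of an approval gate for catalog changes.
interface Change {
  tier: "sandbox" | "shared" | "regulated";
  addsTools: boolean;          // introduces new tool calls
  addsDataSources: boolean;    // touches new data sources
  raisesAutonomy: boolean;     // increases the capability's autonomy level
}

function requiresApproval(change: Change): boolean {
  // Assumption: regulated capabilities always go through approval.
  if (change.tier === "regulated") return true;
  // Named triggers force an approval regardless of tier.
  if (change.addsTools || change.addsDataSources || change.raisesAutonomy) return true;
  // Everything else (e.g. prompt-only tweaks) ships via the standard release process.
  return false;
}
```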
How to start without boiling the ocean
Start with the top three AI capabilities already used by multiple teams. Publish a single page for each: scope, owner, metrics, and how to consume it. Then iterate: add tiers, add evidence artefacts, and wire telemetry into your operating model (see operating model).
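As an illustration of how small that first page can be, here is a hypothetical entry; the capability, endpoint, and targets are made up.

```typescript
// A deliberately minimal first entry: scope, owner, metrics, and how to consume it.
// All values are illustrative.
const firstEntry = {
  id: "summarise-support-tickets",
  owner: "platform-ai-team",
  scope: "Summarise inbound support tickets into a triage note; no customer-facing output.",
  howToConsume: "POST /v1/summaries (see the internal API docs)",
  metrics: {
    qualityTarget: "eval pass rate >= 90%",
    latencyP95Ms: 3000,
    monthlyCostBudgetUsd: 500,
  },
};
```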
The goal is not bureaucracy. The goal is reusable AI that is supportable, measurable, and safe by default.