As soon as an AI system can take actions—send emails, update records, issue refunds—it becomes an operational risk surface. Most incidents occur not because the model was malicious, but because it was confidently wrong.
Approvals are one of the simplest ways to bound that risk. The challenge is designing them to be effective without turning every workflow into bureaucracy.
Enforce approvals in the policy layer
Approvals must be enforced outside the model. The policy layer should require an approval token or human signature before executing high-risk tools (see tool authorisation).
Record who approved what, when, and why—including model/prompt/tool versions (see lineage).
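One way to make that record concrete is an immutable audit entry that captures the approver, the action, the rationale, and the lineage fields in one place. The schema and field names below are an illustrative assumption, not a prescribed format.

```python
import json
from dataclasses import asdict, dataclass

@dataclass(frozen=True)
class ApprovalRecord:
    """One immutable audit entry per approved action (illustrative schema)."""
    approver: str        # who approved
    tool: str            # what was approved
    reason: str          # why it was approved
    approved_at: str     # when (ISO 8601 timestamp)
    model_version: str   # lineage: which model proposed the action
    prompt_version: str  # which prompt revision was in effect
    tool_version: str    # which tool definition was executed

def log_approval(record: ApprovalRecord, sink: list[str]) -> None:
    """Serialise the record as a JSON line; a list stands in for the real log store."""
    sink.append(json.dumps(asdict(record), sort_keys=True))
```

Writing the lineage fields at approval time, rather than reconstructing them later, means an incident review can replay exactly which model, prompt, and tool versions produced the action in question.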