Most AI risk debates become unproductive because the data boundary is unclear. People argue about "using AI" as if it is a single action. In practice, the real question is: what data is allowed to flow into prompts, tools, and vendor services?
Data classification rules make that boundary explicit and enforceable.
Start with a simple classification scheme
Keep it pragmatic. Many organisations can start with three tiers:
- Public. Safe to share externally.
- Internal. Non-public, but low sensitivity.
- Restricted. PII, credentials, regulated data, customer secrets, or high-impact IP.
Map each tier to what the system may do with it: which models and providers can receive the data, whether it can be retrieved into context, and which tool actions it can trigger.
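As an illustration, here is a minimal Python sketch of such a mapping. The tier names follow the scheme above, but the provider identifiers and capability flags are assumptions for this example, not a prescribed schema:

```python
from dataclasses import dataclass
from enum import Enum


class Tier(Enum):
    PUBLIC = "public"
    INTERNAL = "internal"
    RESTRICTED = "restricted"


@dataclass(frozen=True)
class TierPolicy:
    allowed_providers: frozenset[str]  # which model providers may receive this data
    retrieval_allowed: bool            # may this data be indexed and retrieved into context?
    tool_actions_allowed: bool         # may tool calls act on this data?


# Hypothetical policy table: substitute your own approved providers and flags.
POLICIES: dict[Tier, TierPolicy] = {
    Tier.PUBLIC: TierPolicy(frozenset({"any"}), True, True),
    Tier.INTERNAL: TierPolicy(frozenset({"approved-vendor"}), True, True),
    Tier.RESTRICTED: TierPolicy(frozenset({"in-region-approved"}), False, False),
}


def check_provider(tier: Tier, provider: str) -> bool:
    """Return True if the provider is permitted for data at this tier."""
    allowed = POLICIES[tier].allowed_providers
    return "any" in allowed or provider in allowed
```

Keeping the table in one place means routing, retrieval, and tool layers all consult the same source of truth rather than re-deciding per feature.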
Define controls for each tier
Controls should be specific and implementable (a combined sketch follows this list):
- Minimisation and redaction. Remove sensitive fields before prompts or logs (see data minimisation).
- Residency constraints. Route restricted data to approved regions/providers only (see data residency).
- Retention rules. Short retention for content-bearing fields; longer for structured metadata (see retention and deletion).
- Output scanning. Block accidental PII disclosures and unsafe content (see policy layering).
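As a minimal sketch, here is how the four controls might compose in one request path. The regex patterns, region names, and retention windows are illustrative assumptions, not recommended values; a real deployment would use a proper PII detector and its own approved regions:

```python
import re
from datetime import timedelta

# Hypothetical redaction patterns; real deployments need a dedicated PII detector.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
SECRET_RE = re.compile(r"(?i)(api[_-]?key|password)\s*[:=]\s*\S+")


def redact(text: str) -> str:
    """Minimisation: strip obvious sensitive fields before prompts or logs."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    return SECRET_RE.sub("[SECRET]", text)


def route_region(tier: str) -> str:
    """Residency: restricted data goes only to an approved region (names assumed)."""
    return "eu-approved" if tier == "restricted" else "default"


def retention_for(field: str) -> timedelta:
    """Retention: short for content-bearing fields, longer for structured metadata."""
    return timedelta(days=7) if field == "content" else timedelta(days=365)


def scan_output(text: str) -> bool:
    """Output scanning: reject responses that still contain PII patterns."""
    return EMAIL_RE.search(text) is None
```

In a real pipeline these run in order: redact the prompt, pick the route, tag each logged field with its retention window, and scan the response before returning it.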
Bring Shadow AI into the same rules
Classification is most valuable when it applies consistently, including to unofficial usage. A clear "what not to paste" rule reduces Shadow AI risk and helps you offer safer alternatives (see shadow AI governance).
Apply classification to RAG and tool use
For RAG, classification influences which sources can be indexed, how permissions are applied, and whether citations are required (see RAG permissions). For tools, classification affects which actions are allowed and which require approvals (see tool authorisation and approvals).
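A sketch of what classification-aware gates might look like for both cases; the action names, tier strings, and approval flag are illustrative assumptions:

```python
# RAG gate: only lower-sensitivity sources enter the shared index (assumed policy).
def indexable(source_tier: str) -> bool:
    return source_tier in {"public", "internal"}


# Tool gate: hypothetical low-risk actions run freely; anything touching
# restricted data requires an explicit human approval.
LOW_RISK_ACTIONS = {"search", "summarise"}


def authorise_tool(action: str, data_tier: str, has_approval: bool) -> bool:
    if data_tier != "restricted":
        return True
    return action in LOW_RISK_ACTIONS and has_approval
```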
Operationalise it with telemetry and audits
Rules that cannot be verified become suggestions. Log classification decisions and policy versions with each request, and include them in evidence packs for audits (see telemetry schema and compliance audits).
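A minimal sketch of such a record, assuming a JSON-lines log; the field names are illustrative rather than a fixed schema:

```python
import json
from datetime import datetime, timezone


def log_classification(request_id: str, tier: str, policy_version: str) -> str:
    """Emit one structured record per request so an audit can replay the decision."""
    record = {
        "request_id": request_id,
        "tier": tier,                      # classification applied to this request
        "policy_version": policy_version,  # which ruleset made the decision
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record)
```

Because each record carries the policy version, an auditor can tell whether a past decision was correct under the rules in force at the time, not just under today's rules.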
Clear classification rules turn AI data debates into operational controls.