Privacy Threat Modeling for LLM Applications

Amestris — Boutique AI & Technology Consultancy

Privacy risk in LLM applications is rarely a single “model leak.” It is usually a chain: sensitive data enters prompts, gets logged or cached, is retrieved across boundaries, or is exposed through tool output. Threat modeling makes those paths explicit so you can design controls that actually work.

Define your attacker models

Start with realistic adversaries and what each can reach: external users (prompt injection, probing for data surfaced through retrieval or tools), internal users (over-broad access to logs, transcripts, and evaluation data), and vendors/third parties (provider-side retention and onward sharing with subprocessors).
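
As a sketch of how those adversary classes can be kept reviewable alongside the architecture (the surfaces and goals below are illustrative assumptions, not taken from this article), a small structured register works well:

```python
from dataclasses import dataclass, field

@dataclass
class AttackerModel:
    """One adversary class in the privacy threat model (illustrative fields)."""
    name: str
    position: str                                  # e.g. "external", "internal", "third_party"
    reachable_surfaces: list[str] = field(default_factory=list)
    example_goals: list[str] = field(default_factory=list)

# Hypothetical register covering the three adversary classes named above.
ATTACKER_MODELS = [
    AttackerModel(
        name="external user",
        position="external",
        reachable_surfaces=["chat prompt", "tool output shown in the UI"],
        example_goals=["extract other users' data via prompt injection"],
    ),
    AttackerModel(
        name="internal user",
        position="internal",
        reachable_surfaces=["application logs", "evaluation datasets"],
        example_goals=["browse transcripts beyond their role"],
    ),
    AttackerModel(
        name="vendor / third party",
        position="third_party",
        reachable_surfaces=["provider API payloads", "subprocessor storage"],
        example_goals=["retain prompt data longer than agreed"],
    ),
]
```

Keeping the register as data rather than prose makes it easy to diff in review when a new surface (a new tool, a new retrieval source) is added.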

Map the data flow

Trace where data goes across prompts, retrieval, provider APIs, logs, caches, and evaluation datasets. Most privacy failures happen in supporting systems (logs/caches), not in the model itself.
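A minimal sketch of that mapping, assuming flows are maintained by hand as simple tuples (the component names and controls below are illustrative): each edge records what data it carries and whether a control sits in front of it, so paths into logs, caches, and evaluation data are explicit and checkable.

```python
# Hypothetical data-flow map: (source, destination, data carried, control applied).
FLOWS = [
    ("user prompt", "LLM provider API", "free text, may contain PII", "none"),
    ("user prompt", "application logs", "full prompt text", "none"),
    ("retrieval index", "prompt context", "internal documents", "per-user ACL filter"),
    ("LLM response", "response cache", "generated text", "none"),
    ("application logs", "evaluation dataset", "sampled transcripts", "manual review"),
]

def unprotected_sinks(flows):
    """Return flows that land in supporting systems with no control in front of them."""
    supporting = {"application logs", "response cache", "evaluation dataset"}
    return [f for f in flows if f[1] in supporting and f[3] == "none"]

for source, dest, data, control in unprotected_sinks(FLOWS):
    print(f"REVIEW: {data!r} flows from {source} to {dest} with no control")
```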

Turn the threat model into controls and tests

Use layered defenses (see policy layering), adversarial testing (see red teaming), and minimisation at the source (see data minimisation).
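
As one concrete example of minimisation at the source (a simple sketch only; the regexes below are illustrative, and a production system would rely on a dedicated PII detection service), prompts can be redacted before they ever reach logs or caches:

```python
import re

# Illustrative patterns only; real deployments need a proper PII detector.
REDACTION_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\+?\d(?:[\s-]?\d){7,14}"),
}

def redact_for_logging(text: str) -> str:
    """Replace obvious identifiers before text is written to logs or caches."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[REDACTED_{label.upper()}]", text)
    return text

print(redact_for_logging("Contact jane.doe@example.com or +61 400 000 000"))
```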

Privacy threat modeling is not a one-off document. It should be revisited whenever models, prompts, retrieval sources, or tools change.
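
One lightweight way to make that revisit automatic (a sketch assuming prompts, tool configuration, and retrieval configuration live in version-controlled files; the paths below are hypothetical) is a check that fails when those files change without the threat model's recorded fingerprint being updated:

```python
import hashlib
from pathlib import Path

# Hypothetical locations for prompt templates, tool configs, and the review record.
WATCHED = [Path("prompts/"), Path("config/tools.yaml"), Path("config/retrieval.yaml")]
REVIEW_RECORD = Path("threat_model/last_reviewed.sha256")

def current_fingerprint() -> str:
    """Hash the watched files so any change to prompts, tools, or retrieval is visible."""
    digest = hashlib.sha256()
    for root in WATCHED:
        files = sorted(root.rglob("*")) if root.is_dir() else [root]
        for path in files:
            if path.is_file():
                digest.update(path.read_bytes())
    return digest.hexdigest()

def threat_model_is_current() -> bool:
    """True if the recorded review fingerprint matches the current configuration."""
    if not REVIEW_RECORD.exists():
        return False
    return REVIEW_RECORD.read_text().strip() == current_fingerprint()

if __name__ == "__main__":
    if not threat_model_is_current():
        raise SystemExit("Prompts/tools/retrieval changed: revisit the privacy threat model.")
```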

Quick answers

What does this article cover?

How to threat-model privacy risks in LLM applications: data leakage paths, attacker models, and practical mitigations.

Who is this for?

Security and architecture teams designing AI systems that handle sensitive data and need clear, reviewable privacy controls.

If this topic is relevant to an initiative you are considering, Amestris can provide independent advice or architecture support. Contact hello@amestris.com.au.