Privacy risk in LLM applications is rarely a single “model leak.” It is usually a chain: sensitive data enters prompts, is logged, is cached, is retrieved across boundaries, or is exposed through tool output. Threat modeling makes those paths explicit so you can design controls that actually work.
Define your attacker models
Start with realistic adversaries rather than abstract ones: external users who control what goes into prompts, internal users who can read logs, traces, and retrieval indexes, and vendors/third parties (such as model providers and observability tooling) who handle data in transit or at rest. For each, note what they can already see and what they would have to do to see more.
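To keep these adversaries from staying abstract, they can be written down as structured entries that the rest of the threat model refers back to. A minimal sketch in Python; the field names and example entries are illustrative assumptions, not a standard schema:

```python
from dataclasses import dataclass, field

@dataclass
class AttackerModel:
    """One adversary class in the privacy threat model."""
    name: str
    # What the adversary can already reach without exploiting anything.
    baseline_access: list[str] = field(default_factory=list)
    # Privacy-relevant things they might try to obtain.
    targets: list[str] = field(default_factory=list)

ATTACKER_MODELS = [
    AttackerModel(
        name="external user",
        baseline_access=["chat interface", "tool output shown in responses"],
        targets=["other users' data via prompt injection or retrieval leakage"],
    ),
    AttackerModel(
        name="internal user",
        baseline_access=["application logs", "traces", "evaluation datasets"],
        targets=["raw prompts containing personal data"],
    ),
    AttackerModel(
        name="vendor / third party",
        baseline_access=["provider API payloads", "observability exports"],
        targets=["retention or secondary use of submitted data"],
    ),
]
```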
Map the data flow
Trace where data goes across prompts, retrieval, provider APIs, logs, caches, and evaluation datasets. Most privacy failures happen in supporting systems (logs/caches), not in the model itself.
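One way to make the map testable is to record each hop as data and flag hops that may carry personal data but have no stated control. A sketch under assumed names; the sources, sinks, and controls below are examples, not a prescribed inventory:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DataFlow:
    """One hop in the data flow: where personal data can travel and under what control."""
    source: str
    sink: str
    may_contain_pii: bool
    control: Optional[str] = None  # e.g. "redaction", "TTL 24h", "access-gated"

FLOWS = [
    DataFlow("user prompt", "provider API", True, control="DPA + no-training flag"),
    DataFlow("user prompt", "application logs", True, control=None),
    DataFlow("retrieval index", "prompt context", True, control="tenant-scoped filter"),
    DataFlow("provider response", "response cache", True, control="per-user cache key"),
    DataFlow("production traffic", "evaluation dataset", True, control=None),
]

def unmitigated(flows: list) -> list:
    """Flows that may carry personal data but have no stated control."""
    return [f for f in flows if f.may_contain_pii and f.control is None]

for flow in unmitigated(FLOWS):
    print(f"UNMITIGATED: {flow.source} -> {flow.sink}")
```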
Turn the threat model into controls and tests
Use layered defenses (see policy layering), adversarial testing (see red teaming), and minimisation at the source (see data minimisation). Each exposure path identified in the data flow should map to at least one control and at least one test that exercises it.
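As one example of minimisation at the source, identifiers can be redacted before text reaches any downstream sink (prompt, log, cache). The regexes and function names below are illustrative only; a real deployment would rely on a dedicated PII-detection library and locale-aware rules:

```python
import logging
import re

# Illustrative patterns only; real systems need far more robust detection.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def minimize(text: str) -> str:
    """Redact obvious identifiers before the text reaches prompts, logs, or caches."""
    return PHONE.sub("[PHONE]", EMAIL.sub("[EMAIL]", text))

def log_interaction(logger: logging.Logger, prompt: str, response: str) -> None:
    """Apply minimisation at the boundary so downstream sinks never see raw identifiers."""
    logger.info("prompt=%s response=%s", minimize(prompt), minimize(response))

logging.basicConfig(level=logging.INFO)
log_interaction(logging.getLogger("llm-app"), "Email me at jane@example.com", "Done.")
```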
Privacy threat modeling is not a one-off document. It should be revisited whenever models, prompts, retrieval sources, or tools change.
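Revisiting can be triggered mechanically rather than by memory: record a fingerprint of the privacy-relevant configuration alongside the threat model, and flag a review when the running system no longer matches it. A sketch; the config keys and recorded value are placeholders:

```python
import hashlib
import json

# Keys assumed to capture the privacy-relevant surface; adjust to your system.
RELEVANT_KEYS = ("model", "system_prompt", "retrieval_sources", "tools")

def fingerprint(config: dict) -> str:
    """Stable hash over the parts of the system the threat model depends on."""
    relevant = {k: config.get(k) for k in RELEVANT_KEYS}
    return hashlib.sha256(json.dumps(relevant, sort_keys=True).encode()).hexdigest()

def needs_review(current_config: dict, reviewed_fingerprint: str) -> bool:
    """True when the deployed configuration has drifted since the last review."""
    return fingerprint(current_config) != reviewed_fingerprint
```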