Product · Practical

Assistant Memory Without Privacy Nightmares: Personalisation, Consent and Retention

Amestris — Boutique AI & Technology Consultancy

"Memory" is an attractive feature: the assistant remembers preferences, recurring tasks, and context across sessions. It is also a fast path to privacy risk if teams store more than they should, mix tenants, or make memory invisible to users.

Safe memory design is mostly about constraints: what you store, how you scope it, and how you let users control it.

Use narrow memory types, not one big blob

Most teams get better outcomes by treating memory as a set of separate, narrow mechanisms rather than one accumulating transcript (see context engineering):

  • Preferences. Language, tone, format, and defaults.
  • Task state. Where the user is in a workflow.
  • Facts about the user. Only when explicitly provided and appropriate.
  • Work artefacts. Saved drafts, citations, or summaries linked to a document.

This makes retention, access control, and deletion much easier.
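As a rough sketch of this separation, the records below model each memory type independently. The class and field names are illustrative assumptions, not a prescribed schema.

    # Hypothetical sketch: narrow, typed memory records instead of one free-text blob.
    from dataclasses import dataclass, field
    from datetime import datetime

    @dataclass
    class Preferences:
        """User-stated defaults: easy to show, edit, and delete."""
        language: str = "en"
        tone: str = "neutral"
        output_format: str = "markdown"

    @dataclass
    class TaskState:
        """Where the user is in a workflow; short retention."""
        workflow_id: str
        step: str
        updated_at: datetime = field(default_factory=datetime.utcnow)

    @dataclass
    class UserFact:
        """A fact the user explicitly asked the assistant to remember."""
        statement: str
        provided_at: datetime
        source: str = "user_explicit"  # never inferred silently

    @dataclass
    class WorkArtefact:
        """A saved draft, citation, or summary, linked to its source document."""
        document_id: str
        kind: str            # e.g. "draft", "summary", "citation"
        content_ref: str     # pointer to blob storage, not inline content

Because each type is its own record, retention periods, permissions, and delete flows can be set per type rather than negotiated over one undifferentiated blob.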

Make scopes explicit

Memory should be scoped to the right boundary:

  • Session scope. Kept only within a single interaction.
  • Workspace scope. Shared within a team or tenant with explicit permissions.
  • Personal scope. Stored per user, not shared.

Do not rely on prompt text for scoping. Make scope a first-class field in your data model and cache keys (see multi-tenancy).
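A minimal sketch of scope as a first-class field, assuming a simple composite key; the enum values and key format are illustrative only.

    # Hypothetical sketch: scope as a first-class field, carried into storage and cache keys.
    from dataclasses import dataclass
    from enum import Enum

    class Scope(Enum):
        SESSION = "session"      # discarded when the interaction ends
        PERSONAL = "personal"    # one user, never shared
        WORKSPACE = "workspace"  # shared within a tenant, permission-checked

    @dataclass(frozen=True)
    class MemoryKey:
        tenant_id: str
        scope: Scope
        owner_id: str        # user id, session id, or workspace id depending on scope
        memory_type: str     # "preferences", "task_state", ...

        def cache_key(self) -> str:
            # Tenant and scope come first so keys cannot collide across tenants.
            return f"{self.tenant_id}:{self.scope.value}:{self.owner_id}:{self.memory_type}"

    key = MemoryKey(tenant_id="acme", scope=Scope.PERSONAL,
                    owner_id="user-123", memory_type="preferences")
    print(key.cache_key())  # acme:personal:user-123:preferences

Putting tenant and scope in the key itself means a cache hit can never return another tenant's memory, even if a prompt is crafted to ask for it.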

Collect less and store less

Memory should not become a second analytics pipeline. Apply minimisation:

  • Store structured preferences rather than raw conversations (see the sketch after this list).
  • Redact sensitive fields before storage (see data minimisation).
  • Prefer retrieval over long-term memory for factual knowledge (see knowledge base governance).
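The sketch below illustrates the first two points. The redaction rules and preference extractor are deliberately simplistic assumptions; the shape is what matters: a structured value is stored, the raw conversation turn is not.

    # Hypothetical sketch: extract a structured preference and mask sensitive fields.
    import re

    EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
    PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

    def redact(text: str) -> str:
        """Mask common sensitive fields before anything is persisted or logged."""
        text = EMAIL.sub("[email]", text)
        text = PHONE.sub("[phone]", text)
        return text

    def to_preference(utterance: str) -> dict | None:
        """Store only the structured preference, never the conversation turn."""
        if "bullet points" in utterance.lower():
            return {"output_format": "bullets"}
        return None

    raw = "Reply in bullet points and send a copy to jane@example.com"
    pref = to_preference(raw)   # {"output_format": "bullets"}
    safe_log = redact(raw)      # "... send a copy to [email]"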

Retention and deletion are required features

If you ship memory, you must ship deletion. Define retention per memory type and implement delete flows that remove data from storage, indexes and caches (see retention and deletion).
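One way this can look, assuming a store, search index, and cache that each expose a delete method; the retention periods are placeholders, not recommendations.

    # Hypothetical sketch: retention per memory type, and one delete flow that
    # removes every copy of an item.
    RETENTION_DAYS = {
        "task_state": 7,        # short-lived workflow context
        "preferences": 365,     # kept until the user changes or clears them
        "user_fact": 180,
        "work_artefact": 90,
    }

    def delete_memory(key: str, store, index, cache) -> None:
        """Remove a memory item from primary storage, search index, and cache."""
        store.delete(key)    # primary record
        index.delete(key)    # vector / search index entry
        cache.delete(key)    # any cached copy
        # Record an audit event (who deleted what, when) alongside the deletion.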

Be transparent with users

Personalisation should not feel like surveillance. Use explicit user controls:

  • Show what is remembered and why.
  • Offer "forget" and "clear memory" actions.
  • Explain when memory is off or limited due to policy.

These are trust-building UX patterns (see user transparency).
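A sketch of the data behind these controls, assuming a hypothetical memory store interface. The important property is that "forget" removes every copy, not just the entry shown in the list.

    # Hypothetical sketch: backing a "what do you remember about me?" view and a
    # user-initiated "forget" action.
    def list_memory(user_id: str, store) -> list[dict]:
        """Return each remembered item with a plain-language reason it was stored."""
        return [
            {
                "id": item.id,
                "summary": item.summary,   # e.g. "Prefers replies in French"
                "reason": item.reason,     # e.g. "You asked the assistant to remember this"
                "scope": item.scope,       # personal / workspace / session
            }
            for item in store.items_for(user_id)
        ]

    def forget(user_id: str, item_id: str, store, index, cache) -> None:
        """User-initiated deletion goes through the same full delete flow as retention."""
        key = store.key_for(user_id, item_id)
        store.delete(key)
        index.delete(key)
        cache.delete(key)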

Secure memory like any other sensitive store

Memory stores can leak through logs, caches, or tool outputs. Apply the same DLP and access controls you use elsewhere (see DLP for LLM systems and identity and session security).

Memory is not just a feature. It is a data product. Treat it with the same operational and governance discipline as any other system that stores user information.

Quick answers

What does this article cover?

How to design assistant memory and personalisation without creating privacy, security, or compliance risks.

Who is this for?

Product and engineering teams adding memory or personalisation to AI assistants in customer or employee workflows.

If this topic is relevant to an initiative you are considering, Amestris can provide independent advice or architecture support. Contact hello@amestris.com.au.