The memory chip
Persistent facts about the user, editable in one click.
Persistent memory is one of the newest and most carelessly designed surfaces in AI products. The model quietly remembers things about you across sessions, and most products either hide that list behind three menus or surface it as an un-editable paragraph. The memory chip pattern treats each remembered fact as a first-class object — pinnable, editable, forgettable.
"Memory you can see is memory you can trust. Pin, edit, forget — three affordances, always visible together."
Each fact, one chip, three controls.
Every remembered fact renders as a small row: the fact itself, the source (where the model learned it), and three controls. A pin toggle. Inline edit. A forget button. Filters at the top let the user narrow to pinned facts. That's the whole feature.
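The chip described above can be sketched as a small data model plus three state transitions, one per control. All names here (`MemoryFact`, `togglePin`, and so on) are illustrative assumptions, not any product's actual API:

```typescript
// Hypothetical shape of one memory chip: the fact, its source, three controls.
interface MemoryFact {
  id: string;
  text: string;                 // the fact itself, e.g. "Prefers metric units"
  source: "told" | "inferred";  // where the model learned it
  learnedAt: string;            // ISO date shown next to the source
  pinned: boolean;
}

// Pin toggle.
function togglePin(fact: MemoryFact): MemoryFact {
  return { ...fact, pinned: !fact.pinned };
}

// Inline edit: replace the text, keep everything else.
function editFact(fact: MemoryFact, text: string): MemoryFact {
  return { ...fact, text };
}

// Forget: the fact is removed outright, not hidden.
function forget(facts: MemoryFact[], id: string): MemoryFact[] {
  return facts.filter((f) => f.id !== id);
}
```

Keeping each control a pure, single-purpose transition is what lets all three stay visible on every chip without the UI growing a settings page.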
The chip is not a settings row. It's a note, editable as easily as a sticky on a fridge.
Memory without visibility is surveillance.
Users experience invisible memory as uncanny. The model suddenly knows a thing they don't remember telling it, and trust collapses. Making memory an open list — with a source for every fact — turns the same information into something the user controls. The feature is the same; the relationship is different.
Details that make memory legible.
- Source on every fact. Where did the model learn this? "You told it · date." "Inferred · date." The user shouldn't have to guess.
- Inline edit. Click the text, fix it, press enter. Editing a fact should take one click, not three.
- Pin for priority. Pinned facts carry more weight, both visually and in the prompt. The user gets real control, not just visual sorting.
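The pin detail is the one with behavior behind it, so here is a minimal sketch of what "weighs heavier in the prompt" could mean: pinned facts sort to the top of the memory block and are explicitly marked. The function name and marker format are assumptions:

```typescript
// Sketch: assemble the memory section of a prompt so pinned facts
// come first and are labeled, giving them real priority in context.
interface Fact {
  text: string;
  pinned: boolean;
}

function memoryPromptBlock(facts: Fact[]): string {
  const pinned = facts.filter((f) => f.pinned);
  const rest = facts.filter((f) => !f.pinned);
  return [
    ...pinned.map((f) => `- (pinned) ${f.text}`),
    ...rest.map((f) => `- ${f.text}`),
  ].join("\n");
}
```

The same ordering should drive the chip list in settings, so what the user sees matches what the model receives.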
Memory that can't be fully forgotten.
Some products expose a memory UI where "forget" only hides the fact from the list; the underlying memory store still holds it, and retrieval can still feed it to the model. The user is promised a clean break and gets a veneer instead. The next time the fact reappears in an answer, the product has broken a promise.
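The failure above is the gap between a soft hide and a hard delete. A minimal sketch, assuming a store with both a visible list and a separate retrieval index (both names hypothetical):

```typescript
// A memory store typically has two faces: what the settings UI renders,
// and what actually gets injected into prompts. Names are illustrative.
interface MemoryStore {
  list: Map<string, string>;           // id -> text shown in the chip shelf
  retrievalIndex: Map<string, string>; // id -> text the model can still read
}

// The broken version: "forget" only touches the UI list.
function softHide(store: MemoryStore, id: string): void {
  store.list.delete(id);
}

// The honest version: "forget" deletes from every store the model reads.
function hardForget(store: MemoryStore, id: string): void {
  store.list.delete(id);
  store.retrievalIndex.delete(id);
}
```

If forgetting cannot reach every store the model reads, the control should not be labeled "forget" at all.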
What this pattern gets wrong when it gets wrong.
- Stale memory
- A persistent fact about the user that's out of date and silently poisoning answers.
- Confidence theater
- Language or typography that performs certainty beyond what the model actually has.
- Leaky context
- Content from another source, session, or user surfacing in a place it shouldn't.
Three shipping variants worth copying.
- A chip shelf in settings, one fact per chip
- Inline edit that writes back to model memory immediately
- A "remind me what you know about me" affordance in the composer
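The second variant, inline edit with immediate write-back, can be sketched as a single commit step: pressing enter updates the store the model reads in the same call that updates the chip. The store shape and function name are assumptions:

```typescript
// Sketch: commit an inline edit straight to model memory, no save dialog.
type FactStore = Map<string, string>; // id -> fact text

function commitEdit(store: FactStore, id: string, text: string): FactStore {
  if (!store.has(id)) {
    throw new Error(`unknown fact: ${id}`);
  }
  store.set(id, text); // one write serves both the UI and the prompt
  return store;
}
```

The design choice worth copying is that there is no separate "apply" step: the chip in settings and the memory the model consults are the same record.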