The citation footer
Where the sources go, and how you know they back the claim.
Citations in AI products usually fail in one of two directions. The first is absence: the answer cites nothing and you have to trust it. The second is overload: the answer cites everything, bracketed inline, rendering the prose unreadable. The right answer lives in between, and it's mostly a typography problem.
"A citation is a footnote, not a badge. It belongs below the sentence, not inside it."
Numbered refs up top, tidy tray below.
Numbers ride along the sentence as small, muted superscripts. Clicking one opens a tray below the answer with the full source: title, span, URL. Hovering highlights the matching row. The prose stays readable; the paper trail stays one click away.
The trade-off is between pace and proof. Inline citations maximize proof but wreck pace. Tray-only citations maximize pace but hide the proof. Numbered refs split the difference.
Retention in Q2 held flat at 62%, up from 58% a year ago.¹ The lift came mostly from the onboarding redesign shipped in March,² with qualitative signal confirming the pattern.³
The inline form drowns the sentence. Numbered refs + a tidy tray keep trust and pace together.
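If it helps to see the shape behind the numbered-ref pattern above, here is a minimal TypeScript sketch: a citation record plus a renderer that emits muted superscripts in the prose and a numbered tray below it. The field names (spanLabel, kind) and the markup are assumptions for illustration, not a prescribed schema.

```ts
// One citation = one superscript in the prose + one row in the tray.
type SourceKind = "internal-doc" | "dashboard" | "transcript" | "url";

interface Citation {
  id: number;        // the superscript number shown in the prose
  title: string;     // e.g. "Q2-plan.pdf"
  spanLabel: string; // e.g. "§2, p.4", precise enough to verify in seconds
  url: string;       // what the tray row links to
  kind: SourceKind;  // internal doc, live dashboard, transcript, or URL
}

// Emit superscript markers inline and a separate source tray, rather than
// inline brackets. Plain string templating stands in for whatever component
// framework the product actually uses.
function renderAnswer(sentences: { text: string; cites: Citation[] }[]): string {
  const prose = sentences
    .map(s =>
      s.text +
      s.cites.map(c => `<sup class="cite" data-cite="${c.id}">${c.id}</sup>`).join(""))
    .join(" ");
  const tray = sentences
    .flatMap(s => s.cites)
    .map(c => `<li data-cite="${c.id}">${c.title} · ${c.spanLabel} · ${c.kind} · <a href="${c.url}">open</a></li>`)
    .join("\n");
  return `<p>${prose}</p>\n<ol class="source-tray">\n${tray}\n</ol>`;
}
```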
Readable prose is a trust signal too.
Inline bracketed sources feel rigorous at first, but they tell the user "the model isn't confident enough to just say this." Over fifty sentences, they exhaust the reader. A well-ordered tray communicates the same rigor without sabotaging the reading experience.
Details that turn refs into trust.
- Span precision. Not just "Q2-plan.pdf." Say "Q2-plan.pdf · §2, p.4." The reader can verify in ten seconds instead of ninety.
- Hover pairing. Hovering the superscript highlights the matching tray row and vice versa. The link is the interaction, not a line on the page; a small wiring sketch follows this list.
- Source types. Mark each source by kind — internal doc, live dashboard, transcript, URL. Kind is context that changes how the reader weighs it.
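A rough sketch of that hover pairing, assuming the markup from the earlier sketch: superscripts and tray rows share a data-cite value, and pointer events toggle a highlight class in both directions. The class and attribute names are placeholders, not a required convention.

```ts
// Pair each superscript with its tray row via a shared data-cite value.
// Hovering either element highlights both; leaving clears the highlight.
function wireHoverPairing(root: HTMLElement): void {
  const pairsFor = (id: string) =>
    root.querySelectorAll<HTMLElement>(`[data-cite="${id}"]`);

  root.querySelectorAll<HTMLElement>("[data-cite]").forEach(el => {
    const id = el.dataset.cite!;
    el.addEventListener("mouseenter", () =>
      pairsFor(id).forEach(p => p.classList.add("cite-highlight")));
    el.addEventListener("mouseleave", () =>
      pairsFor(id).forEach(p => p.classList.remove("cite-highlight")));
  });
}
```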
Citations that can't be checked.
Any citation that doesn't link to something verifiable is decoration. Worse, it's decoration that implies a rigor the product isn't providing. Ghost citations — refs that point to sources the user can't open, or that were hallucinated outright — are among the most damaging failure modes in AI products.
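One way to keep ghost citations out of the product is to verify, before a ref is rendered as trustworthy, that the quoted excerpt actually appears in the fetched source text. A minimal sketch, assuming the source text is already available as a string; the normalization is deliberately crude and the function names are made up.

```ts
// Collapse whitespace and case so trivial formatting differences don't
// cause false mismatches. Real matching may need fuzzier comparison
// (tokenized overlap, edit distance, etc.).
const normalize = (s: string) => s.toLowerCase().replace(/\s+/g, " ").trim();

// Only present a citation as verifiable if its excerpt is actually present
// in the source it points at; otherwise flag it so the UI can degrade
// honestly instead of decorating the claim.
function excerptIsBackedBySource(excerpt: string, sourceText: string): boolean {
  return normalize(sourceText).includes(normalize(excerpt));
}
```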
What this pattern gets wrong when it gets wrong.
- Ghost citation. A source is shown but doesn't actually back the claim, or links to a page that doesn't contain the quoted text.
- Citation overload. So many citations that the user stops reading them, which defeats the purpose of having them at all.
- Confidence theater. Language or typography that performs certainty beyond what the model actually has.
Three shipping variants worth copying.
- A superscript number that hovers to reveal a 2-line excerpt
- A 'sources' drawer at the bottom, collapsed by default
- A red underline on citations whose excerpts don't match the sentence (a small flagging sketch follows this list)
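The third variant composes directly with the excerpt check sketched earlier. A rough illustration, reusing the hypothetical excerptIsBackedBySource helper and placeholder class names; the product's red underline would hang off the added class.

```ts
// Walk the rendered superscripts and mark any whose stored excerpt can't
// be found in the fetched source text. "cite-mismatch" is a placeholder
// class name, and both lookup callbacks are assumptions for illustration.
function flagMismatchedCitations(
  root: HTMLElement,
  excerptFor: (citeId: string) => string,
  sourceTextFor: (citeId: string) => string,
): void {
  root.querySelectorAll<HTMLElement>("sup.cite[data-cite]").forEach(sup => {
    const id = sup.dataset.cite!;
    if (!excerptIsBackedBySource(excerptFor(id), sourceTextFor(id))) {
      sup.classList.add("cite-mismatch");
    }
  });
}
```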