The ambiguity prompt
Asking a clarifying question without breaking flow.
When the model doesn't know which of two answers the user wants, it has three options: guess, hedge, or ask. Most products quietly guess, then spend 300 words hedging about the guess. A good ambiguity prompt replaces both with a single sharp question, embedded right in the response, resolvable in one tap.
The pattern is cheap, and the payoff is disproportionate. You save the user a re-ask, a reread, or a polite abandonment. More importantly, you teach the model that specificity is worth one extra turn.
"A clarifying question is a design opportunity. When it's done right, it feels like a collaborator checking in, not a chatbot stalling."
Inline fork. Two paths. A default.
The response shows the user two framed paths: A and B, each with a one-line preview of what the answer would look like. Picking a path commits the model. Not picking is fine; the model defaults to A after a beat. The interaction is linear, no modal, no stall.
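The mechanics above can be sketched as a small data shape. All names here are illustrative assumptions, not a real API: a fork carries the question, exactly two paths with previews, the default path, and the delay before the default commits.

```typescript
// Hypothetical shape for an inline fork response (all names are illustrative).
interface ForkPath {
  label: string;   // short label: "A" or "B"
  preview: string; // one-line preview of the answer this path produces
}

interface ForkResponse {
  question: string;            // the single sharp clarifying question
  paths: [ForkPath, ForkPath]; // exactly two paths; more becomes a quiz
  defaultIndex: 0 | 1;         // path chosen if the user doesn't tap
  defaultAfterMs: number;      // "a beat" before the default commits
}

const fork: ForkResponse = {
  question:
    "That could mean two things. Pick one, or I'll default to the first in a moment.",
  paths: [
    { label: "A", preview: "Bullet summary of decisions made in the standup" },
    { label: "B", preview: "Per-person recap of blockers and updates" },
  ],
  defaultIndex: 0,
  defaultAfterMs: 4000,
};
```

Constraining `paths` to a two-element tuple encodes the pattern's limit in the type itself: a third path can't be added without changing the shape.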
I need a summary of yesterday's standup.
That could mean two things. Pick one, or I'll default to the first in a moment.
The clarifying question is inline. No modal. No stall. Defaults exist, but the user can nail it in one tap.
Hedging is guessing with a worse interface.
When a model hedges, it's usually because it could have asked and didn't. A hedged answer makes the user do two things: parse the hedge, and re-ask more precisely. An inline fork collapses both into one gesture. The user picks, the model answers cleanly, the thread keeps momentum.
Forks that don't feel like stalls.
- Show previews, not just labels. One line of what each path will produce. Otherwise the choice is a coin flip.
- Always default. If the user walks away, the model should still answer. The fork is optional, not blocking.
- Keep the fork reachable. After the model commits, a small 'back to fork' link returns to the choice. The decision is never trapped.
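Those three rules reduce to a small state machine: a fork is either open or committed, the timeout commits the default without blocking, and reopening is always allowed. A minimal sketch, with hypothetical names:

```typescript
type ForkState =
  | { kind: "forked" }                  // fork shown, waiting on the user
  | { kind: "committed"; path: 0 | 1 }; // model answering along one path

// Minimal sketch of the fork lifecycle (illustrative, not a real component).
class InlineFork {
  private state: ForkState = { kind: "forked" };

  // User taps a path: commit immediately.
  pick(path: 0 | 1): void {
    this.state = { kind: "committed", path };
  }

  // Timer fires with no pick: default to path A (index 0). Never block.
  timeout(): void {
    if (this.state.kind === "forked") {
      this.state = { kind: "committed", path: 0 };
    }
  }

  // "Back to fork": the decision is never trapped.
  reopen(): void {
    this.state = { kind: "forked" };
  }

  current(): ForkState {
    return this.state;
  }
}
```

The guard inside `timeout()` matters: if the user already picked, a late timer must not overwrite their choice.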
Forks that proliferate.
One fork is clarifying. Three forks in a row is a quiz. If the model keeps asking, it's signaling that the prompt surface is broken, not the response. The fix is upstream, in composer design, not another fork.
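That upstream escalation can be made explicit with a simple guard. This is a sketch of one possible policy, with a hypothetical threshold and flag: after two consecutive forks in a thread, stop asking, answer with the default, and record that the composer needs fixing.

```typescript
// Hypothetical guard: consecutive forks signal a broken prompt surface.
const MAX_CONSECUTIVE_FORKS = 2; // illustrative threshold, not a known constant

function shouldFork(consecutiveForks: number): { fork: boolean; flagComposer: boolean } {
  if (consecutiveForks >= MAX_CONSECUTIVE_FORKS) {
    // A third fork in a row is a quiz: answer with the default instead,
    // and flag that the composer, not the response, needs the fix.
    return { fork: false, flagComposer: true };
  }
  return { fork: true, flagComposer: false };
}
```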
What this pattern gets wrong when it gets wrong.
- Confidence theater: language or typography that performs certainty beyond what the model actually has.
- Implicit refusal: declining a request without saying the word no, via a vague answer, a pivot, a change of subject.
- Ambiguous state: running, done, errored, and paused all look the same, so the user has to infer from context.
A shipping variant worth copying.
- A two-path fork inline in the response.
- Each path carries a one-line preview of its answer.
- Tapping a path commits the model to that interpretation.