I’ve been exploring interaction patterns where prompts are not fully specified at the time of input, but instead are constructed iteratively through dialogue.
In these cases, user intent appears underdetermined early in the interaction, with signals such as:
- hedging language (“I think”, “maybe”)
- self-revision (“wait, let me rephrase”)
- incomplete or shifting constraints across turns
This suggests a distinction between:
- pre-structured prompting (intent formed prior to input)
- real-time prompting (intent formed through interaction)
Current model behavior often assumes relatively complete inputs (single-pass interpretation), which can lead to early overcommitment when intent is still evolving.
---
Hypothesis: Underdetermined Prompt Detection
It may be useful to treat early-turn inputs as probabilistic rather than declarative, using lightweight signals to infer when intent is still forming.
Possible signal features:
- linguistic uncertainty markers (hedging, modal verbs)
- structural instability (revisions, restarts, fragmented phrasing)
- low constraint density (few concrete requirements or endpoints)
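These signals could be operationalized as a toy heuristic. The sketch below is illustrative only: the marker lists, weights, and the constraint-density proxy are assumptions for demonstration, not validated features.

```python
import re

# Hypothetical marker lists -- placeholders, not a validated lexicon.
HEDGES = {"maybe", "perhaps", "i think", "i guess", "kind of", "sort of", "like"}
REVISION_MARKERS = {"wait", "actually", "let me rephrase", "i mean"}
MODALS = {"might", "could", "would", "may"}

def underdetermination_score(utterance: str) -> float:
    """Score in [0, 1]: higher suggests intent is still forming."""
    text = utterance.lower()
    tokens = re.findall(r"[a-z']+", text)
    if not tokens:
        return 1.0

    # Crude substring match for hedging / self-revision phrases.
    hedge_hits = sum(phrase in text for phrase in HEDGES | REVISION_MARKERS)
    modal_hits = sum(t in MODALS for t in tokens)
    # Constraint density proxy: concrete-requirement markers lower the score.
    constraints = len(re.findall(r"\b(must|need|exactly|by|specific)\b", text))
    # Trailing-off punctuation as structural instability.
    ellipsis = text.count("...") + text.count("\u2026")

    raw = 0.25 * hedge_hits + 0.15 * modal_hits + 0.2 * ellipsis - 0.3 * constraints
    return max(0.0, min(1.0, 0.3 + raw))
```

A hedged, constraint-free utterance scores high; a request with explicit requirements ("need", "exactly", a deadline) scores low. A real detector would presumably learn these weights rather than hand-set them.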
---
Potential response adaptation
If an underdetermined state is detected, the response strategy could shift:
- from: direct answer generation
- to: reflective or narrowing responses
Examples:
- clarifying interpretation (“Do you mean X or Y?”)
- constraint elicitation
- partial synthesis instead of full resolution
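The strategy switch itself could be as simple as thresholding an upstream underdetermination score in [0, 1]. A minimal sketch (the thresholds and mode names below are placeholder assumptions):

```python
def choose_strategy(score: float) -> str:
    """Map an underdetermination score to a response mode.

    Thresholds are illustrative; in practice they'd be tuned or learned.
    """
    if score >= 0.6:
        return "clarify"            # ask which interpretation is meant
    if score >= 0.3:
        return "partial_synthesis"  # offer a provisional framing, invite correction
    return "direct_answer"          # intent looks settled; answer directly
```

The graded middle band matters: partial synthesis avoids both the overcommitment of a direct answer and the friction of always asking clarifying questions.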
---
Example interaction (current vs adaptive)
User (early-turn, underdetermined):
“I’m trying to figure out something with jobs… like I don’t know, maybe something with how people think or analyze stuff”
---
Typical response (single-pass):
Provides a list of job roles (data analyst, UX researcher, etc.) based on assumed intent.
---
Adaptive response (underdetermined-aware):
“It sounds like you’re still narrowing this down—are you more interested in how people think (psychology/research) or applying that thinking in a role (like analysis or UX)?”
---
Observed effect
- Reduces premature commitment
- Aligns response with user’s actual cognitive state
- Supports iterative clarification rather than correction loops
---
Broader framing
This aligns with a view of interaction where:
> the model participates in cognitive scaffolding, not just response generation
Related patterns:
- decision friction under low reward signal
- speech as externalized working memory
- pattern detection vs overfitting (confidence calibration)
---
Curious if similar approaches exist for:
- detecting underdetermined or evolving intent
- dynamically modulating response certainty
- supporting iterative cognition rather than single-pass resolution