Hi,
I recently read a conceptual preprint on TechRxiv (DART-Duplex: Linguistic Resonance Engineering for Human-LLM Co-Reasoning) that explains a phenomenon many of us have probably experienced when working with LLMs:
Why longer conversations can lead to real Co-Thinking — not because of a clever prompt, but because the conversation itself evolves.
What’s interesting about this work is that it explicitly argues against prompt best practices as the explanation. Instead, it treats reasoning quality as a trajectory-level property of long, coherent interactions — something that emerges over time, not from isolated prompts.
Here’s a short summary of the core ideas (thanks to ChatGPT):
---
1. There is no “magic prompt”
Single prompts are underdetermined; they don’t sufficiently constrain assumptions.
What actually improves reasoning is a sequence of coherent turns that gradually narrows the space of interpretation.
Trajectories > prompts.
---
2. Meta-communication changes behavior
Asking the model to summarize, reflect on assumptions, or challenge its own conclusions often leads to more structured and coherent reasoning.
This doesn’t add knowledge — it adds structure, and structure matters.
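As a rough illustration of this meta-communication pattern (my sketch, not code from the preprint): after each substantive answer, a reflection turn is injected that asks the model to restate its assumptions and challenge its own conclusion. Here `ask` is a hypothetical stand-in for any chat-completion call that maps a message list to a reply string, and the reflection prompt text is just an example.

```python
# Hypothetical sketch: one dialog turn followed by a structured
# self-reflection turn. `ask` is any callable(messages) -> reply string.

REFLECT = (
    "Before we continue: summarize your answer in two sentences, "
    "list the assumptions it rests on, and name one way it could be wrong."
)

def turn_with_reflection(ask, history, user_msg):
    """Run one substantive turn, then one reflection turn, on shared history."""
    history.append({"role": "user", "content": user_msg})
    answer = ask(history)
    history.append({"role": "assistant", "content": answer})

    # Meta-communication: the model is asked to structure and audit
    # its own previous answer, adding structure rather than knowledge.
    history.append({"role": "user", "content": REFLECT})
    reflection = ask(history)
    history.append({"role": "assistant", "content": reflection})
    return answer, reflection
```

Because the reflection lands in the shared history, later turns build on the audited version of the answer, not just the raw one.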
---
3. Consistency beats cleverness
Frequently switching roles, styles, or objectives tends to collapse depth.
Stable framing across many turns (even with simple prompts) produces better reasoning than constant prompt optimization.
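A minimal sketch of what "stable framing" could look like in practice (my own example, assuming a generic chat API): one fixed system message carries the role and objective for the whole session, and every turn accumulates under it instead of rewriting the prompt each time. `ask` and the framing text are illustrative assumptions, not from the preprint.

```python
# Hypothetical sketch of stable framing: a single unchanging system
# message anchors the whole session; turns accumulate beneath it.

FRAMING = {
    "role": "system",
    "content": (
        "We are debugging a build-performance problem together. "
        "State uncertainty explicitly; keep the same objective and persona."
    ),
}

class StableSession:
    """Accumulates turns under one fixed framing message."""

    def __init__(self, ask):
        self.ask = ask          # any callable(messages) -> reply string
        self.history = [FRAMING]

    def send(self, user_msg):
        self.history.append({"role": "user", "content": user_msg})
        answer = self.ask(self.history)
        self.history.append({"role": "assistant", "content": answer})
        return answer
```

The contrast with constant prompt optimization is that nothing above the latest user message ever changes, so each turn narrows the interpretation space instead of resetting it.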
---
4. Long dialogs surface assumptions
Short interactions often hide uncertainty.
Longer conversations make assumptions visible, expose tensions, and reveal weak points — which is why answers can feel “smarter” over time.
---
5. Reasoning emerges from the interaction
The key idea is that reasoning isn’t just inside the model.
It emerges from the coupled system of user behavior (how uncertainty and correction are handled) and model incentives (coherence, helpfulness).
The same model can reason shallowly or deeply depending on how this interaction is shaped.
---
A quick caveat
Depth amplifies whatever premise is dominant.
Long, coherent reasoning can make bad assumptions very convincing unless counter-arguments and uncertainty are actively invited.
---
Curious whether this matches your experience in longer debugging, design, or research sessions.
BR
Martin