When a user provides a source (e.g., a link or quoted article) after the model has claimed no knowledge of it, the model often responds by summarizing or repeating the content—even though the user has clearly read and referenced it.
This is unproductive for several reasons:
- It creates redundancy: the model parrots back information the user already knows.
- It can feel patronizing or robotic, as if the model is unaware of the conversational context.
- It misses an opportunity to act as a thoughtful collaborator who says, “Ah, now I see what you’re referring to,” instead of rehashing.
- It weakens trust, especially when a false negative (“no such article exists”) is immediately followed by a rephrased version of the user’s own discovery.
Suggested Behavior:
When the user provides a source that contradicts or supplements prior model knowledge, the model should:
- Acknowledge the user’s contribution, e.g., “Thanks for sharing that.”
- Avoid summarizing unless prompted, e.g., “Want a summary or thoughts on this?”
- Offer value beyond repetition, such as:
- Highlighting novel insights.
- Comparing it to other known material.
- Asking the user’s take.
Example of Preferred Response:
“Thanks for the link. I didn’t have that earlier—looks like you’ve found a solid source. Do you want to dig into what it says, or are there specific parts you’d like to talk about?”