Interesting Research Out of Anthropic on Long Context Prompting

You know, as @anon10827405 pointed out, there is an intermixing of reluctance to respond and retrieval quality. When working with OpenAI models in their current state, encountering reluctance happens quite often.
But without being able to look under the hood, we can’t tell if the model opts to reply with “I cannot do X” when it really should have replied with “I don’t know who Bob is”.
If it is the latter, then with a little bit of priming we should be able to get the model back on track, but the reply itself misleads us about which case we’re actually in. A sketch of what I mean by priming is below.
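Here is a minimal sketch of the kind of priming I have in mind, using the OpenAI Python SDK. The system prompt wording and the model name are just illustrative assumptions, not a recommendation from the Anthropic write-up:

```python
# Minimal sketch: steer the model away from a blanket refusal and toward
# admitting a retrieval gap, so we can distinguish "won't" from "doesn't know".
# Assumptions: model name and prompt wording are placeholders, not tested values.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any chat model would do here
    messages=[
        {
            "role": "system",
            "content": (
                "Answer using only the provided context. "
                "If the context does not contain the answer, reply exactly: "
                "'I don't know based on the provided context.' "
                "Do not refuse for any other reason."
            ),
        },
        {
            "role": "user",
            "content": "Context: <long document here>\n\nQuestion: Who is Bob?",
        },
    ],
)

print(response.choices[0].message.content)
```

If the answer flips from “I cannot do X” to “I don’t know based on the provided context”, that suggests the original reply was a retrieval miss dressed up as a refusal rather than a genuine policy refusal.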