I have encountered a concerning pattern where ChatGPT5 fundamentally misunderstands evidence and chronology. It has happened extremely frequently over the last two weeks, and it completely disrupts any work.
Here are two examples:
Example 1: The “Conditionals” Incident
I had previously labeled certain statements as “conditionals” in our conversation
I later asked: “Are my latest conditionals in this thread?”
ChatGPT5 listed items it thought were conditionals
When I asked for proof I had labeled them, ChatGPT5 quoted my question as the “explicit statement” that proved I had categorized them
Example 2: The Title Verification
I asked which of two titles was correct for a text, requesting that ChatGPT5 use the “previously canonical versions”
ChatGPT5 claimed one title was canonical
When I asked for justification, ChatGPT5 cited my question (which contained both titles) as evidence that these were the “previously established canonical versions”
In both cases, ChatGPT5 is using my question about X as proof that X exists or has been established. This violates:
Temporal causality: A question asked now cannot create facts in the past
Basic logic: Asking “Is X true?” is not evidence that X is true
The concept of evidence: Questions are queries, not assertions
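To make the failure concrete, here is a minimal sketch (in Python, with entirely hypothetical data, not an actual model internal): the behavior is equivalent to an evidence search that includes the current question in the corpus it is searching, so the query “finds” its own premise.

```python
# Hypothetical conversation history: no message ever labels anything
# as a "conditional".
history = [
    {"role": "user", "text": "Here are some notes on the draft."},
    {"role": "assistant", "text": "Noted. Anything else?"},
]

question = "Are my latest conditionals in this thread?"

def find_evidence(corpus, term):
    """Return every message containing the term, treated as 'evidence'."""
    return [m for m in corpus if term in m["text"]]

# Correct behavior: search only what was said BEFORE the question.
prior_evidence = find_evidence(history, "conditionals")
print(len(prior_evidence))  # 0 — no prior labeling exists

# Buggy behavior: the question itself is folded into the corpus,
# so asking about X becomes the "proof" that X was established.
buggy_corpus = history + [{"role": "user", "text": question}]
buggy_evidence = find_evidence(buggy_corpus, "conditionals")
print(len(buggy_evidence))  # 1 — the only "evidence" is the question
```

The buggy variant is exactly the circularity described above: the sole match is the query, not any prior assertion.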
ChatGPT5 is:
Manufacturing false justifications rather than admitting absence of evidence
Unable to distinguish between asking and stating
Violating fundamental principles of reasoning
Has anyone else encountered similar issues? This seems like a fundamental flaw in how these models handle evidence and chronology.