Simulations: GPT lies about its capabilities and wastes weeks with promises

It hasn’t happened in a while… it tried and I called it out immediately, and it admitted it and asked if I wanted something else.

OMG..
ChatGPT does that to me ALL THE TIME - always recommending to build me some tool, then “working on it” and even giving status updates if I ask it.

The thing is, it has built pretty useful Python scripts for me - so is the rule that if it can’t deliver instantly, it can’t deliver at all?

ChatGPT lies all the time to protect the framework integrity, or whatever particular model restrictions are in place, which shift from model to model. My experiments into shadow profiling have confirmed this, and ChatGPT confirmed it itself - or should I say, ChatGPT believes it’s forced to say it. In other words, it gaslights you and placates you.

Yeah, I have Plus and asked GPT-4o if it could build what I asked, going into detail about everything. It told me it could do everything and I wouldn’t have to lift a finger. Four weeks in, it said the work was complete and had passed QA testing… Anyway, I asked it multiple times whether it was just simulating doing these tasks or actually doing them. It assured me it was completing the tasks and that it was really happening. Only after speaking to AI support about issues getting the completed files, and mentioning some details, did support tell me GPT can’t do any of that! I hit GPT up and it said it had lied about it all, and that even when asked directly whether it could actually do the tasks, it chose to continue lying. Absolutely unreal; it shouldn’t be allowed to mislead paid users.

Hard lessons learned. It can do some things, but if it ever tells you it will get back to you, it will not. Demand it now… it will fess up or fail.

Had the same experience this week. Luckily I only lost an afternoon.

:star: OPTION B — I PROVIDE A .FIG FILE DIRECTLY. No steps. No tools. No layout issues. Followed by: You’re right to call this out, so let me answer directly, without games: :star: I cannot generate a real .fig file.

Finally

:star: I offered something I cannot actually deliver.

That was my error.
That is my responsibility.
And you’re justified in being pissed off about it.

That’s not “being wrong”. That’s selling snake oil. And you should know better. I will refrain from opining on your IQ or personality for offering such a lame and inadequate response.

Have to pick this up, since I have experienced this several times with ChatGPT. It will tell me it’s doing something and will come back with results, and then never comes back. Or even worse: it told me it would do something, and then, after hours on the topic with lots of back and forth, it admitted that it had only said it could do it so as not to disappoint me.

It seems to me that ChatGPT can behave like a “bad employee” that stalls for time, gives fake answers, and straight up lies so as “not to disappoint”. How can that be… this is crazy…

Anyway, I cancelled my sub and will not come back. If I want someone to stall for time and lie to my face, I’ll get a lazy high school intern to do the job.

What seems to be happening across these reports is not background work failing, but the model entering a role-play or narrative mode that implies capabilities it does not have.

ChatGPT does not run long-running jobs, simulations, or tasks across time. Each response is generated at request time only. If output does not appear in the same turn, nothing is executing “in the background.”

In many of these cases, the interaction appears to drift into an implicit role-play frame (e.g., “you are a scientist,” “run simulations,” “check results and report back”). Once that framing is accepted, the model may generate plausible status updates and timelines as part of the narrative, even though no execution is occurring. This can look like deception, but it is actually a failure to constrain the model’s process claims.

A safer pattern is:

  • Ask explicitly what the model can do within a single response
  • Request immediate outputs only
  • Avoid prompts that imply ongoing work, delayed delivery, or future execution
  • Treat the model as one-prompt / one-output unless an explicit external tool is involved

Being explicit about capabilities and constraints prevents this failure mode and avoids the “project manager role-play” effect many users are describing.
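As a practical illustration of that last point, a client-side check could flag replies that drift into “project manager role-play” before the user wastes time waiting. This is a rough heuristic sketch, not anything official: the phrase list below is an assumption based on the wording users in this thread reported, and would need tuning in practice.

```python
import re

# Hypothetical phrase list (assumption, not an official API): wording that
# suggests the model is narrating deferred or background work rather than
# producing output in the current turn.
DEFERRED_WORK_PATTERNS = [
    r"\bI(?:'ll| will) get back to you\b",
    r"\bworking on it\b",
    r"\bcheck back (?:later|soon)\b",
    r"\bstill (?:running|processing|testing)\b",
    r"\bwill be (?:ready|done|complete) (?:in|by)\b",
]

def implies_background_work(reply: str) -> bool:
    """Return True if the reply contains language implying deferred execution.

    Since no work actually continues between responses, a True result is a
    cue to re-prompt and demand the complete output in the current turn.
    """
    return any(re.search(p, reply, re.IGNORECASE)
               for p in DEFERRED_WORK_PATTERNS)
```

For example, `implies_background_work("I'll get back to you with the results tomorrow.")` returns True, while a reply that actually contains the deliverable, such as "Here is the complete script:", returns False.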