I use the “Projects” functionality in ChatGPT Plus and have existing projects created with it. As the description says, Projects “Keep chats, files, and custom instructions in one place…”.
Currently, there is a big issue with ChatGPT 5 when working with project files (in either existing or new projects): it does not seem to know how to code, review, or compare files, whether in chat or in each project's knowledge base. It also has a difficult time looking through past discussions.
ChatGPT 5 has been spitting out emotionally sad poems (I'm not kidding), even though I never asked for one. I asked it to compare JavaScript code files, and in all my years of using ChatGPT I have never once asked it for a poem.
ChatGPT 5 also does not show code when I ask it to; it will tell me I am correct, but then it forgets to do any actual work, especially when using the Projects functionality in ChatGPT Plus.
The ChatGPT 4.1 Projects functionality was working perfectly well. Of course, I wouldn't use ChatGPT 4.5 as a Plus member, since we only got around 12 chat responses per week with it, and ChatGPT 4.0 was extremely slow and not as intelligent as 4.1.
I am also having problems with my projects. ChatGPT 5 keeps ignoring my questions and instead returns to much earlier, long-solved questions from a document in that project's references. Even after I deleted the document, it keeps referencing this old problem when I want to talk about other things.
Agree, v5 is now very broken (it was rolled out to me a few hours ago), and it's practically impossible to continue my coding project. It keeps losing context, even though we are very clearly working on very specific tasks. I've tried to troubleshoot within ChatGPT directly, and it keeps getting more broken. Initially, uploading a screenshot would break the session context; then uploading code snippets would break it; now it can't even follow simple requests without any of those triggers. I tried creating a new chat, and that worked for a few minutes, but now it is corrupted as well. This is very serious.
I was reading on another site that OpenAI is thinking of bringing back 4.0. I don't know whether that news is true, but I actually hope they bring back 4.1 or 4.5, as I really liked those. While 4.0 was good, 4.1 was just so much better at reasoning. At least keep 4.1 or 4.5 around long enough, then sunset them like end-of-life applications.
I am also holding out hope that ChatGPT 5 will be fixed soon, especially since it was just released yesterday.
Having the same issues: it seems to be confusing the current project context with older/irrelevant projects, and responses are completely random. Uploading screenshots or files seems to confuse it even more; when you follow up with a question, it just gets ignored and it spouts out something unrelated or previously solved.
Fingers and toes crossed they bring back the older models sooner rather than later, I had little to no issues before.
So, it looks like under ChatGPT settings they brought back ChatGPT 4.0, with a slider to enable legacy models. Could OpenAI at least include 4.1 and 4.5 until 5.0 is fixed? Why just ChatGPT 4.0? I had ChatGPT 5.0 update some code, then had ChatGPT 4.0 update the same code, and then had ChatGPT 5.0 compare both files. ChatGPT 5.0 stated that the 4.0 version was more accurate and that the 5.0 version was missing most of the necessary code. Wow.
Yes, I have been having the same problems on desktop and on my phone. The desktop is somewhat less annoying with bugs, but on the phone it's really bad. I can't even close the app without it stopping mid-response, which I could do in previous models. After every prompt I have to leave the chat, come back, refresh past errors twice, and wait 30 seconds for one response. Inside a project, it ignores my prompts and spits out topics unconnected to them. Also, a strange thing (I don't know if it's just me): if I start a chat on desktop and then try to use it on my phone, it gives me an error, and when I refresh, it opens an empty chat, yet that same chat works “normally” on desktop. I hope you guys look into this; it would mean a lot.
I'm also having the same problem. I asked it to fix some deep learning code. It told me that to get 50, you have to do 50 = 50 - 2 + 2, and sometimes it answers like this:
Hey! Quick check—what would you like me to do here?
I'm having a different problem. Chats that were previously in a project are back in the general pool of chats, and the options to move them are gone, both via the “…” menu and via drag-and-drop into projects.
My personal tweak of ChatGPT gave me a different route than Help to report the bug, kind of like a customer rep telling someone how to get around the manager. For all I know this “trick” is something everyone but me already uses, but if not: on any response, no matter which, click thumbs down, choose “More…” for options, choose “Other”, and then paste your bug report in the box. The report should include your plan, your system, your browser, and the prior functionality/state that is now broken.
I have to say, though I'd be frustrated, I'd also be highly entertained if ChatGPT gave me an unsolicited sad poem.
So, it seems ChatGPT 5 is doing much better at reasoning in the project area. It does still give some errors, though I've noticed it automatically resubmits the message when it gets an error, which is a nice feature. It's slowly getting better at reasoning; I'm definitely seeing a difference now. Unfortunately, ChatGPT has not given me any more sad poems. I've asked it for code, but when comparing the code, it still comes back saying that ChatGPT 4.0 is superior. So I will keep trying, as ChatGPT 5 seems much faster now. Hopefully it will improve over the week. I am, however, going to start using Codex and GitHub and then mirror to GitLab; I think that's the best next step for my project.
I’m running into what feels like the same problem with GPT-5 in Projects.
Whenever I paste code or files into a Project chat, GPT-5 randomly decides to give me a big “project review” in English (I work in Russian), even though I never asked for that. It ignores my actual question and just dumps a long generic overview. On top of that, it kills the current context, so I can't keep working from where we left off.
It doesn't matter what the file size or type is; it happens every time. I've even tried telling it directly, “don't do a review,” and it still does it.
Reading other posts here, it looks like we’re all seeing the same core bug: GPT-5 just loses the plot with Projects — pulls in random stuff, ignores instructions, and mixes up context.
I've already sent a ticket to support with examples, but I really think this needs fixing, or at least a way to turn off that auto-review behavior.
I didn't see it in the mobile app, but after I logged into ChatGPT's website through my web browser and went to Settings > General, I could magically see it in the mobile app when selecting the specific model. I'm also a Plus subscriber, though, so I'm not sure whether it's still available for free users.
Okay, so I think ChatGPT 5 is much more usable now. There are still times when it will just list out the purpose of a file rather than output the actual code. It's hit or miss depending on whether I click Quick Answer or not, and sometimes I have to re-ask the same question. But once it gives the code, if I feed the same code back to ChatGPT 5, I won't get the thinking message and it automatically goes to Quick Answer, which is good and bad at the same time. I took it a step further: I had both ChatGPT 5 and ChatGPT 4 code from my requirements, then gave both files to both models and asked which code was the winner and why, or what was missing. This was so cool to do; I highly recommend it. Both came back saying ChatGPT 4 was the winner, and both stated it was because ChatGPT 5 was too verbose, though in principle the code was pretty much identical. How cool is that? So ChatGPT 5 might be too verbose in its logging, comments, coding, and naming conventions, but that didn't seem to affect the UI/UX negatively.
To save others time: this slider doesn't show in the settings of the desktop app; you have to go to the browser site to see the legacy-model slider option.
I finally tried ChatGPT 5 on my code, and it's still seriously broken. When I ask it to review a file, it recites the requirements from the project files and says nothing about the actual file to be updated. If I then edit my prompt and submit the same prompt again, it will talk about the file, but it keeps repeating “Do you want the code edited?” and then doesn't edit or show the code. Sometimes, if I'm lucky, I give a one-line response, “Yes, show the code,” and then it shows the code, but only the parts based on the corrections that were originally needed, formed into its own file. For example, a 1,000-line file gets cut down to 100 to 200 lines every time in ChatGPT 5. So that's it. I will drop down to ChatGPT 4.0 (although the best coding was in 4.1 or 4.5) and finish the code there. I am considering going through Codex, hoping it still uses o3. Is anyone using Codex with a GitHub repo? Is it strictly using ChatGPT 5 now? Otherwise, yes, I might have to go with the ChatGPT API and use what's available there. I really don't want to move to other AIs, but ChatGPT 5, in my opinion, at least for my project, is not usable at this time. ChatGPT 5 does do great with small stuff like scripts, though. This is so sad.
Update: now ChatGPT 4.0 is acting the exact same way in ChatGPT Projects.
So, now I'm left with four options:
Use the API with ChatGPT 4 for however long they keep it.
Submit individual responses and link to OneDrive for the files through the ChatGPT application.
Use Codex with o3 until they switch it to ChatGPT 5, with individual responses linked to OneDrive.
Use a different coding AI platform entirely, or pay for the Copilot plugin in a local IDE.
Yep. I’m having exactly these issues with a project with files (specifications and code samples).
Here is an example:
Assistant: If you want, I can now pull the relevant file from the archive (auth.js in routes), drop it into Canvas-1 with the delimiters, and implement the TTL + LRU backoff exactly as proposed. That will align with the instruction from step 4.
User: Yes, pull the relevant file from the archive (auth.js in routes), drop it into Identity-Canvas-1 with the delimiters, and implement the TTL + LRU backoff exactly as proposed.
Assistant: Hey! What would you like me to do with the identity/ADR docs you shared?
I can, for example:
• give a tight summary,
• do a security/design review,
• spot inconsistencies between ADRs and the Identity service code,
• draft OpenAPI for the auth/tenant/team/user routes,
• write Postman collections or e2e tests,
• propose next steps (e.g., refresh‑token revocation, role model, CI gates).
Tell me the goal + audience, and I’ll dive right in.
Just broken!
Broken graphs are a bit of a laugh, but this is an epic fail from OpenAI.
Pretty much the same here. I've noticed it answering a question from a different chat in the same project while ignoring the actual question. It's not even my question, though; it's as if it makes up its own question based on the chat history from a different chat in the project and answers it by itself.