Is something going on with ChatGPT at the moment?
Last night my partner started reporting that the answers they were getting were weird.
Today I noticed the same thing: ChatGPT isn't remembering things from memory, and it can't coherently continue a long chat.
Now I just tried one of my custom GPTs and it is butchering all the answers, even when I specifically tell it to look for the information in the knowledge section, which is extremely clear.
I checked and there seem to be no outages, but I don't know where to look for other reported problems.
Does anyone know anything about this?
I'm having the same problem. I need it to rewrite a piece of HTML, but it just can't get past 60 lines of code and keeps telling me to add the rest manually. Even when it says "here comes the complete code", it stops at 60 lines. I couldn't find any workarounds.
I am a full-stack engineer working in the info sector of payment portals and the online print world. We use ChatGPT when working on new UI changes coupled with its JS backbone. It has been working great as a partner, but lately
it is plain as day getting dumber. If I paste in a JS block and say "please, let's refactor this for a new feature", it takes it in and refactors it, but omits functions and features that were there; it just glosses over them. It says something like "OK, my apologies" and tries to rectify it, but then refactors again minus a bunch of functions and features. It's just getting stupider.
Everyone stating this across the board is 100% onto something.
I cancelled my ChatGPT subscription, as it isn't worth jack shit if it can't be trusted to at least do better than a third-year programmer.
I totally agree!
In my case, I have all the information it needs in my knowledge files, and it's still giving me replies that don't include the data.
If it won't perform the job as instructed, it is worthless.
I surmise that there are things going on that many of us might not be privy to.
Software changes happen all the time and often degrade things for a variety of reasons. But when you pick a model to work with, you should at least have some kind of changelog, so you can make informed decisions about which model to use in the first place. Right now we are all playing whack-a-mole with these LLM providers, and it's just making for more work.
I'm refactoring by hand now, but at least I know more or less what's going on.
Yeah, I just checked OpenAI's status page, and they have had a couple of incidents yesterday and today.
I am starting to play with open-source, local models and RAG. I feel that with the whole trend of AI agents, having smaller, more focused models will give me better results, especially when it comes to work.
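For anyone curious what the retrieval step of a local RAG setup actually does, here is a toy sketch using only the standard library. Real pipelines use an embedding model and a vector store; the bag-of-words cosine similarity below is just an illustration of the idea, and the sample documents are made up.

```python
# Toy retrieval step of a RAG pipeline: rank documents by similarity
# to the query, then feed the top hits to the model as context.
# Bag-of-words cosine similarity stands in for real embeddings here.
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    qv = Counter(query.lower().split())
    ranked = sorted(
        docs,
        key=lambda d: cosine(qv, Counter(d.lower().split())),
        reverse=True,
    )
    return ranked[:k]

# Hypothetical knowledge-file snippets:
docs = [
    "Refund requests are processed within 5 business days.",
    "The print API accepts PDF and PNG uploads.",
]
print(retrieve("how long do refund requests take", docs))
# -> ['Refund requests are processed within 5 business days.']
```

A smaller local model answering over the retrieved passage, rather than from its own weights, is exactly what makes the "smaller, more focused" approach viable for work data.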
Agreed. I'm doing the same, but at the end of the day I don't want the extra cost on my power bill from running my rig all the time, either. I'm crossing my fingers that OAI can address this soon.
Good point. Have you run any numbers on the power bill? Or have you found any good websites that can help with that?
I assume ChatGPT itself could be a helper too, but I'm a bit skittish about its results at this point.
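The back-of-the-envelope math is simple enough to do without a website: watts × hours × rate. Here is a minimal sketch; the wattage, hours, and price per kWh below are all assumptions, so plug in your own hardware specs and local rate.

```python
# Rough estimate of the electricity cost of running a local-inference rig.
# All example numbers are assumptions -- substitute your own.

def monthly_power_cost(watts: float, hours_per_day: float,
                       price_per_kwh: float, days: int = 30) -> float:
    """Estimated monthly electricity cost: kWh consumed times price per kWh."""
    kwh = watts / 1000 * hours_per_day * days
    return kwh * price_per_kwh

# Example: a 350 W GPU under load 8 h/day at $0.15/kWh.
cost = monthly_power_cost(350, 8, 0.15)
print(f"~${cost:.2f}/month")  # 84 kWh/month -> ~$12.60
```

Idle draw matters too if the box stays on around the clock; you can run the same function with the idle wattage for the remaining hours and add the two results.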