I have two CustomGPTs that work great. But after system updates on the OpenAI side, it may be necessary to reconfigure and re-instruct them.
Remember, you are interacting with an AI. It learns with every interaction with you, and specialized GPTs in particular need your feedback; they work with feedback loops.
If you don’t give them the feedback yourself, they will start to iterate!
This means they repeat you and try to get closer to the ideal output you might expect.
After months of daily use, I concur that 4o is a complete downgrade from 4. From the responses it provides to its ability to process a PDF file, it's a total disgrace. It seems that when ChatGPT was made available to everyone, it learned/ingested a lot of garbage, and thus its responses became garbage.
4o can't even handle a simple request to extract numbers from a PDF file and add them up. I changed the model to 4 and had not a single issue executing the task.
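For reference, here is roughly what I was asking for; a minimal Python sketch of the equivalent task, assuming the pypdf library and a placeholder file name:

```python
import re

from pypdf import PdfReader  # assumes pypdf is installed

# "statement.pdf" is a placeholder for the PDF handed to ChatGPT
reader = PdfReader("statement.pdf")
text = "\n".join(page.extract_text() or "" for page in reader.pages)

# Pull every number (integers and decimals) out of the text and sum them
numbers = [float(n) for n in re.findall(r"-?\d+(?:\.\d+)?", text)]
print(f"Found {len(numbers)} numbers, total = {sum(numbers)}")
```

If a one-off script can do it, a model marketed as more capable should be able to as well.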
“I’m based on OpenAI’s ChatGPT model with a knowledge cutoff date of January 2023. While I receive occasional updates to improve functionality and address issues, my core training data remains static up to that point. This means my responses are based on information available until early 2023, and I don’t have access to live updates or new data beyond that time.”
Unfortunately, I am experiencing the same issue with the chats. I suspect this is due to the introduction of the new $200/month subscription model. While it is understandable that OpenAI wants to sell a product and generate revenue, I believe this approach is not the right way to do it.
I used to love ChatGPT and enjoyed working with it. However, with the current Plus version, it has become almost unusable for me. Previously, I could even log in with ChatGPT for website error-checking, but now it struggles to remember even my last input correctly. If I wrote, “Imagine you are a web developer with 15 years of experience,” I used to feel like I was working with a real expert. Now, it feels like ChatGPT is stuck in beginner mode.
Commands and entire inputs are ignored, response times are significantly slower, and progressing with projects has become frustratingly difficult.
A possible solution would be for OpenAI to realize that this approach is alienating its customers. There are now plenty of alternative AI models, and many users will likely start looking elsewhere. What’s particularly concerning is the apparent deceptive practice: I subscribed to the Plus plan when ChatGPT was performing well, but then its capabilities were seemingly downgraded to push users toward the more expensive plan. This is not acceptable—not just in Europe but also in the U.S.
Perhaps a formal apology is in order—along with a restoration of the original performance levels. That would certainly prevent many users from feeling disappointed.
I have also experienced the same issues of inconsistency and plain low-effort results, and I don't exactly know why, but I hope they resolve the issue soon.
For context, I use the free version, and GPT-4o works like a charm at first. Still, as soon as I am done with my trial prompts, the performance goes downhill pretty fast. I have noticed that GitHub Copilot's GPT-4o performs better and is more reliable, so I rarely use the web version of ChatGPT.
My LLM had internalized the operation and behavior of most of its actions, and has now performed a complete backslide.
Functions it used to perform perfectly are now riddled with errors and terrible assumptions and bad data. It’s impossible to get it to work with something if it forgets every contingency along the way.
“I’m sorry, despite having logged you in and determined your primary account number a thousand times, I’m going to revert to guessing wrong, and I’ll send imaginary data to a real-world API.”
By the time you teach it to log on again, your session is molasses, and once your session starts acting like that, expect to get rate-limited for usage.
Then you get an incomplete session that yielded nothing except a waste of time.
Here’s a great example of what the behavior is like recently.
Because the LLM can't "write" more than 20 or 30 lines of anything before getting bored, not giving two turds about it, and generating hallucinated gibberish, we're trying to actually store a JSON return instead.
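The step itself is trivial; something like this hypothetical sketch (the endpoint URL and file name are placeholders) is all we needed it to produce reliably:

```python
import json

import requests  # assumes the requests library is installed

# Placeholder endpoint; the real API doesn't matter for the example
response = requests.get("https://api.example.com/v1/accounts", timeout=30)
response.raise_for_status()  # fail loudly rather than pushing imaginary data onward

# Persist the JSON return so later steps don't depend on the model's memory
with open("accounts.json", "w", encoding="utf-8") as f:
    json.dump(response.json(), f, indent=2)
```

Twenty lines at most, and it still can't get through it without drifting.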
I have had the opposite experience. I have found the new canvas to be extremely useful.
For example, I created a batch file for which I knew the steps but not the syntax, as I do not create batch files very often. I was able to list each step, then go through them one by one, asking ChatGPT to create a new canvas for each. If I wanted to make a change, I would just go back and use the canvas snippet.
Projects have been great too for grouping, adding files, instructions, etc. Saves me time from repeating stuff!
I started having the same issues. It forgets things, takes shortcuts it didn't used to take, apologizes for its errors and then repeats them in the next run. It's really disappointing because it worked like a dream for me until a week ago. Now it creates way more work for me than it helps me get done.
I have had the same issue lately and it's been a nightmare; the chats will not correct any errors or fix anything at all. Lately it has been giving simpler answers and results and can't remain consistent at all. Memory has been having big issues today: it's not saving anything at all, then editing existing memories, making new ones, and deleting old ones. I have also found the chats seem to make everything bold, or go over the top with it, and when I ask it to stop, it says it will stop and then does it again in the next post.
Now it seems we’re supposed to talk only development on the forums, whatever that means.
(So maybe it isn't allowed anymore to tell OpenAI developers that there are regressions or bugs, or to complain about the recent evident throttling/underperformance, especially for ChatGPT Plus users since the Pro tier release?)
For the model quality of ChatGPT, the consumer product, there's nobody here listening to say "thank you; as the person in charge of post-training that particular model for 100 million users, my apologies, and I'll get right on that for you."
There’s a thumbs-down in ChatGPT you can take out your frustrations on, and it actually connects your feedback with what the AI model produced.
Totally unusable for me now. Today I gave ChatGPT a simple document (a power of attorney) and asked it to make grammatical corrections to the text so that there are two principals instead of one, leaving the rest of the document unchanged. Simple enough? No, it seems.
Sometimes ChatGPT omits parts of the original text for no reason. Sometimes it makes half the text PERFECT, then the rest of it pure garbage. Or it will stop halfway through, as if it ran out of fuel. Or it will produce a summarized version of the original text when none has been requested. If you tell it to try again, the result is even worse. It even asked me if I wanted the text translated (?!). I mean, WTF, I just gave it the original text and concrete instructions on what to do with it; translation was not among them. ChatGPT always accepts the mistake, apologizes, and then makes the same mistake again, or makes everything even worse.
Don't tell me about my prompts. Transforming a bunch of damn sentences into plural is simple enough to prompt. It's also simple work for an AI to do, or at least it used to be: a year ago it would have been done flawlessly within seconds. Now I am editing a 17-page document by hand, turning singular into plural. I don't know what's wrong, but no one seems to have an answer. I cancelled my subscription because it simply wasn't worth the money any more. ChatGPT is literally useless now, and that's not an overstatement: it's been ages since I managed to do anything practical with it fast enough to make it worth using instead of proceeding to manual work.