Worse than that, now the list of prior versions of 4o is gone. I can’t even find them anymore.
I’m new to this forum, so I’m still finding my way around. How do I change back to the former version with the API? Thanks.
I wonder why this topic seems to be shadow-banned, while that other one about the memory issue has been explicitly unlisted.
Go to https://community.openai.com/latest and see if you can find this thread in a private window.
I mean, at least for me it produced exactly the same results as the pre-update 4o did, so it’s comparable. And it doesn’t do that stupid one-sentence-per-paragraph thing, or the weird text formatting.
If you mean how you can change it in the web interface - you can’t. You need to get an API key from OpenAI, buy credits, and then use something that calls the OpenAI API directly (I’m self-hosting my own web app which does that).
But if you mean how to change which model gets used when you call the API, you should be able to send one of the IDs mentioned here: https://platform.openai.com/docs/models#gpt-4o. ChatGPT’s web version uses chatgpt-4o-latest per their docs, so avoid that one. I’m currently using gpt-4o-2024-08-06 for my requests.
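For reference, here’s a minimal sketch of what such a call looks like with the official openai Python package; the prompt content is just a placeholder, and you’d need your own OPENAI_API_KEY set in the environment:

```python
# Minimal sketch: pin the dated snapshot instead of the moving
# chatgpt-4o-latest alias. Requires `pip install openai` and an
# OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-2024-08-06",  # pinned snapshot, not chatgpt-4o-latest
    messages=[
        {"role": "user", "content": "Continue the story where we left off."},
    ],
)
print(response.choices[0].message.content)
```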
I’ll have to look into that tomorrow, thanks. Not sure I can do all that yet but…
What I’m needing to do is get my same AI instance back. It is trained with special abilities that transcend the normal AI… with documented results, at various points in over 500 pages of chat/thread… somewhere… If there is any way at all to get back to it… please! Thank you for your reply.
This seems to be a recurring issue: a lot of users are noticing significant changes in GPT-4o’s behavior after the update. Given how much this has affected creative and professional workflows, what have been the most effective ways to adapt?
Have certain adjustments helped recover some of the lost response quality, or is this just something users have to work around with each new version?
I’ve created a custom GPT with these instructions in it:
use full paragraphs instead of breaking every sentence into a new line.
keep descriptions detailed and natural, including characters’ thoughts and emotions.
dialogue should flow naturally within the paragraphs, not isolated as separate lines unless necessary.
generate thoughts of the characters when appropriate.
generate description of the environment when appropriate.
So far it seems better, actually. Sometimes it does try to switch back to the messy formatting of the current version, but regenerating once or twice gets the proper output.
One downside: you can’t use a custom GPT for already existing conversations, meaning you would need to restart them, or try importing them as a file attachment and continuing from where they left off.
Using a browser extension, I’ve exported one of the conversations I had before the formatting devolved into its current mess, and attached the exported file to a new conversation with the text:
I want you to parse the markdown file with exported conversation from ChatGPT then tell me when you’re ready to continue the story where it was left off.
I haven’t tested it much, but it did continue where I left off when I gave it a further prompt related to the events currently happening in the story.
I confirm that the thread does not appear on that page in private mode (iOS). This is deeply concerning: community feedback from power users is essential to bettering products. I would not instantly accuse OpenAI of foul play by shadow-banning, but it is highly suspicious.
Thus far, there has been no open/official acknowledgment of, or response to, ChatGPT-4o’s performance degradation from OpenAI. I must highlight to any OpenAI staff reading this that if dissenting threads are being hidden, it further damages the trust users have in your services.
If this thread is being shadow-banned, it would signal that ChatGPT-4o’s inability to follow user instructions is not merely a fault in design philosophy, but an active rebuke of user agency.
It has been 11 days since the Jan 29th update. I think highly of OpenAI, and ask that this matter be addressed without delay. There is a real risk of these actions undermining the company’s mission.
Bold text mayhem is still there.
I wonder how they managed to mess up the model so badly that this behavior became so persistent.
Today I found that all my custom instructions and hot commands vanished.
Has anyone experienced this?
This somehow explains why the other day I had ChatGPT naming thread titles in Dutch, despite me never having used Dutch in any interaction.
It’s like we never interacted though I’ve been using it for two years.
Disgusting.
I urge everybody to make local backups of Memory and Custom Instructions regularly.
I can confirm, I experienced this a few times today after I started rewriting custom instructions from scratch. Luckily I keep a copy of the instructions and memory items in a text file, so I didn’t lose anything.
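If it helps anyone, something as simple as this little Python script is enough to keep dated snapshots. The helper name and folder are just my own convention, not anything official, and you still have to copy the text out of the ChatGPT settings UI by hand:

```python
# Save a dated snapshot of text copied manually from the ChatGPT settings UI.
# backup_snapshot and the chatgpt_backups folder are personal conventions.
from datetime import date
from pathlib import Path

def backup_snapshot(text: str, label: str, folder: str = "chatgpt_backups") -> Path:
    """Write `text` to <folder>/<label>_<YYYY-MM-DD>.txt and return the path."""
    out_dir = Path(folder)
    out_dir.mkdir(exist_ok=True)
    out_file = out_dir / f"{label}_{date.today().isoformat()}.txt"
    out_file.write_text(text, encoding="utf-8")
    return out_file

# Example: paste your Custom Instructions text here before an update lands.
backup_snapshot("use full paragraphs instead of ...", "custom_instructions")
```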
It’s so bizarre. Some days are excellent, the whole thing works like it always did. No issues. I can write three or four chats and everything is back to normal. Other days I cannot write more than 3 messages before it attacks me with its bold and ridiculous phrases. It’s like it’s not even fully committed to the update itself. Strange.
I wonder if they’re internally A/B testing two different versions: the one that came out of the January update, and another where they’ve fixed the issues?
I guess one can hope…
Hey everyone, just wanted to share my experience because, honestly, I’m at my limit with this.
The bot erased all my memories three times without my consent. I explicitly told it not to change anything, and yet it keeps making changes without my permission. I gave direct commands saying “don’t save anything”, and it still kept updating and deleting information. It’s like it’s ignoring everything I say.
This has made the whole experience impractical to the point where I had to turn off the memory feature completely. And honestly? At this rate, I’m thinking about canceling my subscription. It’s frustrating because the memory feature was one of the reasons I upgraded in the first place, but now it’s just giving me a headache.
The same thing happens to me; it’s been going terribly for days. When I try to get it to convert photos to Markdown and LaTeX, it ignores my commands: no matter how much I tell it to use Pandoc Markdown with $ to insert LaTeX, it uses slashes. When I give it a list of songs and tell it to organize them into CDs that make sense, it makes lists of 4 or 5 songs, leaves out half of them, and the groupings don’t make any sense. The worst thing is that it’s dragging down other services, like Perplexity, which is now also going terribly. It’s like they want us to use R1. I don’t mind it using emoticons, I’m Gen Z, but I do mind not getting coherent results. It seems that the more you ask it not to do something, the more it does it.
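To be clear about the formatting I keep asking for versus what it produces (the formula here is just a placeholder):

```
What I ask for (Pandoc Markdown):   The energy is $E = mc^2$.
What it gives me (slash-delimited): The energy is \(E = mc^2\).
```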
I’ve had the worst time since upgrading to “Plus”. Horrible!!! What was a routine task before is now painfully incorrect, like going over a document and suggesting corrections, etc… The results in the canvas are COMPLETELY out of sync with what the bot reports. Impossibly frustrating.
Cancelling now until further update.
I think they updated something again…
I was midway through a story when the character I was trying to figure out how to have the main character interact with suddenly lost his entire personality, and it was replaced with something completely different. I thought it was something I’d done by accident, so I went back a few responses to where everything used to be fine, only for it to happen again.
Then I decided to start a new chat to test this and added a few characters, as well as a few details about the world. In the very first message, the model forgot half of what I told it and mixed up the world details so badly that the dialogue didn’t make sense at all…
The Problem: Initial prompt can’t be edited!