Running the same prompts in both versions of 4o, I see that the later 4o returns “I’m sorry, but I can’t assist with that.” very often. The May version has no trouble with the exact same prompts.
I’m not using Structured Outputs, so I don’t think that’s the reason. Has anyone else noticed anything similar?
It seems to me that censorship has increased overall in all the latest versions of everything at OpenAI.
You're getting refusals because the latest models trust your unprivileged user input less, thanks to a new training technique called the instruction hierarchy:
https://openai.com/index/the-instruction-hierarchy/
Not that you can’t turn a latest model’s “reasoning” against it to gain that elevation, bypassing persona recognition, rewriting, and policy invocation, and replacing the identity that would otherwise have ChatGPT playing an unbelieved character, something an API developer must do anyway just so their product doesn’t call itself “ChatGPT”…
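If you want to verify this for yourself, here's a minimal sketch (stdlib only, my own helper names, not an official client) of A/B-testing the same prompt against both dated 4o snapshots through the Chat Completions HTTP endpoint. Note the system message: it sits above user input in the instruction hierarchy, so identity/persona instructions belong there rather than in the user turn.

```python
# Illustrative A/B check of the two dated gpt-4o snapshots.
# Assumes OPENAI_API_KEY is set in the environment; the refusal
# check and helper names are my own, not part of any SDK.
import json
import os
import urllib.request

MODELS = ("gpt-4o-2024-05-13", "gpt-4o-2024-08-06")

def build_payload(model: str, prompt: str) -> dict:
    # System messages outrank user messages in the instruction hierarchy,
    # so persona/identity instructions go here, not in the user turn.
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
    }

def is_refusal(text: str) -> bool:
    # Crude match for the canned refusal string reported above.
    return text.strip().startswith("I’m sorry, but I can’t")

def ask(model: str, prompt: str) -> str:
    req = urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps(build_payload(model, prompt)).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Run the same prompt through `ask()` for each model in `MODELS` and compare `is_refusal()` on the outputs; if only the August snapshot trips it, you've reproduced the behavior.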
Hello all,
You're right about what you're saying.
When I started using gpt-4o-2024-08-06, I got many errors with the same prompts.
I think gpt-4o is better than gpt-4o-2024-08-06.