Hey OpenAI Community,
I’m really frustrated and need some clarity here. I’ve been using the GPT-4o model through the OpenAI API for my projects, and I’m paying for this service, expecting top-notch performance. But the responses I’m getting are noticeably worse than what I see from the commercial ChatGPT (which also uses GPT-4o, right?). How is this even possible?
I’ve tested both with similar prompts, and ChatGPT consistently gives more accurate, detailed, and coherent answers. My API responses, on the other hand, feel watered down—like it’s missing the depth and reasoning I’d expect from a model of this caliber. I’m paying for the API to get the same quality, if not better, since it’s supposed to be a premium service for developers. But right now, it feels like I’m getting a downgraded version of GPT-4o.
Can someone explain why there’s such a gap in performance? Is the API version tuned differently, or are there limitations I’m not aware of? I’ve already tweaked parameters like temperature and max tokens, but it’s still not cutting it. I’d really appreciate some insight from the OpenAI team or anyone who’s faced the same issue. I’m relying on this for my work, and it’s honestly disappointing to see such a difference when I’m shelling out money for the API.
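For reference, here’s a minimal sketch of the kind of call I’m making (the prompt, temperature, and max_tokens values below are only illustrative, not my exact settings):

```python
# Minimal sketch of a Chat Completions call with the parameters I've been tweaking.
# The prompt and parameter values here are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a careful, detailed assistant."},
        {"role": "user", "content": "Explain the trade-offs between approach X and approach Y."},
    ],
    temperature=0.7,   # I've tried values from 0.2 up to 1.0
    max_tokens=1024,   # raising this didn't noticeably improve depth
)

print(response.choices[0].message.content)
```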
Thanks for any help!
I just tried gpt-4o-latest and it reported an error that function calling is not supported. It’s strange that the latest version doesn’t support function calling when gpt-4o handles it well?
chatgpt-4o-latest is the current model used in ChatGPT (optimized for chat). Updated this week. https://platform.openai.com/docs/models/chatgpt-4o-latest
gpt-4o, which supports function calling, was updated last year (it points to gpt-4o-2024-08-06). The team is working on a new gpt-4o-2025-XX... model, which will carry over some of the improvements you see in ChatGPT. It should arrive in the coming weeks!
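For anyone who wants to verify the difference, here’s a minimal sketch of a function-calling request that works with gpt-4o; the get_weather tool is just a placeholder for illustration, and pointing the same request at chatgpt-4o-latest is what produces the unsupported-feature error mentioned above.

```python
# Minimal function-calling sketch (placeholder get_weather tool, illustrative only).
# Swapping the model for chatgpt-4o-latest is what triggers the "not supported" error.
from openai import OpenAI

client = OpenAI()

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool, for illustration only
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                },
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="gpt-4o",  # supports tools
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)

# If the model decided to call the tool, the call shows up here.
print(response.choices[0].message.tool_calls)
```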
Hi, I also noticed the GPT-4o API being much worse than the web version. It seems the API output quality has decreased while the new GPT-4o web version has greatly improved. The gap between web and API is hard to understand, especially since the API is intended for apps and professionals; it’s odd that we don’t benefit from the best model before "basic" users do.
I hope they fix that soon.
The only thing I noticed is that the mini version is useless now. I did a batch of 1,000 with 4o, but the price for batches is so high that I don’t want to use it.
I have the same problem with performance and truncation of tasks. No matter how much I reduce the size of the prompts and the complexity of the tasks, the difference in performance is noticeable.
The following questions arise:
Is this API intended for home use only?
Is it OpenAI policy that the API performs at such low quality, or is it a temporary processing resource limitation? Will it get better in the future, or will this big gap always remain?
Should we work with models that we can download and run on our own infrastructure in order to get good, stable quality of service?
Thanks!
Yes, the latest version does not support web search either, in the Playground or in API calls.
But ChatGPT can search the web.