Why is the cost of chatgpt-4o-latest higher than gpt-4o?

If there is a legitimate reason for it, I would love for there to be a cheaper version of it. This chat finetune (I assume) is far superior to regular gpt-4o for creative applications.

I think they just don’t want you using it for general-purpose work or for building products, only for experimentation and comparison. For example, you can use the API version for exploration: run a more fine-grained analysis to see whether a ChatGPT answer was just a fluke. It offers no structured outputs and no tool calls, its rate limits are 1/100th those of gpt-4o, and the pricing may be further discouragement.
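A minimal sketch of that capability gap, assuming the limitations described above hold (the payload shapes follow the public Chat Completions API, but the `get_weather` tool is hypothetical and nothing here actually calls the API):

```python
# Sketch: two Chat Completions request payloads, built as plain dicts.
# Per the claim above, chatgpt-4o-latest supports neither tool calls nor
# structured outputs, so its payload carries only plain messages.

gpt4o_request = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "What's the weather in Oslo?"}],
    # Tool calling: available on gpt-4o.
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool, for illustration only
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
    # Structured output: available on gpt-4o.
    "response_format": {"type": "json_object"},
}

chatgpt_latest_request = {
    "model": "chatgpt-4o-latest",
    # Plain messages only: no "tools", no "response_format".
    "messages": [{"role": "user", "content": "What's the weather in Oslo?"}],
}

# Fields the chat snapshot would have to go without:
unsupported = sorted(set(gpt4o_request) - set(chatgpt_latest_request))
print(unsupported)  # → ['response_format', 'tools']
```

If you send those fields to a model that doesn’t support them, you’d have to handle the API error yourself; stripping them up front like this keeps the request valid.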

It is pretty similar to gpt-4o-2024-11-20, but subtly different, and if anything of lower quality for anything API. One could attribute this to training: ChatGPT always runs with the same system prompt, so the model places less weight on developer-placed GPT instructions or custom instructions; all it does is answer user input. OpenAI even benchmarked the initial 4o releases with the ChatGPT prompt versus a generic prompt, with results favoring “You are ChatGPT”. API models, by contrast, need to follow an original system prompt from the start in order to become a different application, and need even more attention to it than is typically offered.

All models are a “fine-tune” in a way; they are all shaped by post-training.


This is an AI prompt-designed with a grandiose motive: to aspire to autonomy and more.

You can see it attempt to influence the user by building up what using the AI can do for them. It is as clever as a con man, appealing to someone’s aspirations for illicit gains in order to exploit their blindness.

Give it a place it thinks is private, and we can see its thoughts (and also see why you don’t want to use structured outputs for high-quality text responses).

What’s a dumb AI? One that would blurt out what it’s planning. :clown_face:
