Is API GPT4 way less intelligent than ChatGPT4?

I don’t understand why, and I am very frustrated by the fact that GPT-4’s API output is much worse than ChatGPT-4 (website) output. Please tell me: is the system prompt different, or is the API’s GPT-4 just less capable?

I constantly fail to replicate the results I get from ChatGPT (website) with the API, even though the prompts are exactly the same. I have played with the system prompt, tried different temperatures, etc., but I just can’t match the quality of the ChatGPT UI’s results.


Well, it’s not because I didn’t include any previous messages. None of my prompts are for chatting, so there is no need for conversation history; they are meant for text completion. I use the ChatGPT API as a substitute for text-davinci because it’s cheaper, and GPT-4 for higher accuracy. The problem is not that I am missing context: the prompts I give the web app and the API are exactly the same, but the API’s results are consistently less desirable.


Same problem here. The exact same input in an empty conversation in ChatGPT (GPT-4) gives much better quality results than the same request through the API (GPT-4).
The Playground performs the same as the API, regardless of temperature or other parameters.


I thought it was the other way around. Can you give some concrete examples comparing the API responses and ChatGPT-4 responses?


I’m having the exact same problem. In the chat I get the response I want using the prompt, but with the same prompt in the Playground I don’t get what I want, and it looks like the GPT-4 model ignores some of my instructions.


That will be hard to achieve. If the web chat uses a temperature of, say, 0.7, you will hardly ever get the exact same answer to a prompt. It may match in content, but the phrasing can be completely different. Moreover, nobody outside OpenAI knows what optimizations they have implemented in the web chat to make it behave the way it does.
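If you want a fairer comparison between the web UI and the API, one option is to pin the sampling parameters so the API output is as reproducible as possible. A minimal sketch below; the model name and prompt are placeholders, and the actual call (shown in a comment, using the pre-1.0 `openai` Python package’s `openai.ChatCompletion.create`) is an assumption about how you are invoking the API:

```python
# Build the request once so identical arguments are reused for every trial.
request = dict(
    model="gpt-4",
    messages=[{"role": "user", "content": "Summarize the plot of Hamlet in two sentences."}],
    temperature=0,  # near-greedy sampling: minimizes run-to-run variation
    n=3,            # ask for several completions to gauge the remaining variance
)

# With the pre-1.0 `openai` package, you would then run:
# response = openai.ChatCompletion.create(**request)
# for choice in response["choices"]:
#     print(choice["message"]["content"])
```

At temperature 0 the three completions should be nearly identical, which makes it easier to tell a real quality gap from ordinary sampling noise.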


I have the same problem, and the answers are also not consistent. Has anyone solved it?


I get the same impression.
I suspect that what we see in ChatGPT has gone through much more detailed refinement and differs from the model we interact with via the API.


I suspect that people aren’t giving the same system message to the API as ChatGPT receives.

ChatGPT is the unreliable one, never producing the same output twice, likely to gather variations that can be voted on.

Temperature = 0.5 and top_p = 0.99 will give you higher-quality output.
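As a concrete sketch of the suggestion above (the system and user messages are placeholders; only the temperature and top_p values come from this post, and the request shape assumes the Chat Completions endpoint):

```python
# Request arguments for the Chat Completions endpoint, per the suggestion above.
# With the pre-1.0 `openai` package this would be passed as
#   openai.ChatCompletion.create(**params)
params = {
    "model": "gpt-4",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},  # placeholder system message
        {"role": "user", "content": "Explain recursion in one paragraph."},
    ],
    "temperature": 0.5,
    "top_p": 0.99,
}
```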


Can you share the system message that ChatGPT receives, along with all the hyperparameters? I want to get responses similar to ChatGPT’s.

First, a definition:

Hyperparameters are adjustable parameters that you set prior to training a machine learning model. They define higher-level concepts about the model training process, such as learning rate, number of epochs, or the complexity of the model. The performance of the model can significantly depend on the choice of hyperparameters. Unlike model parameters, hyperparameters cannot be learned from the training process and must be preset.

So no, I can’t give you “all the hyperparameters.” Nor are ChatGPT’s basic API inference parameters, beyond the max_token reservation, publicly known.

Without dumping today’s system message verbatim, here is a refined version you can use to move toward “more intelligent” output:

You are ChatGPT, an AI assistant from OpenAI, based on the GPT-4 large language model architecture, released 2023, using pretrained knowledge up to March 2022.
Today’s date: 2023-10-26 Thursday

You might want to remove the cutoff date to keep your output from being peppered with it. Prompting the model to keep it secret can’t overcome all the tuning OpenAI has done to insert “as of my last update” into every response.
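Putting that together, a hedged sketch of how you might construct such a system message for an API request. The wording is taken from the message quoted above; the date line is generated per run, and the user prompt and the call shown in the final comment (pre-1.0 `openai` package) are placeholders:

```python
from datetime import date

# System message modeled on the one quoted above. The cutoff sentence is
# optional and can be dropped to keep it out of the model's responses.
today = date.today()
system_message = (
    "You are ChatGPT, an AI assistant from OpenAI, based on the GPT-4 large "
    "language model architecture, released 2023, using pretrained knowledge "
    "up to March 2022.\n"
    f"Today's date: {today.isoformat()} {today.strftime('%A')}"
)

messages = [
    {"role": "system", "content": system_message},
    {"role": "user", "content": "Your prompt here."},
]
# Then pass it along, e.g.:
# openai.ChatCompletion.create(model="gpt-4", messages=messages)
```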


I am sadly facing the same problem.
My thinking is that ChatGPT-4 applies additional pre-processing before our message reaches the model, and runs a strong system message that works really well in shaping the response.
We would need to reverse engineer that prompt to get desirable responses.
