I don’t understand why, and I’m very frustrated, that GPT-4’s API output is far worse than ChatGPT-4’s (website) output. Please tell me: is your system prompt different, or is the API’s GPT-4 simply less capable?
I constantly fail to replicate the results I get from ChatGPT (website) with the API, even though the prompts are exactly the same. I’ve played with the system prompt, tried different temperatures, etc., but I just can’t match the quality of the ChatGPT UI’s output.
Well, it’s not because I didn’t add any previous messages. None of my prompts are for chatting, so there’s no need for conversation history; they’re meant for text completion. I use the ChatGPT API as a substitute for text-davinci because it’s cheaper, and GPT-4 for higher accuracy. The problem isn’t missing context: the prompts I send to the web app and the API are exactly the same, but the API’s results are consistently less desirable.
Same problem here. The exact same input in an empty conversation in ChatGPT (GPT-4) gives much better-quality results than the same request through the API (GPT-4).
The Playground performs the same as the API, regardless of temperature or other parameters.
I’m having the exact same problem. In the chat I get the response I want using my prompt, but with the same prompt in the Playground I don’t get what I want, and the GPT-4 model seems to ignore some of my instructions.
That will be hard to achieve. If the web chat uses a temperature of, say, 0.7, you will hardly ever get the exact same answer to a prompt; the content may match, but the phrasing can be completely different. Moreover, no one knows what optimizations OpenAI has implemented in the web chat to make it what it is supposed to be.
I get the same impression.
I suspect that what we see in ChatGPT has gone through much more detailed refinement and differs from the model we interact with via the API.
Hyperparameters are adjustable parameters that you set prior to training a machine learning model. They define higher-level concepts about the model training process, such as learning rate, number of epochs, or the complexity of the model. The performance of the model can significantly depend on the choice of hyperparameters. Unlike model parameters, hyperparameters cannot be learned from the training process and must be preset.
So no, I can’t give you “all the hyperparameters”. Nor are the basic ChatGPT API inference parameters, beyond the max_tokens reservation, publicly known.
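Since ChatGPT’s server-side settings are unknown, the best you can do via the API is pin every sampling parameter explicitly instead of relying on defaults. A minimal sketch of building such a request for the Chat Completions endpoint; the parameter values here are illustrative guesses, not the ones ChatGPT actually uses:

```python
def build_request(prompt: str, system: str = "You are a helpful assistant.") -> dict:
    """Build the kwargs for a chat.completions.create call with every
    sampling parameter pinned, instead of left at server defaults."""
    return {
        "model": "gpt-4",
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": prompt},
        ],
        # Inference-time hyperparameters: illustrative values only; the
        # settings ChatGPT uses server-side are not public.
        "temperature": 0.7,
        "top_p": 1.0,
        "max_tokens": 1024,
        "frequency_penalty": 0.0,
        "presence_penalty": 0.0,
    }
```

You would then pass this dict straight through, e.g. `client.chat.completions.create(**build_request("..."))`, and vary one parameter at a time when comparing against the web UI.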
Without dumping today’s system message verbatim, here is a refined version you can use to move toward “more intelligent” output:
You are ChatGPT, an AI assistant from OpenAI, based on the GPT-4 large language model architecture, released 2023, using pretrained knowledge up to March 2022.
Today’s date: 2023-10-26 Thursday
You might want to remove the cutoff date to keep your output from being peppered with it. Prompting the model to keep it secret can’t overcome all the tuning OpenAI has done to insert “as of my last update” into every response.
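A small helper for generating a ChatGPT-style system message like the one quoted above, with the cutoff date optional. This is an approximation; the exact wording OpenAI uses is not public:

```python
from datetime import date

def chatgpt_style_system_message(include_cutoff: bool = False) -> str:
    """Approximate the ChatGPT web system message. The real wording is
    unknown; this mirrors the paraphrase quoted in this thread."""
    msg = ("You are ChatGPT, an AI assistant from OpenAI, based on the "
           "GPT-4 large language model architecture.")
    if include_cutoff:
        # Including this tends to make the model mention its cutoff.
        msg += " Using pretrained knowledge up to 2022-03."
    msg += f"\nToday's date: {date.today().isoformat()}"
    return msg
```

Pass the result as the `system` role message in your API call; leaving `include_cutoff=False` is the simplest way to avoid the “as of my last update” boilerplate.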
I’m facing the same problem, sadly.
My thinking is that ChatGPT-4 applies other pre-processing before our message reaches the model, and runs a strong system message that works really well at shaping the response.
We would need to reverse-engineer that prompt to get comparable responses.
I’m still getting the same problems in July 2024.
Although I asked the same question via both the API and ChatGPT-4, I get contradictory answers, with the ChatGPT-4 answer being right and the API response being completely wrong.
API response: As of the last update, Si2 Microsystems Pvt Ltd does not appear on major global sanctions or watchlists, such as those maintained by the United Nations, European Union, United Kingdom, or the United States (including OFAC).
ChatGPT-4 output: Yes, Si2 Microsystems Pvt Ltd is included in several global sanction and watchlists. Here are the details:
U.S. Department of Commerce Entity List:
Si2 Microsystems was added to the U.S. Department of Commerce’s Entity List in November 2023. This inclusion is due to the company’s involvement in supplying U.S.-origin integrated circuits to the Russian defense sector without the required licenses ([Enterprise Technology News and Analysis], [Republic World]).
European Union Sanctions:
The European Union has imposed sanctions on Si2 Microsystems, accusing the company of providing Russia with technology and goods that support its military operations. This is part of the EU’s broader strategy to limit Russia’s access to critical technologies amid geopolitical tensions ([Enterprise Technology News and Analysis]).
@frankstaelens2020 This is because the ChatGPT version of GPT-4o has the ability to browse the web, while the API does not. This means that when you ask the API about something very specific or recent that may not be part of its knowledge, like the example that you provided, it might just hallucinate an answer because it has no other options — while the ChatGPT version will just, well, google it.
If you want to have the web browsing feature enabled when using the API, you have to develop it yourself by giving the model a function it can call, then having your app/code search the web when the model calls it and giving the search results back to GPT-4.
I don’t think OpenAI has ever communicated plans to add web browsing as a built-in API feature the way file search and code interpreter are integrated; I think it’s just not planned.