I am feeling a bit frustrated. No matter how I adjust the parameters (temperature and top_p) in the Playground, the responses from GPT-4 are always worse than when I use ChatGPT-4 directly. The same thing happens when we use the API. If you know why, please let me know. Also, does anyone know the default settings for ChatGPT-4?
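For context, here is roughly how I'm passing those parameters via the API. This is only a minimal Python sketch of the request body; per the API docs, both temperature and top_p default to 1.0 when omitted (ChatGPT's own internal settings are not published).

```python
# Minimal sketch: building a chat completion request body with the
# sampling parameters mentioned above. Per the OpenAI API docs,
# temperature and top_p both default to 1.0 if omitted.
import json

API_URL = "https://api.openai.com/v1/chat/completions"

def build_payload(user_prompt: str,
                  temperature: float = 1.0,
                  top_p: float = 1.0) -> dict:
    """Build the JSON body for a chat completion request."""
    return {
        "model": "gpt-4",
        "messages": [{"role": "user", "content": user_prompt}],
        "temperature": temperature,  # API default: 1.0
        "top_p": top_p,              # API default: 1.0
    }

payload = build_payload("Explain quicksort briefly.", temperature=0.7, top_p=0.9)
print(json.dumps(payload, indent=2))
```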
It IS worse. I’ve tested this myself. I can solve many things with ChatGPT-4, but the GPT-4 chat API is a bit worse.
Even more concerning, ChatGPT-4 has been a bit stupid today. It keeps making mistakes constantly, and I’m wasting my prompts while getting frustrated. This isn’t what I paid for. And I sure as hell ain’t paying that price for the GPT-4 API, which is 1000% more expensive and less capable.
I hope OpenAI gets this fixed very soon. The API call is also much slower than using ChatGPT directly, and it often returns errors.
Without knowing the prompts you sent or their context, one reason I can guess is the prompt engineering ChatGPT uses. In other words, when a user submits a prompt to ChatGPT, a prefix and suffix are probably attached to it, and the whole concatenated prompt is sent to the model. I suspect their prompt engineering approach is what makes the model work well in ChatGPT. But what is that approach? I don’t know and would love to find out.
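To illustrate the idea, here is a sketch of that concatenation. The prefix and suffix strings below are entirely hypothetical; whatever ChatGPT actually wraps around user prompts (if anything) is not public.

```python
# Hypothetical illustration only: the real prefix/suffix ChatGPT uses
# (if any) is not public. This just shows the concatenation idea.
ASSUMED_PREFIX = "You are a helpful assistant. Answer the user clearly.\n\nUser: "
ASSUMED_SUFFIX = "\n\nAssistant:"

def wrap_prompt(user_prompt: str) -> str:
    """Attach an assumed prefix and suffix before sending to the model."""
    return ASSUMED_PREFIX + user_prompt + ASSUMED_SUFFIX

wrapped = wrap_prompt("What is top_p?")
print(wrapped)
```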
Have you used the /completions or the /chat endpoint URL?
From my observation, the chat API returns responses similar to the online console, in contrast to the completions API. By the way, I use the API from C# (Betalgo.OpenAI), but I think under the hood it’s based on the same REST API as in other languages.
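To make the difference concrete, here is a sketch (in Python rather than C#, but the REST bodies are the same regardless of client library) of what the two endpoints expect:

```python
# The chat endpoint takes a list of role-tagged messages; the (legacy)
# completions endpoint takes a single prompt string.
CHAT_URL = "https://api.openai.com/v1/chat/completions"
COMPLETIONS_URL = "https://api.openai.com/v1/completions"

chat_body = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Say hello."}],
}

completions_body = {
    "model": "text-davinci-003",
    "prompt": "Say hello.",
}
```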
The reason this happens is that ChatGPT is pre-programmed and trained to produce the types of responses it does. It has its own complex series of commands, etc. I’m sure the sliders available in the Playground for GPT-4 introduce some degree of functionality, but the conversational model is what you’re looking for. Select the conversational option and you’ll see pretty similar responses.
Long story short, GPT-3/GPT-4 are not the same thing as ChatGPT.
Yes, because one of them is a full-fledged bot that will engage in conversation with you. The /chat APIs have your application interact with the model in a chat format, a similar experience to what you would get by using ChatGPT. The other, /completions, is purpose-driven and will not engage in conversation. It will do as directed in the prompt.
Thanks for the response! But how do I select a conversational model in the Playground? There are only completion and chat modes. And what about the API? How do I talk to the API and have it give me responses similar to ChatGPT’s?
Docs are here for integration and creating your own app.
This is for using ChatGPT: OpenAI API
model is gpt-3.5-turbo
This is for using non-ChatGPT (Playground) models: OpenAI API
model is text-davinci-003
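Putting the two together, here is a minimal Python sketch that picks the endpoint from the model name. This assumes your key is in the OPENAI_API_KEY environment variable; the actual network call is left commented out.

```python
# Sketch: route a model name to the right REST endpoint and build
# the request. Assumes OPENAI_API_KEY holds your key.
import os

def build_request(model: str, text: str):
    """Return (url, headers, json_body) for the endpoint matching `model`."""
    headers = {
        "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '')}",
        "Content-Type": "application/json",
    }
    if model.startswith("gpt-"):  # chat models, e.g. gpt-3.5-turbo, gpt-4
        url = "https://api.openai.com/v1/chat/completions"
        body = {"model": model, "messages": [{"role": "user", "content": text}]}
    else:  # legacy completion models, e.g. text-davinci-003
        url = "https://api.openai.com/v1/completions"
        body = {"model": model, "prompt": text}
    return url, headers, body

url, headers, body = build_request("gpt-3.5-turbo", "Hello!")
# import requests; print(requests.post(url, headers=headers, json=body).json())
```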
I am not sure you can select a conversational model in the Playground. If you want to have a conversation, you would probably use the ChatGPT window instead. I could be wrong.
Sorry, I just saw that there is a new option to select Chat or Complete on the right-hand side of the Playground. If you select Chat, you can choose the same model listed above for chats.
Looks like you don’t have access to chat then. Request access for it.
I 100% second this. ChatGPT with GPT-4 serves my use case beautifully, but each time I use the API to connect my tasks directly, the responses are abysmal. It has barely anything to do with the chat functions; the logic and capability are simply much worse.
“you are a helpful ai assistant, you directly answer the user’s question and also provide additional details that might assist the users question. All of your knowledge should be the latest more recent knowledge, don’t suggest things that are older than your cutoff”
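An instruction like that can be supplied as the system message of a chat completion request, ahead of the user’s turn. A minimal sketch (the user question is just a placeholder):

```python
# Sketch: prepend the suggested instruction as a system message
# in the chat completions message list.
SYSTEM_PROMPT = (
    "you are a helpful ai assistant, you directly answer the user's question "
    "and also provide additional details that might assist the users question. "
    "All of your knowledge should be the latest more recent knowledge, don't "
    "suggest things that are older than your cutoff"
)

def make_messages(user_prompt: str) -> list:
    """Build the messages list: system instruction first, then the user turn."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_prompt},
    ]

messages = make_messages("How do I parse JSON in Python?")
```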
You are welcome.