Hi there! I’m trying to use the API with the model “gpt-4o-mini”. When I try to find out which version of ChatGPT I’m talking to, it usually responds “3.0-3.5”.
But when the model is set to “gpt-4o”, the API answer contains correct info about itself.
I’m using PHP and openai-php/client.
Here is an example of the request:
$messages = [];

// Copy the stored conversation history into the request.
foreach ($history as $message) {
    $messages[] = [
        'role' => $message->role,
        'content' => $message->content,
    ];
}

// Append the new user question.
$messages[] = [
    'role' => 'user',
    'content' => $this->userQuestion,
];

$stream = $client->chat()->createStreamed([
    'model' => 'gpt-4o-mini',
    'messages' => $messages,
]);
I’ve found it “doesn’t lie”, including “uncovering truths” about OpenAI, such as claiming the new voice update doesn’t exist and is just a marketing ploy.
You cannot ask the model about itself and get any kind of reliable answer.
The model just predicts the next tokens; it doesn’t have any source of information about itself.
Why can’t I? Any ordinary ChatGPT can tell you its own version, and the time of its last knowledge update.
This is an important point for me, and I’d appreciate it if you could help me figure out whether there is a problem or not.
I mean, from my own API chat I expect behavior similar to a chatgpt.com temporary chat, or the OpenAI Playground dashboard.
But right now the same questions get wrong answers only from the API.
Maybe I did something wrong?
ChatGPT can do that because it is given that information in its system message. If you want the API to be able to answer those questions, you’ll need to send it in the system message.
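A minimal sketch of that fix, adapted from the request earlier in the thread. The history entries, the user question, and the exact system-message wording are placeholders here, just for illustration:

```php
<?php

// Hypothetical history entries standing in for your stored messages.
$history = [
    (object) ['role' => 'user', 'content' => 'Hello'],
    (object) ['role' => 'assistant', 'content' => 'Hi! How can I help?'],
];
$userQuestion = 'Which model are you?';

// Prepend a system message so the model can answer questions about itself.
// The wording is an assumption; put whatever identity/cutoff info you want here.
$messages = [[
    'role' => 'system',
    'content' => 'You are ChatGPT, a large language model trained by OpenAI, '
        . 'based on the GPT-4o-mini model.',
]];

foreach ($history as $message) {
    $messages[] = ['role' => $message->role, 'content' => $message->content];
}

$messages[] = ['role' => 'user', 'content' => $userQuestion];

// Then pass $messages to the API exactly as before:
// $stream = $client->chat()->createStreamed([
//     'model'    => 'gpt-4o-mini',
//     'messages' => $messages,
// ]);
```

The only change from the original snippet is the system entry at index 0; the rest of the message-building loop stays the same.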
Seems like that. Thank you. Could you please tell me what I should set as the system message and temperature to get as close as I can to the ChatGPT web chat?
You can just Google,
“You are ChatGPT, a large language model trained by OpenAI”
and you’ll find many examples of the various versions of the system instructions that have been used.
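For example, a request payload along these lines. The real system prompt and temperature that chatgpt.com uses are not published, so both values below are assumptions based on commonly cited approximations:

```php
<?php

// A commonly cited approximation of the ChatGPT web-chat system prompt;
// the real one varies by release and is not published.
$systemPrompt = 'You are ChatGPT, a large language model trained by OpenAI. '
    . 'Knowledge cutoff: 2023-10. Current date: ' . date('Y-m-d') . '.';

$payload = [
    'model'       => 'gpt-4o-mini',
    'temperature' => 1.0, // the API default; ChatGPT's actual setting is unknown
    'messages'    => [
        ['role' => 'system', 'content' => $systemPrompt],
        ['role' => 'user',   'content' => 'Which model are you?'],
    ],
];

// $stream = $client->chat()->createStreamed($payload);
```

With a prompt like this, the model will echo back whatever identity and cutoff you state, which is exactly how the web chat “knows” its own version.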