Hi
I had been successfully receiving GPT-4 responses, but for the last few hours it seems I get GPT-3 even when I request a v4 model.
Is this common? Is anyone else having, or has anyone had, the same issue in the past?
Thanks
FYI
The way I can tell is that the conversation tone is significantly more robotic.
But more than that… I ask it what model it is and it tells me GPT-3.
If you ask GPT-4, it knows it's GPT-4.
Sounds trivial, I know, but the API is accepting and returning the model parameter indicating v4, and I used to get v4 responses.
But lately… it's dropped back down to v3.
Also worth noting… it “appears” I'm being charged for v4?!? (I could be mistaken though.)
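For reference, this is roughly how I'm checking it (a minimal sketch against the standard chat completions REST endpoint; the `model` field in the response shows which model actually served the request):

```python
import os
import requests

# Minimal sketch: call the chat completions endpoint and print which
# model the API reports it actually used for this request.
resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-4",
        "messages": [{"role": "user", "content": "Hello"}],
    },
    timeout=60,
)
resp.raise_for_status()
data = resp.json()
print(data["model"])  # the model string the API says it used
print(data["choices"][0]["message"]["content"])
```

That `model` field in the response is what I'm going by, rather than just what I asked for.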
What you're describing sounds like operator error and assumption.
You have to tell GPT-4 (or any other model) what it is and what its purpose is to you.
Thanks for the response
Sorry though, but I'm not sure I agree.
In your case, what does it say if you don't give it a context initialiser?
I.e. if you simply ask “what version of GPT are you?”
For me it says something along the lines of “I am based on the GPT-3 architecture”.
However, the ChatGPT accessible through chat.openai.com clearly seems to know it is a GPT-4 model.
I understand that you can ask it to respond differently with an initialiser, but that doesn't seem to be a necessary precursor.
What am I misunderstanding, please?
It also doesn't “feel” like GPT-4 (though obviously that isn't a particularly measurable quality, hence focusing on its response to the question).
They likely have the info in the system message. Are you aware of hallucinations in LLMs?
I am
I just wonder why the API model is consistently different from the ChatGPT model in this minor regard.
Do you get the same response to the question then?
Edit: rereading your response, I now see what you're saying… ChatGPT has a system initialiser setting its context a little more precisely.
As I mentioned, ChatGPT has a system prompt. For the API, you have to send your own system prompt.
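Something like this, for example (a rough sketch using the REST endpoint directly; the system message wording here is just an illustration, adapt it to your own client library and phrasing):

```python
import os
import requests

# Sketch: send your own system prompt so the API model gets the same
# kind of context that ChatGPT receives by default.
resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-4",
        "messages": [
            # Illustrative system prompt; ChatGPT's actual one is not public.
            {"role": "system", "content": "You are ChatGPT, a large language model based on GPT-4."},
            {"role": "user", "content": "What version of GPT are you?"},
        ],
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])
```

With a system message like that, it should answer the “what are you” question much the way ChatGPT does.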
By default, GPT-3.5 gives a very low-accuracy answer to that question; it telling you it's GPT-3 isn't a reliable indicator 🙂