Weird, rare response; weird, rare GPT-4 access; weird everywhere :/

I signed up for the waitlist and got the email telling me that I could now use GPT-4 through the same(?) API, so I did, changing the model, of course:

import openai  # assumes OPENAI_API_KEY is set in the environment

def chatgpt(prompt):
    # Reject empty prompts before spending an API call
    if prompt == "":
        return "Empty prompt, it makes no sense."

    try:
        completion = openai.ChatCompletion.create(
            model='gpt-4',
            messages=[
                {'role': 'user', 'content': prompt}
            ],
            temperature=0
        )
        return completion['choices'][0]['message']['content']
    except Exception as e:
        return f"Error: {e}"

The question:

What version are you?

The answer:

I am an artificial intelligence and do not have versions like a human being. However, my model and capabilities can be upgraded and improved over time. Currently, I am a version of OpenAI called GPT-3.


a) Do I or do I not have access to version 4?
b) If I call the gpt-4 model and everything works, is model 4 loaded but it does not "know" it?
c) If it is version 3, but I have loaded version 4 without errors, how is access to each model managed?
d) The answer is contradictory in each and every one of its points. Is that normal?
e) Do I have to regenerate my token?

I think these are enough for the moment :slight_smile:

Yes, that’s GPT-4 hallucinating that it’s GPT-3.

You could set a system message to remind the model that it’s actually GPT-4.

Doc for reference and code below:

{"role": "system", "content": "You are a helpful assistant that translates English to French."}