Are we already testing GPT-4.5?

Found it on Reddit and decided to test it, seems legit.


Oh well…



Requesting the system prompt from the Android app, I got this well-known response:

“You are ChatGPT, a large language model trained by OpenAI, based on the GPT-4 architecture. You are chatting with the user via the ChatGPT Android app. This means most of the time your lines should be a sentence or two, unless the user’s request requires reasoning or long-form outputs. Never use emojis, unless explicitly asked to. Never use LaTeX formatting in your responses, use only basic markdown. Knowledge cutoff: 2023-04 Current date: 2023-12-17”

That does not mean some users aren't inadvertently testing another variant of the model, but it's likely not true for all of us. The best argument for that, I suppose, is that we would have gotten a huge announcement first.


Could be, or it could also be a hallucination; the response in my last screenshot sounds more like a commercial than a system/instruction prompt.



Yeah, this was floating around Twitter too.

I think this all started from that GPT-4.5 API hoax someone did.

Also, it’s as if people suddenly forgot that before the April 2023 knowledge update, it confidently claimed multiple times its knowledge cutoff was Jan 2022, when it was still September 2021. Because of that, I don’t understand why people find its own claims about details like this trustworthy.


With 100 million users per month there is always somebody new who will get tricked by the model into believing that it actually knows. Haven’t we all been there, once at least?


Or when I got frustrated at the waning AI capabilities back in October and thought the AI needed to be properly informed. That now actually seems like a prognostication.

Now no longer with the ability for you to continue chatting, though.

(chat share previously auto-moderated and 404’d, so enjoy again)

The hallucination is likely the result of more fine-tuning on chat completions endpoint programming, with the AI conflating the gpt-3.5-turbo therein with the ChatGPT system prompt. It also makes up models like text-davinci-004.


Very true. Can’t argue with that.

And as to _j's point: yeah, I was wondering if some kind of fine-tune with formatting or something else had an unintended side effect.

I think a major problem is that usage of these models varies so greatly, let alone individual usage, that what might seem like help in one field might limit another. The balance between creative flexibility and accuracy is a very, very fine line that isn't easily observable.

Regardless, I am curious how they are actually going to solve the current problems. I'm starting to notice its performance dip too. OpenAI has acknowledged it, and most of us are aware of it. The question is how they are going to resolve it, if they have even identified the problem yet.


I just got a two-thousand-token reply from ChatGPT 4, so this part is fine. Regarding my custom instructions, I had to adjust my prompts once more but am actually back to normal as of today.
Maybe when they announced that they had new GPUs on the 13th of this month, that was actually the trough of the valley?

Edit: adjusted the number of tokens.


Interesting. You know, someone theorized they did some kind of increased bit quantization since the Nov 11 update (or somewhere around there), causing the model to perform worse.

Considering what you just mentioned, that earlier theory might actually check out: if they have more GPUs, they don't need quantization as heavy to keep up with demand and save on GPU resources.
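To make the quantization theory concrete: lower-precision weights shrink a model's memory footprint (and the bandwidth/compute needed to serve it), at the cost of some rounding error. The sketch below is purely illustrative, a textbook symmetric int8 post-training quantization in NumPy; it says nothing about what OpenAI actually runs, it just shows where the resource savings come from.

```python
import numpy as np

# Illustrative sketch only: symmetric per-tensor int8 quantization of a
# float32 weight matrix. Not a claim about OpenAI's serving stack.

def quantize_int8(w: np.ndarray) -> tuple[np.ndarray, float]:
    """Map float32 weights onto int8 using a single per-tensor scale."""
    scale = np.abs(w).max() / 127.0
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 weights from the int8 tensor."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((1024, 1024)).astype(np.float32)

q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# int8 takes 1 byte per weight vs 4 for float32, so memory drops to 25%,
# while the reconstruction error stays bounded by half the scale step.
print(f"memory ratio: {q.nbytes / w.nbytes:.2f}")       # 0.25
print(f"max abs error: {np.abs(w - w_hat).max():.4f}")  # small, but nonzero
```

That bounded-but-nonzero error is exactly why heavier quantization can look like a quiet quality regression to end users.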


1535 = 2**10 * 1.5 - 1


I mean, I was honestly just piggybacking off what vb said. He’d need to provide that source, not me.

And maybe it is, but we also know next to nothing about their actual infrastructure (nor should we expect to). We don't know how the Microsoft deal works, nor how they handle their own GPUs. To be honest it doesn't really matter: if they re-enabled Plus subscriptions, it's obvious that they have more compute resources and can handle heavier traffic. Model quantization is all about compute efficiency, so regardless of where the extra compute comes from, they don't need to quantize the model as aggressively to save on resources.

Remember, this is why GPT-4 still has limits on how frequently you can message the model. Whatever their setup is, it still can't supply the compute that would be needed if those constraints were loosened.

Oh, and especially on something like Twitter, it could just as easily have been a typo. Who knows.

EDIT (again):

thanks for your patience while we found more gpus.

My linguistic brain just lit up and realized he is saying: "thanks for being patient while we went and found more GPUs; now that we have them, we can re-enable Plus subscriptions."


The above reply makes no sense in the context of this topic about a silly Reddit screenshot…


So in the end, it is correct that there is no GPT-4.5 language model or endpoint, right?
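One way to settle this, rather than asking the model about itself, is to list the model ids your API key can actually see and search them for "4.5". The snippet below parses a hypothetical payload shaped like the `GET /v1/models` response; the ids in it are illustrative examples, not an authoritative inventory of what OpenAI serves.

```python
import json

# Hypothetical example payload shaped like an OpenAI `GET /v1/models`
# response. The ids listed here are illustrative only.
sample_response = json.dumps({
    "object": "list",
    "data": [
        {"id": "gpt-4", "object": "model"},
        {"id": "gpt-4-1106-preview", "object": "model"},
        {"id": "gpt-3.5-turbo", "object": "model"},
    ],
})

def has_model(response_json: str, needle: str) -> bool:
    """Return True if any listed model id contains `needle`."""
    models = json.loads(response_json)["data"]
    return any(needle in m["id"] for m in models)

print(has_model(sample_response, "4.5"))  # False: no gpt-4.5 id in this list
```

The model-list endpoint reports what you can call, which is far more trustworthy than the model's own claims about its version or cutoff.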


This could have easily ended the back and forth. I captured two responses demonstrating contradictory behaviour (GPT-4.5 turbo and davinci-004), and a third response that sounded totally made up and hallucinated, possibly triggered by my prompt asking it to continue the text, almost like the old versions of chat completions.

It's fair to say that there have been a few rumors and speculations lately about "imminent" releases and updates around Christmas or New Year. I liked believing it for a moment. It felt like Christmas every time they released something new over the last year.


This topic was automatically closed 2 days after the last reply. New replies are no longer allowed.