It seems that OpenAI's GPT-4.5 Turbo has leaked: search engines such as Bing and DuckDuckGo have indexed its product page. The link itself still leads to an error page.
The model is said to have a context window of 256K tokens, twice that of GPT-4 Turbo, and a knowledge cutoff of June 2024. It is also said to be OpenAI's fastest, most accurate, and most scalable model to date.
There have been rumors about GPT-4.5 Turbo since December. The leaked teaser text does not provide any information about possible video or 3D capabilities.
For what it's worth: one thing I have been observing since the weekend is GPT-4 Turbo's ability to produce much longer outputs than usual.
I've played around with a few different prompts, including some quite simple ones without any additional context, and I suddenly get 1,500+ word responses in one go, which was quite difficult to achieve before…
The output length is a constant up and down: over the last month I got up to 3K tokens almost reliably without any additional effort, and now I am back to the baseline.
Whatever you want to do with long model replies that doesn’t need to be production ready: do it now!
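If you want to spot-check for yourself how long the replies currently get, here is a minimal sketch using the official openai Python package (v1.x). The model name, prompt, and max_tokens value are placeholders, not the exact setup I used; adjust them to whatever GPT-4 Turbo variant you have access to.

```python
# Minimal sketch for measuring how long a GPT-4 Turbo reply is.
# Assumes the official `openai` package (v1.x) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4-turbo-preview",  # placeholder model name
    messages=[
        {
            "role": "user",
            "content": "Write a detailed, multi-section essay on the history of typography.",
        }
    ],
    max_tokens=4096,  # give the model room for a long reply
)

reply = response.choices[0].message.content
usage = response.usage

print(f"Words:             {len(reply.split())}")
print(f"Completion tokens: {usage.completion_tokens}")
print(f"Finish reason:     {response.choices[0].finish_reason}")
```

Logging the completion token count and finish reason over a handful of runs makes it easy to see whether the long-output behavior is still there for you or whether you are back at the baseline.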