GPT-4.5 is live in the API!


Is this enough to be eligible for “If your organization is eligible for the offer, we’ll provide free usage of GPT-4.5 on traffic shared with OpenAI, up to 1 million tokens per day.”?
We are a Tier 3 OpenAI user at the moment (most of our actual traffic goes via OpenRouter, so our OpenAI usage is low).

1 Like

Based on my understanding, the settings shown in the screenshot above are not sufficient.
It seems some accounts can opt in to share their communication content with OpenAI to help improve the models, which lets them use the model for free, while other accounts do not have this option.

If the account in question is eligible for free model usage through opting in, then, as in the screenshot I attached, you should see a message like, “You’re eligible for free daily usage of gpt-4.5-preview, gpt-4o, o1, gpt-4o-mini, o1-mini and o3-mini.”

By enabling the “Enabled for all projects” option below that message, you can use the model for free (within a certain token limit) while sharing your communication content with OpenAI—that is my understanding.

1 Like

Please keep it in the API; my SaaS project is giving better outputs with it than with GPT-4o.

5 Likes

GPT-4.5 is much more useful when giving me feedback on philosophically tricky frontier neuroscience modelling/theorizing problems. It would be a significant loss to my productivity if I had to go back to GPT-4* or Claude 3.7, so I hope you continue serving it in the API. I’d gladly pay more for it.

5 Likes

Hey everyone, I’ve been testing out GPT-4.5 (the latest beta), specifically integrating it with multi-agent pipelines, but it’s proving very difficult in practice. The GPT-4.5 beta model’s response speed is significantly slower—even compared to dedicated reasoning models—leading to constant timeout issues. Even after increasing my API timeout settings from 30 seconds to 60+ seconds, these frequent timeouts persist when chaining agent calls together.
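
For reference, this is roughly what my client setup looks like right now (a minimal sketch with the Python SDK; the timeout, retry count, and model name are just placeholder values for illustration, not anything official):

```python
from openai import OpenAI

# Client-wide timeout and automatic retries (placeholder values).
client = OpenAI(timeout=120.0, max_retries=3)

# Streaming keeps the connection alive and lets the next agent in the chain
# start consuming tokens before the full completion has finished.
stream = client.chat.completions.create(
    model="gpt-4.5-preview",
    messages=[{"role": "user", "content": "Summarize the previous agent's output."}],
    stream=True,
)

for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```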

Has anyone else experienced this? I’d love suggestions on strategies or optimizations that have worked for you—or if perhaps the timeout issue is being looked at or addressed in future GPT-4.5 updates. Thanks!

1 Like

Hello, great news! What is the training cut-off date?

1 Like

Did you look at the documentation before asking that?

https://platform.openai.com/docs/models#gpt-4-5

There is “what OpenAI states” - and then “what the AI model knows”…


We want to use 4.5 for mental health consultations (screening, diagnosis, and health coaching). We would also appreciate API access to o1.

3 Likes

This isolates the value elegantly. OP, it would be useful to have a definition and/or Q&A examples of the prompt space where 4.5 covers gaps not covered by other base models (although that’s a big space), AND, even more interesting, where reasoning steps don’t help and other factors have been denoised (human sensory context).

1 Like

That would be kind of hard to do. It has both qualities and unexpected blind spots, and with the array of models available, it’s hard to find one that suits every scenario, especially for API use where customization is the goal.

What can GPT-4-turbo do where GPT-4.5 basically falls flat with bad guesses until you present the riddle with as much specific detail as a Jeopardy question?

Who is a mallard’s favorite 1930s head football coach from Yale? (or other phrasing)

only gpt-4-turbo: “Ducky Pond” (Raymond W. “Ducky” Pond)
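
(If you want to reproduce this kind of side-by-side check, here’s roughly the harness I use; a quick sketch with the Python SDK, where the model names and prompt are just the ones from this example.)

```python
from openai import OpenAI

client = OpenAI()

prompt = "Who is a mallard's favorite 1930s head football coach from Yale?"

# Run the same riddle against both models and compare the answers side by side.
for model in ("gpt-4-turbo", "gpt-4.5-preview"):
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"{model}: {response.choices[0].message.content}\n")
```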

Eating - a profound human discovery?

user: It must have been quite the genius millennia ago that came up with the idea, “if you jam plants and dead animals in this hole in your head, you’ll live longer”…

GPT-4.5 (ChatGPT): Indeed! It’s quite fascinating when you frame it that way—early humans observing the world around them and somehow realizing, “Hey, putting these leaves, roots, and animal bits into this opening in my face keeps me going!”

If you think about it, this seemingly obvious insight today was actually quite profound when first discovered. Early humans had to:

  • Observe carefully: Recognizing that eating certain substances reduced hunger, provided strength, and improved health.
  • Experiment courageously: Determining through trial and error what was edible, nutritious, toxic, or medicinal—often at significant risk.
  • Communicate effectively: Sharing these insights within their community, ensuring survival knowledge spread across generations.
  • Adapt intelligently: Developing tools, cooking methods, preservation techniques, and eventually farming and hunting strategies.

In a sense, food selection and consumption represented humanity’s first foray into scientific experimentation and innovation—the very foundation of our development and civilization. Whoever first made that cognitive leap, or perhaps a collective series of incremental leaps over generations, certainly deserves recognition as an unsung genius of the distant past.

Really sucks as far as I can tell so far; somehow 4o keeps getting worse as well…

1 Like


What kind of business makes a profit plugging into an API with tokens this expensive? I can quickly blow through 30K tokens in a few “add X to inventory” function calls in my app with 4o-mini. You’re going to use tokens much faster in the sort of use case this model is designed for.
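
To put rough numbers on that (using the gpt-4.5-preview list prices I’ve seen quoted, $75 per 1M input tokens and $150 per 1M output tokens; treat those as my assumption and check the current pricing page):

```python
# Back-of-the-envelope cost per request (prices are assumptions, see above).
INPUT_PRICE_PER_M = 75.00    # USD per 1M input tokens, gpt-4.5-preview
OUTPUT_PRICE_PER_M = 150.00  # USD per 1M output tokens, gpt-4.5-preview

input_tokens, output_tokens = 30_000, 1_000

cost = (input_tokens / 1_000_000) * INPUT_PRICE_PER_M \
     + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M
print(f"~${cost:.2f} per call")  # ~$2.40 for 30K in / 1K out
```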

2 Likes

One of my use cases is a very personalized chatbot assistant that I’ve been working on for about a year now. It uses a combination of a system prompt and a prompt template, which together make up around 2.5k tokens at the moment.
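
(Roughly, the setup looks like the sketch below; the model, prompt text, and template are placeholders for illustration rather than my actual prompts, which are much longer.)

```python
from openai import OpenAI

client = OpenAI()

# Static system prompt plus a per-message template; together these stand in
# for the ~2.5k tokens mentioned above (real contents are placeholders here).
SYSTEM_PROMPT = "You are a personalized assistant. Persona and rules go here."
PROMPT_TEMPLATE = "Context about the user:\n{user_context}\n\nUser message:\n{message}"

def chat(user_context: str, message: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": PROMPT_TEMPLATE.format(
                user_context=user_context, message=message)},
        ],
    )
    return response.choices[0].message.content

print(chat("Prefers concise answers.", "What's on my schedule today?"))
```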

In the beginning, my options were GPT-3.5-Turbo, GPT-4, GPT-4-32k and GPT-4-Turbo. It was very clear back then that GPT-3.5-Turbo was out of the question and that GPT-4-32k was the best fit by a landslide.
GPT-4 had a rather small context window, making it unusable in the real world. GPT-4-32k felt mostly the same as GPT-4 but with a much larger context window (still a bit small if conversations drag on but quite usable). It was a clear winner but, my god, the price.
GPT-4-Turbo was a faster, cheaper model with a much larger context window of 128k tokens. Unfortunately, though, it felt way dumber compared to GPT-4-32k.

I was ready to bite the bullet and just use GPT-4-32k but, luckily, this was around the same time GPT-4o was released. GPT-4o seemed definitely smarter than GPT-4-Turbo while also being cheaper and faster. It wasn’t on GPT-4-32k levels of intelligence, but it was definitely something I could work with. I rewrote a lot of my prompt to be tailored to GPT-4o and called it a day. To this very day, I continue to use this model. It’s not perfect, by any means, but I like it a lot.

No model seemed to compare to what I managed to achieve with the GPT-4 (32k) models, though. Until now. GPT-4.5 seems to be the first model that is an actual improvement over the GPT-4 models rather than a model offering tradeoffs in terms of performance. It is slower than GPT-4o, of course, but it’s fast enough for me in this use case. Unfortunately, the price reflects the performance improvement versus GPT-4-32k, making it too expensive to justify a jump from GPT-4o. A coding session with it would likely set me back $30 lol.

However, there is a lot of value in this model. I’m sure that o1 and o3 would beat it in big coding tasks and such, but if I needed a reasoning-enabled chatbot, I’d much rather make my own, and a model like GPT-4.5 is definitely an option for an absolutely crazy implementation of that. A new ceiling, if you will.

2 Likes

GPT-4.5 cost management using Minions (good for your Batch use cases): HazyResearch/minions: Big & Small LLMs working together

1 Like

What is your specific use case? I mean, what do you do with it?

1 Like

After extensive testing: this is, even without reasoning, OpenAI’s best model. Excellent for writing/text work. Light years beyond 4o. But that was also sorely needed (…). So, yes, please keep it up!

3 Likes

People are giving this model a lot of grief, but on its first go it solved a technical issue I had faced for nearly a year, one that seven other AI models, including 4o, o1, and o3-mini-high, could not solve despite hours of attempts.

Unlike the rest, which gave tons of bog-standard advice and attempts at similar workarounds, 4.5 suggested something obvious but unique, plus additional fixes to tighten up the weaknesses of the solution, all of which worked.

Phenomenal model, I thank you.

4 Likes