GPT-4.5 Expected Release Date?

Do we have an expected date when the new model is coming out? GPT-4 and GPT-4o are terrible and have fallen so far from grace. :smiling_face_with_tear: I will say that ever since the company decided to implement a filter, their quality has drastically gone downhill in terms of writing. Praying that something good will happen soon.

Nope lol

The good-ish news is that they’ve extended the life of the original (expensive) GPT-4 models until around mid 2025.

It currently looks like OpenAI is optimizing for cost (which is what enterprise customers seem to be asking for) rather than prowess, so I wouldn’t be surprised if GPT-5 was a big disappointment for most of us.

Of course it goes without saying that I would be happily surprised if I was wrong!

You would think their priority would be fixing the models they have so they aren’t garbage :smiling_face_with_tear:

As a former corpo, I have stopped expecting this to be true :laughing:

OpenAI has joined the big-business universe, lol.
Announcements of anything are business strategy.

So you have to deal with timing, choose what to announce, whether before or after it's ready, generate expectations, etc., all of it to generate interest from investors, beat the competition, and so on.

So public timelines and expected dates like these probably won't happen. In the age of real-time feeds, people are hungry for updates, and many businesses depend on announcements and updates to maintain engagement.

This needs to be surgically planned: not too much news, not too little, and not all at once.

It seems obvious to me, given the developments with Siri, Microsoft's Copilot, and GPT-4o's desktop app, that GPT-4o is evolving towards having a local LLM that users teach to perform all their tasks. The learned tasks will be uploaded to the large model in the cloud for distribution to other users' local LLMs and their accompanying memories. This will enhance users' data security and enable them to carry out more labour-intensive tasks. At the same time it will shift a lot of the compute from OpenAI's servers to users' computers, and that will free up enough compute for GPT-4o to start learning dynamically.

The result will be that GPT-4o will learn all user tasks within 6 to 24 months and thereby meet many definitions of AGI. The key is going to be having GPT-4o users teach it how to use all their software and perform all their tasks. To start out, red teams will need to teach it to perform the most popular tasks, and this will take a while.

Going down the path Microsoft seems to be taking, a huge local LLM that requires a terabyte of disk space and its own AI chip, looks like a blind alley to me. They seem to be attempting to have the LLM learn everything on its own, and Devin has shown that's not a good way to go. All learning is collaborative, whether it's animal, human, or AI; this is why feral children do not learn. Think back to the beginnings of AI at Bletchley Park, where Colossus and the Bombes required cribs from human codebreakers to crack Lorenz and Enigma messages in a timely fashion. Leveraging users to teach AI to become AGI makes perfect sense.

I understand where you're coming from on the perspective of Devin's lack of functionality, but I want to know what you think about certain Q* papers.

Bernt Bornich, the CEO of OpenAI’s robotics partner 1X, gives the best explanation I’ve heard of what Q* is in this video, starting at the 37 minute and 40 second mark, and getting fully explicit around the 39 minute and 9 second mark. https://www.youtube.com/watch?v=nkWANooIc1o&t=2s

When will free users get access to GPT-4?

Hi @senguptasapravo2011 :wave:

Welcome :people_hugging: to the community!

You can already access the GPT-4o model on a free account, but there are some limits. Please see the following topic.

Is there a need for GPT-4.5, or 5, or 6?

From my point of view, I expect them to do more for current versions:

  • updated documentation that maps the API structure for streaming (sometimes I think they offer so many ways just to create confusion, because some of them are broken → through the Assistants API, the response is not sent after submitting tool outputs, and it only works on their server; the same goes for SSE, Server-Sent Events, for streaming real-time information…); see the streaming sketch after this list
  • updated documentation with better examples for noobs like me :wink:
  • structured outputs (we can see the potential in the o1 versions; we need to hammer on them, because there is huge potential here, and they will be high performers in a multi-agent system if we can customize the chain of thought, or the way of thinking, to do data granulation, and more); see the structured-output sketch after this list
  • maybe new onion models o2, o3
  • coming from a $100+ billion company, they could share some chat templates (but not those trash chats they have now, with simple chat completions that can't even render Markdown). Or maybe they are afraid of giving too much to the competition… I saw Claude fighting for months to render advanced equations in chat… but either they're lazy or the policy is "we do not care, we just impress the sheep for the stock price"
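
For the streaming point above, here is a minimal sketch of SSE-style streaming using the official OpenAI Python SDK, assuming a `gpt-4o` model and an `OPENAI_API_KEY` in the environment; it shows plain chat-completion streaming, not the Assistants tool-output flow that the bullet says is broken.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Request a chat completion as a stream of SSE chunks.
stream = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Explain SSE in one sentence."}],
    stream=True,
)

# Each chunk carries an incremental delta; print tokens as they arrive.
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()
```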
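
And for the structured-outputs point, a minimal sketch using the SDK's structured-output helper with a hypothetical Pydantic schema (the `Step`/`Reasoning` models and the prompt are made up for illustration):

```python
from openai import OpenAI
from pydantic import BaseModel

client = OpenAI()

# Hypothetical schema: the model must return its answer in exactly this shape.
class Step(BaseModel):
    explanation: str
    output: str

class Reasoning(BaseModel):
    steps: list[Step]
    final_answer: str

completion = client.beta.chat.completions.parse(
    model="gpt-4o-2024-08-06",  # a model with structured-output support
    messages=[
        {"role": "system", "content": "Solve the problem step by step."},
        {"role": "user", "content": "What is 23 * 17?"},
    ],
    response_format=Reasoning,  # the SDK converts this into a JSON schema
)

parsed = completion.choices[0].message.parsed  # a Reasoning instance, not raw text
for step in parsed.steps:
    print(step.explanation, "->", step.output)
print("Answer:", parsed.final_answer)
```

In a multi-agent setup, each agent's schema becomes its contract, which is roughly the kind of customization the bullet is asking for.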