Having actual OpenAI reps pay attention to the forum and confirm that you’re at least aware of our problems and feature requests is the most important announcement you could make. At the bare minimum, we should not have to speculate blindly about critical performance issues in the API, which directly affect the viability of integrating OpenAI services into an existing platform. While a GPT-5 or whatever would be fun, it doesn’t matter for a business if unreliability results in a bad experience for the end user.
OpenAI is currently prepping the next generation of its o1 reasoning model, which takes more time to “think” about questions users give it before responding, according to two people with knowledge of the effort. However, due to a potential copyright or trademark conflict with O2, a British telecommunications service provider, OpenAI has considered calling the next update “o3” and skipping “o2,” these people said. Some leaders have referred to the model as o3 internally.
The startup has poured resources into its reasoning AI research following a slowdown in the improvements it’s gotten from using more compute and data during pretraining, the process of initially training models on tons of data to help them make sense of the world and the relationships between different concepts. Still, OpenAI intended to use a new pretrained model, Orion, to develop what became o3. (More on that here.)
OpenAI launched a preview of o1 in September and has found paying customers for the model in coding, math and science fields, including fusion energy researchers. The company recently started charging $200 per month per person to use ChatGPT that’s powered by an upgraded version of o1, or 10 times the regular subscription price for ChatGPT. Rivals have been racing to catch up; a Chinese firm released a comparable model last month, and Google on Thursday released its first reasoning model publicly.
O3 with agentic capabilities & built-in “tasks” management would make my day but I’m probably asking way too much lol
In all seriousness, for my startup’s needs at least, two things would make us happy for today’s announcement:
A model that is better at calling the right function at the right time without a latency tradeoff (o1 is good but too slow; gpt-4o is fast, but when it needs to call the right 10 functions in the right order to perform a task correctly, it crumbles)
An equivalent of Claude Computer Use, but better and cheaper
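For what it’s worth, the “right 10 functions in the right order” part is half a client-side problem: once the model emits tool calls, your code has to dispatch them in sequence and feed results back. A minimal sketch of that dispatch loop, assuming the chat-completions `tools` schema; the `lookup_order` tool and its handler are made-up names for illustration, not a real API:

```python
import json

# Tool schema in the chat-completions "tools" format.
# The tool name and handler below are hypothetical.
TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "lookup_order",
            "description": "Fetch an order record by id.",
            "parameters": {
                "type": "object",
                "properties": {"order_id": {"type": "string"}},
                "required": ["order_id"],
            },
        },
    },
]

# Local handlers keyed by tool name.
HANDLERS = {
    "lookup_order": lambda order_id: {"order_id": order_id, "status": "shipped"},
}

def dispatch(tool_calls):
    """Execute model-issued tool calls in order and collect their results.

    Each call is {"name": ..., "arguments": "<JSON string>"}, mirroring
    the shape the API returns; arguments arrive JSON-encoded.
    """
    results = []
    for call in tool_calls:
        handler = HANDLERS[call["name"]]
        args = json.loads(call["arguments"])
        results.append(handler(**args))
    return results
```

So the model’s job is only to pick the names and arguments; ordering and execution live in `dispatch`, e.g. `dispatch([{"name": "lookup_order", "arguments": '{"order_id": "A1"}'}])` returns the handler’s record for order `A1`.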
I’m not sure if it makes sense to expect an o3 model today. We’ve only just received the full o1 on the first day of Shipmas, and it hasn’t even fully rolled out yet. Only a fraction of developers have API access to o1. It’s possible we’re entering a new preview for o3, but it’s more reasonable to assume a new variant of the ‘Omni’ models—maybe 5o or 4.5o, and a mini variant.
The total surprise would be a locally deployable GPT-3o.
OpenAI Announces Open-Sourcing GPT-3: A New Chapter in AI Research Collaboration
OpenAI is excited to announce a major step forward in our mission to ensure that artificial general intelligence benefits all of humanity. Today, we are open-sourcing GPT-3, one of the most influential language models to date, by releasing the model weights for all four of its base configurations:
Ada (350M parameters)
Babbage (1.3B parameters)
Curie (6.7B parameters)
Davinci (175B parameters)
In addition to these base models, we are also releasing the weights for text-davinci-003, the fine-tuned instruct variant designed to align with human instructions and preferences.
By making these resources publicly available, we aim to foster a new era of transparency, collaboration, and innovation in AI research and development.
Why Open-Source GPT-3?
Since its debut, GPT-3 has demonstrated unprecedented capabilities in natural language understanding, generation, and contextual reasoning. It has been integrated into countless applications, advancing fields from creative writing to scientific research. However, as stewards of AI development, we recognize the importance of making cutting-edge technology accessible to the wider community for several reasons:
Driving Innovation: Empower researchers and developers with the tools to build, experiment, and improve upon the foundations of GPT-3.
Promoting Transparency: Provide the AI community with a deeper understanding of large language models, their training dynamics, and limitations.
Enabling Responsible AI: Collaborate with the broader community to address challenges such as bias, fairness, and safety in AI applications.
What’s Included in the Release?
Along with the model weights, we are providing:
Comprehensive Documentation: A detailed guide to the models, their architectures, and usage best practices.
Pretrained Tokenizers: To facilitate efficient text processing and input handling.
Inference Tools: Scripts and utilities to streamline deployment across a variety of platforms.
Community Forum Category: A platform to discuss ideas, share findings, and collaborate on advancements in innovative uses of these new community models.
Join Us in Building the Future of AI
We believe that AI research is at its strongest when it is conducted in the open. By releasing GPT-3, we hope to inspire the next generation of breakthroughs and to empower researchers everywhere to address real-world challenges in a meaningful way.
OpenAI remains committed to advancing the field responsibly, and we encourage the community to explore, collaborate, and innovate with these models.
Together, we can shape a future where AI serves the interests of humanity.
gpt-4o-turbo-preview? o2-preview? gpt-4o-5-preview? gpt-5o-preview? hahahah I’m terrible at guessing, but it doesn’t seem like a gpt-5 level event based on the clue… hm, people on X seem to think it’s o3, since o2 is a brand in the UK, or gpt3o… interesting. it would be nice to have another “international competition” for early access to a new model, like gpt-4 with the evals repo…
Sometimes a ship full of Christmas presents just happens to drag anchor, rip up a couple of undersea cables, and then get boarded by four different Viking tribes at the same time.
It’s getting shipped alright, no one said anything about stuff getting delivered
To their credit, they sorted the Sora stuff fairly rapidly; let’s hope they can sort the o1 API backlog soon as well. It feels like they’re getting better at this stuff.
naaa, i also feel like if it’s awesome, like a really new model, they’ll just tease it; and if it’s just a facelift, like unlimited advanced voice mode or premium mode for everyone till the end of the year, they’ll give it instantly