It’s already been an exciting journey with the previous announcements, and today marks the next big reveal.
Join us for some community fun and engaging discussions about today’s announcements and presentations during the live event.
Here’s the link to OpenAI’s YouTube streams:
The event will go live at 2024-12-10T18:00:00Z (the time should automatically adjust to your device’s time zone). Note that the stream usually starts 30 minutes early, and the link above will be updated accordingly.
Please be aware that commenting on YouTube will not be available, but feel free to share your impressions here instead.
I expect something for developers again. My best guess is a Gemini 1.5 Flash 8B-like model, because that’s something OpenAI doesn’t offer yet and it seems to be heavily used among developers.
I definitely wouldn’t complain about a Flash 8B-like model, but I’m really hoping MCP support for the desktop app is one of the gifts Santa brings for the 12 days of Shipmas. I finally got around to using it with Claude, and I am convinced.
A video feature for Advanced Voice Mode? That would be nice.
I think there was a demo showing this when Advanced Voice Mode was first presented; it would certainly be a game changer. Maybe limited time for Plus users and unlimited time for Pro users.
I’ve tried logging into Sora.com a couple of times over the past day, but it’s still showing as ‘temporarily unavailable.’ I completely understand that demand can exceed expectations during a rollout, especially for something as exciting as this. That said, I wonder if you might consider leveraging tools like ChatGPT to refine your demand forecasting and customer communications protocols.
For instance, for those of us on paid plans, a simple email acknowledging the situation would go a long way. Including a link to a queue system or a way to schedule access could also help reduce frustration and ensure people don’t waste time trying repeatedly. I anticipated some hiccups and held off before attempting to log in, but it’s likely others haven’t had the same experience and may feel discouraged.
What would be really cool is a drop of best-in-class open-weight models, like 1B, 3B, and 7B variants, Mistral style. Literally a Santa moment. Maybe that will come at the end? Strategically, it would be awesome: hitting both ends at the same time, with a paid API for one set of users/customers while reeling in another set of users/customers away from Meta and co.
I don’t know, but I don’t think so. I’ve built a “canvas”-like feature on my own in the past, and it’s the secret sauce behind a lot of “Devin”, “Replit auto-coder AI”, and other automated feature-coding software. Personally, I don’t like the performance: it either gets expensive and inefficient, or it’s very inefficient if it’s using open-source models. But I find it interesting that OpenAI is working toward that.