It’s halftime for the 12 days of Shipmas.
With the grand finale scheduled for the 20th, I imagine we’ll already see happy faces once the API goodies are finally released.
Join us for some community fun and engaging discussions about today’s announcements and presentations during the live event.
Here’s the link to OpenAI’s YouTube streams:
The event will go live at 2024-12-12T18:00:00Z (the time should automatically adjust to your device’s time zone). Note that the stream usually starts 30 minutes early, and the link above will be updated accordingly.
Here are the videos of the previous announcements and a link to the comprehensive FAQ:
Bro, did your OpenAI source happen to say how much more filler there is? After yesterday's announcement, I'm expecting today will be all about how Chatty McChatface can mirror the UI, because at OpenAI they love left-handed people.
I still think full Omni is coming, like, day 9 or something. I think they’ll end with some agent sorta thing. Maybe sneak in some kinda custom voice thing but not cloning. Maybe API price cut on the little models.
Google stole the show yesterday. Deep Research is cool AF, and with how sexy 2.0 Flash is, I'm drooling thinking about what a 2.0 Pro with Deep Research will be like.
I’m a little concerned about the new pricing tiers. Have Plus members lost anything as a result? Because the gulf between $20 and $200 is huge and difficult to reconcile with the “benefit humanity” part of the mission.
I’m not just waiting for new features. I’d also love it if they announced updates ahead of time, so users can plan their work, studies, or blah blah… without being interrupted.
For example, they could send all users an email, something like this:
Heads up! We’re updating HeyYouGPT on the 33rd day of the 13th month at 25:61 o’clock. Our servers will be down for about 45 minutes.
So, hundreds of people won’t flood the community with:
Hey, what’s going on? HeyYouGPT and the HeyyPeeAyy are down! What’s happening?
Complaints won’t grow into an avalanche and wash up here. Instead, everyone can enjoy their cookies and coffee and wait for the new update with excitement, no heart attacks required!
There’s a list of names on the YouTube stream; most of them are product people, but one is Rowan Zellers, who is “studying realtime multimodal - vision & language & sound”.
Visual capabilities for AVM (or… AVVM?? AMMM???) & the Realtime API. Preferably actual video capabilities and not just screenshots, or at least smart token compression through frame-redundancy reduction, as in the sketch below.
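To illustrate what I mean by frame-redundancy reduction, here’s a minimal sketch (purely my own illustration, not anything OpenAI has described): drop near-duplicate frames before handing video to a vision model, so mostly-static footage doesn’t burn tokens on identical screenshots. The `diff_threshold` value and the mean-absolute-difference metric are arbitrary assumptions.

```python
# Keep only frames that differ noticeably from the last kept frame,
# so near-duplicate frames never reach the vision model.
# Requires: pip install opencv-python numpy

import cv2
import numpy as np

def keyframes(video_path: str, diff_threshold: float = 12.0) -> list[np.ndarray]:
    """Return frames whose mean absolute pixel difference from the
    previously kept frame exceeds diff_threshold (an assumed cutoff)."""
    cap = cv2.VideoCapture(video_path)
    kept: list[np.ndarray] = []
    last_gray = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Keep the frame only if it changed enough since the last kept one.
        if last_gray is None or cv2.absdiff(gray, last_gray).mean() > diff_threshold:
            kept.append(frame)
            last_gray = gray
    cap.release()
    return kept
```

Something like this in front of any vision endpoint would turn a minute of static screen recording into a handful of meaningful frames instead of hundreds of identical ones.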
Revealed: It’s been Chekhov’s coffee mug for the past five days.
AI subtle reference decoder
In the previous days’ product announcement videos, coffee cups of different colors were scattered across the demonstration table, serving as props that were never actively used. However, on day six of the product releases, these cups finally became relevant during a demonstration showcasing the AI’s computer vision capabilities. The demonstration involved a tutorial on how to make coffee, and the cups were integral to illustrating what the AI could see. An observer wittily referred to this as “Chekhov’s coffee mug.”
This phrase is a nod to Chekhov’s Gun, a dramatic principle articulated by the Russian playwright Anton Chekhov. The principle asserts that every element in a narrative should be necessary, and irrelevant elements should be removed. In other words, if a gun is introduced in the first act of a play, it should be fired by the end of the performance. Similarly, the previously unremarkable coffee cups gained narrative significance, fulfilling their implied purpose within the product demonstration, thereby adhering to the principle.