Day 6 of Shipmas: Halftime is today

It’s halftime for the 12 days of Shipmas.
With the grand finale scheduled for the 20th, I imagine we’ll already see happy faces once the API goodies are finally released.

Join us for some community fun and engaging discussions about today’s announcements and presentations during the live event.

Here’s the link to OpenAI’s YouTube stream:

The event will go live at 2024-12-12T18:00:00Z (the time should automatically adjust to your device’s time zone). Note that the stream usually starts 30 minutes early, and the link above will be updated accordingly.

Here are the videos of the previous announcements and a link to the comprehensive FAQ:

12 Days of OpenAI - Release Updates

10 Likes

Go big or go small seems to be the question on a lot of minds.

I’m always happy for a new Flagship model, but I do see value in snappy smaller models you can train for tasks.

BTW, thanks so much for doing these threads and tying them all together!

Everyone, can we keep it (mostly) on topic today?

Or at least...

…not just the corpo-griping you see on Reddit, Twitter, etc.! :sweat_smile:

Seriously, though, ready for a good half-time thread!

4 Likes

A staff member, whose name I forget, said that today’s announcement was a big filler, so hopefully we get something big tomorrow.

:melting_face: :face_holding_back_tears:

Best Xmas ever. Halftime was so full of emotions, I’m ready for it.

Tonight we need a drink. What a day!

1 Like

This is a good point — it would be nice.

Bro, did your OpenAI source happen to say how much more filler there is? After yesterday’s announcement, I’m expecting today will be all about how Chatty McChatface can mirror the UI, because at OpenAI they love left-handed people.

I still think full Omni is coming, like, day 9 or something. I think they’ll end with some agent sort of thing. Maybe sneak in some kind of custom voice feature, but not cloning. Maybe an API price cut on the little models.

Google stole the show yesterday. Deep Research is cool AF, and with how sexy 2.0 Flash is, I’m drooling thinking about what a 2.0 Pro with Deep Research will be like.

I’m a little concerned about the new pricing tiers. Have Plus members lost anything as a result? Because the gulf between $20 and $200 is huge and difficult to reconcile with the “benefit humanity” part of the mission.

1 Like

Question for the thread: What’s been the biggest reveal so far for you personally?

Follow-up question: What do you hope to still see?!

Sora

A new model — and honestly, I hope to see something no one would ever expect, but that’s just awesome.

3 Likes

Free Figure 02 for all households worldwide! :sweat_smile: :sweat_smile: :robot: :robot: :strawberry: :robot:

2 Likes

I mainly want a new model so they discount gpt4-turbo :stuck_out_tongue_closed_eyes:

4 Likes

I’m not just waiting for new features. I’d also love it if they announced updates ahead of time to let users know, so they can plan their work, studies, or blah blah… without being interrupted.

For example, they can send an email to all users something like this:

Heads up! We’re updating HeyYouGPT on the 33rd day of the 13th month at 25:61 o’clock. Our servers will be down for about 45 minutes.

So, hundreds of people won’t flood the community with:

Hey, what’s going on? HeyYouGPT and the HeyyPeeAyy are down! What’s happening?

Complaints won’t grow into an avalanche and reach here. Instead, everyone can enjoy their cookies and coffee, waiting for the new update with excitement, and no heart attacks required!

1 Like

There is a list of names on the YouTube stream. Most of them are product people, but one is Rowan Zellers, who is “studying realtime multimodal - vision & language & sound”.

I think we might be getting that today.

1 Like

Good catch! Thanks for sharing with us.

We’ve got such a great community here.

My wishlist:

  • Voice controls for RealTime API
  • Visual capabilities for AVM (or… AVVM??.. AMMM??? ) & RealTime API. Preferably actual video capabilities and not screenshots (or at least smart token compression through frame redundancy reduction)

My fantasy wishlist:

  • Fine-tuning for RealTime API
  • Reduced cost

1 Like

I really hope this is not in the realms of fantasy. :sweat_smile:

2 Likes

They’ll be paying us in a few years! :wink:

1 Like

Revealed: It’s been Chekhov’s coffee mug for the past five days.

AI subtle reference decoder

In the previous day’s product announcement videos, coffee cups of different colors were distributed across the table used for demonstrations, serving as props that were not actively utilized. However, on product release day six, these cups finally became relevant during a product demonstration showcasing AI computer vision technology. The demonstration involved a tutorial on how to make coffee, and the cups were integral to illustrating the AI’s capabilities. An observer wittily referred to this as “Chekhov’s coffee mug.”

This phrase is a nod to Chekhov’s Gun, a dramatic principle articulated by the Russian playwright Anton Chekhov. The principle asserts that every element in a narrative should be necessary, and irrelevant elements should be removed. In other words, if a gun is introduced in the first act of a play, it should be fired by the end of the performance. Similarly, the previously unremarkable coffee cups gained narrative significance, fulfilling their implied purpose within the product demonstration, thereby adhering to the principle.

2 Likes

Wish granted, my dude!

I was more hoping for Omni image generation but video in is pretty dope.

2 Likes

Hm, did I miss it? I don’t see day 6…