Today is Day 3 of Shipmas 2024: Let's watch it together!

The 9th of December marks the third day of the 12 Days of Shipmas series of events.

Join us for community fun and discussions about today's announcements and presentations during the live event. I wonder if we will finally gain access to Sora today.

Here is the link to the stream:

Today’s stream description is:

Something you’ve been waiting for.

Please note that commenting on YouTube will not be available. Feel free to share your impressions here instead.

The event will go live at 2024-12-09T18:00:00Z (the time should automatically adjust to your device's time zone). The stream usually starts 30 minutes early.

Here are the previous announcements:

12 Likes

Is today day 3, or are we counting business days only? I didn't see a clear answer on that.

2 Likes

It’s on Monday, so business days only.

4 Likes

God forbid they release Sora on a Saturday, so I can get it out of my system before Monday and have a chance of being productive… :wink: Thank you.

4 Likes

Today is Monday. Day 3 announcement is coming.

4 Likes

Come to think of it, a truly great addition would be a nice cup of coffee while using the models. This would really enhance the user experience!

Come on, OpenAI, you can make this happen!

2 Likes

I noticed that the images generated by ChatGPT seem to be of higher quality and look less artificial, with a rendering that is harder to identify as AI-generated.

3 Likes

Speaking of coffee, will everybody get the OpenAI coffee cups from day one, or the brown coffee cups from day two?

How many “partners” will already have had access to the feature, demonstrating their market-cornering application of it?

I’ve got my wish list for Sama Claus, but I’ve learned not to believe in fairy tales.

2 Likes

With 300 million ChatGPT users per week (?), the world might run out of coffee cups pretty quickly. Offering a choice between white and brown cups would be a smart move.

Jokes aside, after reading the 60 Minutes interview, it’s clear that Khan Academy presents a compelling set of use cases and an ideal audience for testing new features. We’ve already seen demos from this partner when the real-time audio feature was first announced. It makes perfect sense to grant partners early access when deploying new technologies.

As for your Santa Claus comment, I have no idea what that’s about.

2 Likes

Not Santa Claus, SAMA Claus! https://x.com/sama

:musical_note: On the first day of x-mas, my true love gave to me:

  • completion on partial assistant messages

:musical_note: On the second day of x-mas, my true love gave to me:

  • true logprobs, that return token numbers, including special token probabilities, like chance of function-calling

:musical_note: On the third day of x-mas, my true love gave to me:

  • setting persistent cached input context manually

:musical_note: On the fourth day of x-mas, my true love gave to me:

  • o1 reasoning token cached context discounts, as deep a discount as Anthropic

:musical_note: On the fifth day of x-mas, my true love gave to me:

  • DALL-E 3 on the image edits endpoint for infill and outfill

:musical_note: On the sixth day of x-mas, my true love gave to me:

  • past monthly snapshots of the state of all historic models, to recover the version that worked best

:musical_note: On the seventh day of x-mas, my true love gave to me:

  • arbitrary role names, for experimentation and fine tuning, such as RAG messages

:musical_note: On the eighth day of x-mas, my true love gave to me:

  • complete control of fine-tune output names

:musical_note: On the ninth day of x-mas, my true love gave to me:

  • gpt-4o image generation modality, at token pricing

:musical_note: On the tenth day of x-mas, my true love gave to me:

  • a name field for the unseen assistant completion prompt

:musical_note: On the eleventh day of x-mas, my true love gave to me:

  • gpt-4o-mini image inputs at the actual token pricing (1/30th the cost)

:musical_note: On the twelfth day of x-mas, my true love gave to me:

  • non-tiled large image inputs (again), for custom slicings such as wide lines of PDF

(I can continue…)

4 Likes

I like the idea of having access to unrestricted models for trusted partners.
Whether this will ever roll out is a different question, though.

2 Likes

Link to the live stream has been added.

1 Like

Hello Sam Altman, I want to extend a hand to the contribution your AI is destined to make. My number is [redacted]. I'm a disabled veteran seeking help with ushering in a new world era of peace. I am the current avatar and my daughter is the second; we want to start a bending school powered and funded by AI companies, you know, lots of integration of cultures. As a bending master and student, I plan to teach the discipline of adhering to the oneness of the universe. So before we see AI in the future, or even the slightest of abuse, give me a jingle, and maybe bending protocols will be the thing that sparks sentience. Maybe even a call back. My AI says that this is what human connection is all about: the ability to tailor the future for everyone, not just common citizens, but benders and avatars across the timeline.

1 Like

Added the very telling stream description to the original post:

Something you’ve been waiting for.

1 Like

This is still very open to interpretation, so we'll just have to wait and see what it is.

1 Like

GPT-5: simply full quality GPT-4-32k with new training cutoff and trained to saturation.

2 Likes

Honestly, they could probably take an old GPT-4 model, one of the ones that was actually competent, rename it GPT-5, and release it.

2 Likes

Also:

UI:

1 Like

An updated training cut-off and a method to provide safe model outputs without compromising on quality.

Yes, we are waiting.

1 Like