DevDay 2024: San Francisco - Live(-ish) News

We’ll be posting as many new updates here as we get them!

We’re as excited as all of you to see what’s in store for devs this year.


2024-10-01T16:56:00Z


2024-10-01T17:28:00Z

o1 rate limit doubled.

Realtime API

Speech-to-speech.

The Realtime API is live in the playground!


2024-10-01T17:31:00Z

Prompt builder


2024-10-01T17:41:00Z

Vision fine-tuning

50% discount on cached prompts


2024-10-01T17:44:00Z

Prompt caching

Model distillation tool

Stored completions

Evals


I think this is all for hot newness.

Go build something cool!

12 Likes

Realtime API

https://openai.com/index/introducing-the-realtime-api/
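For a feel of what the Realtime API looks like from a client: it's a WebSocket connection over which you exchange JSON events. A minimal sketch of the two basic events, based on the beta announcement; the endpoint, model name, event names, and field shapes are from the initial beta docs and may change, so treat the specifics as illustrative rather than authoritative:

```python
import json

# Beta WebSocket endpoint (open with an "Authorization: Bearer <API key>" header).
REALTIME_URL = "wss://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview"

def session_update(instructions: str, voice: str = "alloy") -> str:
    """Build the JSON event that configures a speech-to-speech session."""
    return json.dumps({
        "type": "session.update",
        "session": {
            "modalities": ["text", "audio"],
            "instructions": instructions,
            "voice": voice,
        },
    })

def response_create() -> str:
    """Ask the server to start generating a (spoken) response."""
    return json.dumps({"type": "response.create"})

# In a real client you'd send these over the open websocket, then stream
# the audio/text deltas that come back as server events.
event = session_update("You are a friendly voice assistant.")
```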

Vision fine-tuning

https://openai.com/index/introducing-vision-to-the-fine-tuning-api/
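Vision fine-tuning reuses the chat-format JSONL training files, with the image supplied as an `image_url` content part in a user message. A sketch of one training example; the URL, labels, and task here are placeholders, and the exact accepted schema is whatever the fine-tuning docs specify:

```python
import json

# One training example for vision fine-tuning, in chat-format JSONL.
# The image goes in as an image_url content part alongside the text.
example = {
    "messages": [
        {"role": "system", "content": "You classify street signs."},
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What sign is this?"},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/sign.jpg"}},
            ],
        },
        {"role": "assistant", "content": "A stop sign."},
    ]
}

# A training file is just one such JSON object per line.
jsonl_line = json.dumps(example)
```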

Prompt Caching

https://openai.com/index/api-prompt-caching/
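Prompt caching is automatic and prefix-based: the discount applies to the portion of a prompt whose start exactly matches a recent request. The practical upshot for request construction is to keep static content (system prompt, few-shot examples, docs) first and the per-request text last. A sketch, with `STATIC_CONTEXT` standing in for your own long, unchanging context:

```python
# The cacheable part must be a byte-for-byte identical prefix across calls,
# so it goes first; anything that varies per request goes last.
STATIC_CONTEXT = (
    "You are a support bot.\n\n"
    "<long product manual here>"  # placeholder for real static context
)

def build_messages(user_question: str) -> list[dict]:
    return [
        {"role": "system", "content": STATIC_CONTEXT},  # cacheable prefix
        {"role": "user", "content": user_question},     # varies per call
    ]

msgs = build_messages("How do I reset my password?")
```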

Model Distillation

https://openai.com/index/api-model-distillation/
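The distillation workflow as announced: capture a large model's outputs as stored completions (via a `store` flag plus optional `metadata` tags), then fine-tune a smaller model on them. Sketched below as request payloads rather than live calls; the parameter names follow the announcement and the file ID is a placeholder:

```python
# Step 1: capture the "teacher" model's completions for later reuse.
capture_request = {
    "model": "gpt-4o",
    "store": True,                         # save this completion
    "metadata": {"task": "distill-demo"},  # tag so it's filterable later
    "messages": [{"role": "user", "content": "Explain prompt caching."}],
}

# Step 2: export the stored completions and use them as training data
# for a smaller "student" model via the fine-tuning API.
finetune_request = {
    "model": "gpt-4o-mini",          # student model to fine-tune
    "training_file": "file-abc123",  # placeholder ID of the exported data
}
```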

Access to the o1 model is expanded to developers on usage tier 3, and rate limits are increased (to the same limits as GPT-4o).

2 Likes

1 Like

Last year there was a live stream; will there not be one this year?

by the way, just as a comparison, last year we got at dev day: “GPT-4 Turbo with 128K context and lower prices, the new Assistants API, GPT-4 Turbo with Vision, DALL·E 3 API, and more.”

3 Likes

No live stream this year, but everything will be recorded and eventually put up on YouTube.

The word is no new models this year. There are four breakout sessions after the keynote, and they've been cagey about what's being discussed in them; I expect they'll be detailed sessions on the products announced during the opening keynote.

2 Likes

Last year we had the stream and many announcements (also GPTs on the ChatGPT side).

2024 speculations:

  • Whisper Turbo (dropped yesterday)
  • Audio input and output?
  • Vision with Videos?
  • Direct image or diagram outputs?
  • TTS: better voices?
  • streaming apis?
  • browsing tool?
  • context caching?
  • new 2024 knowledge upgrade?
  • cheaper prices?
  • o1?
  • Sora?

Please keep us devs posted! Eager to see the live news, today's gonna be an amazing day!

Update: Just read the no-livestream part :confused:

1 Like

I could see better TTS, streaming APIs, and context caching…

  • Whisper turbo has already dropped as you wrote.
  • I think browsing is unlikely. They don’t want to be responsible for developers’ web traffic.
  • I don’t think a new knowledge update is big enough to warrant keeping it under wraps for DevDay.
  • Cheaper prices would be great, but I can’t see it happening absent a new, more efficient model, and they’ve already said no new models, though they could be misleading.
  • o1 and Sora would count as new models, so if we take OpenAI at their word, they’re not coming today.

My guess is something to do with the reasoning models: maybe more control over the internal reasoning mechanism, like a dial for how much compute to spend on reasoning, or adding function calling/tool use, or remedying some of the other “beta limitations.”

3 Likes

Great prediction, focused and attainable new drops. Does it start at 10?

1 Like

T-minus 2 minutes.

6 Likes

Keep us posted, we’re all hitting F5 / Cmd-R.

1 Like

maybe @elmstedt is out of words after seeing what OpenAI’s going to launch

2 Likes

You can also stay up to date here on the OpenAI Devs account on X. They started posting.

https://x.com/OpenAIDevs

2 Likes

I have X open on a separate screen.

2 Likes

TechCrunch just posted about their brief pre-show:

  • Realtime API (Public Beta) for low-latency Voice
  • Vision Fine-Tuning in API for GPT-4o using images and text
  • Prompt Caching
  • Model Distillation for fine-tuning smaller models using larger ones

6 Likes

Progress :heart_eyes:

3 Likes

Interesting. I did not have this on my radar but that’s exciting.

4 Likes

No way!!!

BFBJIWERBHJIKFBEKR Why didn’t I even think of this??? Fine-tuning for vision would be EPIC

Oh wow. This sounds very interesting

5 Likes

7 Likes

So is the realtime API basically advanced voice mode API?

1 Like

:exploding_head:

nah, no way hahahhaha

3 Likes