OpenAI Dev-Day 2023: Dev-Day Discussion!

Once I’m inside, the location is not public; geolocators and such.

1 Like

Probably so they don’t get swarmed by people wanting to get in. That could be a major security hazard.

The $500 to get into a business-card swap meet with one vendor booth, and no other info before you pay up, would have discouraged most. They’d better have a sweet gift bag. I made pandemic drives to the Bay Area to get rare car parts; this was a pass.

1 Like

Question for Sam!

Sam once mentioned that he has a group chat with his friends with a running bet on when the first AI company with just one employee will reach a $1 billion valuation.

What initial date did he guess? And has that date changed?

1 Like

Less than 60 minutes to go! The thing I want most is a larger context window and memory for coding and code generation. How about you?

How do we get notified about future in-person OpenAI events?

  1. Twitter (X). Besides the official account, several OpenAI principals you’ll never see here post their PR thoughts there daily.
  2. Website blog.

Will there be a new version of the Codex model?

GPT-4 is the recommended replacement for Codex if you had become fond of the 2022 model. However, doing more with AI-written code is one of the distinct possibilities you’ll hear about.
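
For anyone migrating, here’s a minimal sketch of swapping an old Codex-style completion call for a GPT-4 chat completion with the Python client; the prompt, system message, and model choice are placeholders, not an official recipe:

```python
# Sketch: moving from a Codex completion call to a GPT-4 chat completion.
# Prompts and model names here are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Old style (Codex, deprecated), for comparison:
# openai.Completion.create(model="code-davinci-002", prompt="def add(")

# New style: ask GPT-4 for code via the chat completions endpoint.
response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You write concise, working Python."},
        {"role": "user", "content": "Write a function that adds two numbers."},
    ],
)
print(response.choices[0].message.content)
```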

1 Like

That sliding, bouncing box gives me flashbacks to the days of staring, waiting for it to perfectly hit the corner. I think it is the generic DVD player screensaver? Can’t even remember.

Will it hit!!?! I am so bored!

P.S. Take what you can

:popcorn: :popcorn: :popcorn: :popcorn:

2 Likes

5 Likes

:popcorn: :beer:

It’s 6:51pm in Germany btw. Prost. :partying_face:

Everybody putting their WiFi in promiscuous mode for Wireshark…

Here we gooooooo

More words

Some extra words more and extra good one try lol

Next more words

This is getting silly

2 Likes

Aaand there you go! :innocent: ($500 API credits for all attendees gifted by an AI)

Does “JSON Mode” limit what tokens the language model can output (during nucleus sampling), similar to the LMQL programming language (lmql.ai)?
Or does it just validate the output? Is it possible that the model generates an answer (e.g. some gibberish) that cannot successfully be converted into JSON?
Or how does it work? Thanks!
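
For reference, this is the request shape I’d expect based on the announced response_format parameter; it only shows how JSON mode is asked for, not how it’s enforced internally (the model name and prompts are assumptions):

```python
# Sketch of requesting the new JSON mode; whether it constrains sampling or
# just validates is exactly the open question above -- this is only the request.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4-1106-preview",              # GPT-4 Turbo preview from DevDay
    response_format={"type": "json_object"},  # the new JSON mode switch
    messages=[
        # The docs say the prompt itself must still mention JSON,
        # otherwise the request is rejected.
        {"role": "system", "content": "Reply only with a JSON object."},
        {"role": "user", "content": 'List three colors as {"colors": [...]}.'},
    ],
)
print(response.choices[0].message.content)  # should parse with json.loads
```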

1 Like

That was a great event! I wish I could have been there! :slight_smile: I haven’t seen gpt-4-turbo in my Playground yet; I think it will take a while…

So did anyone who runs a chat-based bot just get wiped out?

1 Like

Do we know if the 128K context applies to ChatGPT, or only to GPT-4 Turbo through the API?

Also, it was said to ask in the forum, so I’m catching up here: do we need to apply for the “OpenAI-hosted tools API” or the Code Interpreter API beta, or is that ready to beta test out of the box?
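
From what was announced, the hosted tools (Code Interpreter, retrieval) look like part of the new Assistants API beta. Here’s a rough sketch of trying Code Interpreter out of the box with the Python client, in case it helps anyone else catching up; the model name, names, and prompts are assumptions, not confirmed guidance:

```python
# Sketch: trying the hosted Code Interpreter tool via the Assistants API beta.
from openai import OpenAI

client = OpenAI()

# Create an assistant that can use the hosted Code Interpreter tool.
assistant = client.beta.assistants.create(
    name="Math helper",
    instructions="Use code to answer numerical questions.",
    tools=[{"type": "code_interpreter"}],
    model="gpt-4-1106-preview",
)

# Start a thread, add a question, and run the assistant on it.
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="What is the square root of 3 to 6 decimal places?",
)
run = client.beta.threads.runs.create(
    thread_id=thread.id, assistant_id=assistant.id
)
print(run.status)  # poll until "completed", then read the thread's messages
```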