OpenAI Dev-Day 2023: Announcement!

Hello OpenAI Community!

Can’t make it to Dev-Day 2023? We’re bringing the event to your screens for an enriched virtual experience – right here on the forum!

Monday, November 6, 2023 6:00 PM - Keynote Live Stream:
Tune in for Sam Altman’s keynote as he unveils the newest OpenAI innovations with live demos.

Join the Discussion:
Engage in the Dev-Day Discussion during the keynote to ask questions, share your experiences, talk about updates, and connect with fellow members. As an exclusive part of this year’s event, one of our forum members is at Dev-Day in person! While they’ll be immersed in the event, we’ve set up a team of forum regulars, who will help relay your questions to our onsite member, who may respond directly from the venue.

Keynote Community Bingo:
Elevate your viewing! Participate in our Community Bingo for a chance to snag amazing prizes during the keynote. Get all the details and secure your bingo card!

Embrace the Dev-Day vibe, wherever you are. We can’t wait to engage, learn, and celebrate with you. Stay inspired!


Greetings everyone! The keynote will end at 11:30 am. Please post your questions, comments, views, and respectful discussion, and I will try to get as many of your questions answered as possible.

An amazing group of forum members will be on hand to answer any questions you may have. Please note that questions about specific newly released services may take some time to get definitive answers, and it may not be possible to answer every question on the same day, but we will all try to get you the information you need as soon as possible.

It’s going to be an amazing day and I hope you enjoy the show!


Good evening! I’m excited for tomorrow to arrive; I’m sure it will be great. Thank you very much, OpenAI, for making AI a topic of general interest. If it weren’t for you, we wouldn’t have this for another 20 years lol. My guess is that tomorrow they will release a new model, or an improved version of GPT-3.5 or GPT-4, maybe GPT-4V. I love you guys =)


Thanks to all of the forum members who are making it possible for us to relay questions and the fantastic organization of the event here in the forum!

Special thanks to @N2U and @Foxabilo in particular!

I would be really happy if we can learn when OpenAI plans to move GPT-3.5 and GPT-4 from Beta to a somewhat final Release status.


I’d like them to nail down their tiers and their speeds, and who “may” be upgraded after being downgraded, whether prepaid or prior monthly (and the future of those accounts). I’d also like to hear about the lack of any information or communication about these implemented yet unannounced changes, even for those of us who try to support OpenAI’s users on their own forum.

I’ll throw in another:
(T/F): Not paying for ChatGPT Plus = not developing for OpenAI’s benefit?

This might be the most exciting news yet!! I came here and made a Forum account just to congratulate everyone working on OpenAI, thanks! Being a dev is more fun than ever!


I’m getting:
Error code: 404 - {'error': {'message': 'The model gpt-4-vision-preview does not exist or you do not have access to it. Learn more: How can I access GPT-4? | OpenAI Help Center', 'type': 'invalid_request_error', 'param': None, 'code': 'model_not_found'}}

I am a paying API user. How do I access the gpt-4-vision-preview model?
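In case it helps anyone else once access lands: here’s a minimal sketch of how I understand the new vision request shape, based on the launch-day docs. The `vision_message` helper name and the image URL are my own placeholders, and the `image_url` dict form is an assumption.

```python
def vision_message(prompt: str, image_url: str) -> list:
    """Build a chat message list pairing text with an image reference,
    in the content-parts shape announced for gpt-4-vision-preview."""
    return [{
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }]

# Usage against a live account (needs OPENAI_API_KEY and model access):
# from openai import OpenAI
# client = OpenAI()
# r = client.chat.completions.create(
#     model="gpt-4-vision-preview",
#     messages=vision_message("What is in this image?",
#                             "https://example.com/photo.jpg"),
#     max_tokens=300,
# )
# print(r.choices[0].message.content)
```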


Anyone else love the new look?


Incredible keynote, I’m very excited to try out the new features

I haven’t gotten them yet. I’m sure it will happen soon but sounds awesome @grandell1234 :slight_smile:


First thing: your account will need to be granted access to the 1106 models.

The fastest way to check, besides the models endpoint, is the model list on your rate limits page (link so you don’t have to go looking in the upside-down account page).
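If you’d rather check from code than the dashboard, something like this against the models endpoint should work. The `granted_1106_models` helper is my own, not part of the SDK:

```python
def granted_1106_models(model_ids):
    """Filter an account's model ids down to the 1106-series models,
    e.g. gpt-4-1106-preview and gpt-3.5-turbo-1106."""
    return sorted(m for m in model_ids if "1106" in m)

# Usage against a live account (needs OPENAI_API_KEY):
# from openai import OpenAI
# client = OpenAI()
# print(granted_1106_models(m.id for m in client.models.list()))
```

An empty list here means your account hasn’t been granted the new models yet.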


Amazing news from the keynote! Can’t wait to work with it… currently finding my way around to better understand what I will need.
Thanks to all who made this happen!


I got goosebumps throughout, crazy how ahead of everyone OpenAI is, congrats guys :clap:

I’m trying to wrap my head around everything; the TTS and vision pricing seems incredible.

On things like vision in assistants, do we need to create bespoke functions for it? Or is this expected to get added eventually?

:headstone: RIP to anyone building anything that they announced today.


I got an error when uploading files for any purpose other than fine-tuning:

from openai import OpenAI
client = OpenAI(api_key="")

file = client.files.create(
    file=open("filese.pdf", "rb"),
    purpose="assistants",
)

Error code: 400 - {'error': {'message': "'assistants' is not one of ['fine-tune'] - 'purpose'", 'type': 'invalid_request_error', 'param': None, 'code': None}}

Absolutely blown away by the Keynote!

New models, cheaper pricing, low effort RAG enabled, TTS, Vision!!! :scream_cat:


I’m on usage tier 4, yet I’m unable to access anything announced today that was supposedly already available. :thinking:

Like, not even the assistant playground works for me OpenAI Platform


Rollout starts 1 pm Pacific


Or if they don’t actually “roll out” to let everyone get this out of their system - 502 errors start 1:30pm… :grin:


As long as it’s not a 'code': 'model_not_found'

Is RAG going to be available in ChatGPT, or just via the API? I don’t see it discussed in the API platform pages/documentation. Is it also going to be made available at 1 PM PST today?
