Has anyone gotten access to GPT-4's new vision functionalities?

A few days ago, OpenAI announced that the GPT-4 model will soon (in the first days of October) gain new functionalities like multimodal input and multimodal output.

I haven’t seen any waiting list for these features. Does anyone here already have access?
I have the Plus subscription, and I know that’s a necessary condition.

I promise you, the second someone has access you’ll see examples posted everywhere, you won’t need to ask.


I’m seeing a lot of it on the web now. I think it’s already out for some users.

It’s rolling out now to some users.


I’m a Plus user, and I haven’t gotten access to Voice / Image yet. Also, the training data cutoff is still showing as 2021, which is now supposed to be more current. Anyone have any updates?

I don’t have access either, and I am a Plus user too.

I’m a Plus user, and I have access to Voice and DALL-E 3, but not Vision.


Same here. I thought DALL-E 3 and Vision were going to be released at the same time.

I have it in the Android app, but not in the browser. And the functionality does not carry over to the web for chats initiated on my phone. :frowning:

It seems like social media influencers with a certain percentage of followers got access first.