I’m wondering if (and when) we’re going to see API access for the new GPT-4 Vision model? I would really like to incorporate it within my application.
Here’s the closing of the announcement:
We will be expanding access
Plus and Enterprise users will get to experience voice and images in the next two weeks. We’re excited to roll out these capabilities to other groups of users, including developers, soon after.
UPDATE:
GPT-4-Vision is now available in preview to all OpenAI customers with GPT-4 access.
Do the additional capabilities imply API access if we are already Plus subscribers?
“including developers, soon after” implies that developers who pay for API services by usage will get access.
API access has never depended on having a ChatGPT Plus subscription, and usage for popular apps can total much more than a subscription.
Is there anything new on this? Really excited to use images for enterprise use cases in real estate. Please let me know if I missed an announcement on how to use vision with the API.
The API is available now: GPT-V API | OpenAI Help Center. I have not tried it yet and it isn’t in the playground. Basically you just select the gpt-4-vision-preview
model and provide it with a JSON structure containing image URLs, as explained here: Vision - OpenAI API.
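For anyone looking for a concrete starting point, here is a minimal sketch of the request body that approach implies. The image URL and prompt text are placeholders, and the exact content-part shape should be checked against the Vision guide linked above:

```python
# Sketch of a Chat Completions request body for gpt-4-vision-preview.
# The URL below is a placeholder; swap in your own image.
import json

payload = {
    "model": "gpt-4-vision-preview",
    "messages": [
        {
            "role": "user",
            # Content is a list of parts: text plus one or more image URLs.
            "content": [
                {"type": "text", "text": "What is in this image?"},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/photo.jpg"},
                },
            ],
        }
    ],
    "max_tokens": 300,
}

# POST this JSON to https://api.openai.com/v1/chat/completions
# with an "Authorization: Bearer <OPENAI_API_KEY>" header.
print(json.dumps(payload, indent=2))
```

You can also pass the same `messages` structure to the official `openai` Python client instead of building the HTTP request yourself.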