Since ChatGPT-4 for premium users supports uploading images and asking questions about them, I'm assuming this should be supported by the API as well?
I tried to find any reference in the API docs to uploading an image and asking about it, and I also tried it in the Playground, but it doesn't work.
From the OpenAI GPT-4 with Vision guide:

"GPT-4 with Vision, sometimes referred to as GPT-4V or gpt-4-vision-preview in the API, allows the model to take in images and answer questions about them. Historically, language model systems have been limited by taking in a single input modality, text. For many use cases, this constrained the areas where models like GPT-4 could be used.

GPT-4 with vision is currently available to all developers who have access to GPT-4 via the gpt-4-vision-preview model and the Chat Completions API, which has been updated to support image inputs. Note that the Assistants API does not currently support image inputs."
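For what it's worth, image input does work through the Chat Completions endpoint today. Here is a minimal sketch of what the request body looks like: the model name follows the guide above, while the question text and image URL are placeholders. Sending it requires the openai Python package (v1+) and an API key, e.g. client.chat.completions.create(**payload).

```python
def build_vision_request(question: str, image_url: str) -> dict:
    """Build a Chat Completions request body mixing text and image parts.

    The user message's `content` is a list of parts: a `text` part for
    the question and an `image_url` part pointing at the image.
    """
    return {
        "model": "gpt-4-vision-preview",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": question},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
        "max_tokens": 300,
    }

# Placeholder question and image URL for illustration only.
payload = build_vision_request(
    "What is in this image?",
    "https://example.com/photo.jpg",
)
```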
Looking at the model list in the Assistants tab, gpt-4-vision-preview doesn't appear to be present.
Note that the Assistants API does not currently support image inputs.
Do we know if image input for the Assistants API is planned or being worked on? Is there an ETA?