The description of the vision model previously read "Ability to understand images, in addition to all other GPT-4 Turbo capabilities," but it has since changed to "Currently, GPT-4 with vision does not support the message.name parameter, functions/tools, response_format parameter." When will it support these features, which were originally claimed to be included? This is very important for developers.
When will gpt-4-vision-preview support the message.name parameter, functions/tools, and the response_format parameter?
I’d find this extremely useful as well.
I’m interested in using it to extract structured data from images. I’d like to do this kind of thing, but for images: Using OpenAI functions and their Python library for data extraction | Simon Willison’s TILs
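To make the use case concrete, here is a minimal sketch of the request we would like to be able to send once the vision model accepts the tools parameter. The message content parts and tools schema follow the documented Chat Completions shapes, but the function name (extract_receipt) and its schema are hypothetical, purely for illustration:

```python
# Sketch of a Chat Completions payload combining an image input with a
# function-calling tool definition. Today gpt-4-vision-preview rejects
# the "tools" key, which is exactly what this thread is asking about.

def build_vision_tool_request(image_url: str) -> dict:
    """Build a chat.completions payload that pairs an image with a
    (hypothetical) extract_receipt tool for structured extraction."""
    return {
        "model": "gpt-4-vision-preview",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text",
                     "text": "Extract the line items from this receipt."},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "extract_receipt",  # hypothetical function
                    "description": "Record structured receipt data.",
                    "parameters": {
                        "type": "object",
                        "properties": {
                            "items": {
                                "type": "array",
                                "items": {
                                    "type": "object",
                                    "properties": {
                                        "name": {"type": "string"},
                                        "price": {"type": "number"},
                                    },
                                    "required": ["name", "price"],
                                },
                            }
                        },
                        "required": ["items"],
                    },
                },
            }
        ],
        # Force the model to answer via the tool, so the reply is JSON
        # arguments rather than free text.
        "tool_choice": {"type": "function",
                        "function": {"name": "extract_receipt"}},
    }

request = build_vision_tool_request("https://example.com/receipt.png")
print(request["tools"][0]["function"]["name"])
```

With the official openai Python client, sending it would just be `client.chat.completions.create(**request)`; this works today with gpt-4-1106-preview on text, so the hope is that the same payload will eventually be accepted by the vision model.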
Any chance function call support will be added in the future?