The update is to the documentation:
https://platform.openai.com/docs/guides/vision
Currently, GPT-4 Turbo with vision does not support the `message.name` parameter, functions/tools, or the `response_format` parameter, and it currently sets a low `max_tokens` default, which you can override.
Why do they set a low default value that could truncate the typical response? Life’s little mysteries.
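Since the low default can be overridden, a minimal sketch of passing an explicit `max_tokens` on a vision request, assuming the Chat Completions payload shape (the model name and image URL here are placeholders, not from the post):

```python
def build_vision_request(prompt: str, image_url: str, max_tokens: int = 1024) -> dict:
    """Build a Chat Completions payload with an explicit max_tokens."""
    return {
        "model": "gpt-4-vision-preview",  # placeholder model name
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
        # Set max_tokens explicitly so the low server-side default
        # does not truncate the response.
        "max_tokens": max_tokens,
    }

payload = build_vision_request("Describe this image.", "https://example.com/photo.png")
print(payload["max_tokens"])  # → 1024
```

The same keyword argument works when calling the API through an SDK; the point is simply that omitting it leaves you with the low default.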