There has been mention of OpenAI adding support for image files saved in vector stores so the file_search tool in the responses API can search images. Does anyone know of an availability date?
I have heard no “mention”.
Here’s a survey of what is available:
- OpenAI has no embeddings model that accepts images.
- There has been no discussion of upgrading the existing embeddings models, which are already outclassed by competitors.
- OpenAI does not offer embeddings models with developer-configurable query prompts (instructions that transform an input query toward the domain being searched).
- Images cannot be returned by tools in function calling
Thus, not only is there no vector store that accepts image input; there is no underlying technology to power one, whether for image similarity, image classification, or search, nor any mechanism for an AI model to make use of retrieved images in a "Responses" chat.
A vector store is document-based, built on chunked text, so even if you had AI-described metadata and image entity extraction, the most the AI model could then do is talk about that text; it could not return or reference the image itself.
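The caption-as-metadata workaround described above can be sketched in a few lines: store AI-generated captions as searchable text chunks whose metadata points back at the source image file, so the application (not the model) resolves a retrieval hit into an actual image. Everything here is illustrative; this is not any OpenAI vector store API.

```python
# Sketch: caption-as-chunk retrieval that maps back to image files.
# All names and structures are illustrative, not an OpenAI API.
from dataclasses import dataclass

@dataclass
class Chunk:
    text: str        # AI-generated caption: the only searchable part
    image_path: str  # metadata: where the real image lives

INDEX = [
    Chunk("a tabby cat asleep on a red sofa", "img/cat_001.jpg"),
    Chunk("a golden retriever catching a frisbee", "img/dog_007.jpg"),
    Chunk("city skyline at night with fireworks", "img/sky_042.jpg"),
]

def search_captions(query: str) -> list[str]:
    """Naive keyword-overlap scoring; a real system would embed the text."""
    terms = set(query.lower().split())
    scored = [(len(terms & set(c.text.lower().split())), c) for c in INDEX]
    return [c.image_path
            for score, c in sorted(scored, key=lambda s: -s[0])
            if score > 0]

# The model only ever sees caption text; the application uses the
# metadata to deliver the underlying image file to the user.
print(search_captions("sleepy cat on sofa"))  # -> ['img/cat_001.jpg']
```

The key point stands: the retrieval layer traffics purely in text, and the image only reappears because the application keeps the mapping itself.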
OpenAI has also damaged entity extraction and feature identification by injecting roughly 250 tokens of "safety prohibition" prompting as the first system message in any AI chat model with image input, which degrades all such applications. Thus, you can't even use OpenAI to create text metadata to find your images of Nicki Minaj, because they lied to the AI so that it lies to you about its ability to see people.
Answer: Start here to run an OpenAI-beating open-weight embeddings model locally in 4 GB of RAM that can encode text and image into one vector:
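Whatever local model you pick, the search mechanics of a shared text+image vector space are the same: encode both modalities into one space and rank by cosine similarity. A minimal numpy sketch, where `embed_text` and the precomputed image vectors stand in for a hypothetical multimodal encoder (the mock vectors are made up for illustration):

```python
# Sketch of search in a single text+image embedding space.
# `embed_text` and IMAGE_VECTORS stand in for whatever local
# multimodal encoder you run; the vectors here are mock data.
import numpy as np

def embed_text(text: str) -> np.ndarray:
    # Hypothetical: a real CLIP-style encoder maps text to e.g. 512 dims.
    mock = {"a photo of a cat": [0.9, 0.1, 0.0],
            "a photo of a car": [0.1, 0.9, 0.0]}
    return np.array(mock[text])

IMAGE_VECTORS = {  # pretend these came from encoding the pixels
    "cat.jpg": np.array([0.8, 0.2, 0.1]),
    "car.jpg": np.array([0.2, 0.8, 0.1]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def search(query: str) -> str:
    q = embed_text(query)
    return max(IMAGE_VECTORS, key=lambda k: cosine(q, IMAGE_VECTORS[k]))

print(search("a photo of a cat"))  # -> cat.jpg
```

Because text and images share one space, the same index answers text-to-image, image-to-image, and zero-shot classification queries.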
Think of the actual application, though: how would you deliver images if OpenAI language models only accept them in "user" role messages, and what kind of retrieval input would you actually use for semantic search?
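Concretely, if retrieval hands your application an image URL, the only place to give it to the model is a user-role message. A sketch that builds (without sending) such a message; the `input_text`/`input_image` content-part shapes follow my reading of the Responses API and should be checked against the current API reference:

```python
# Build (without sending) a Responses-API-style user message that
# delivers a retrieved image back to the model. Content-part shapes
# are my understanding of the API; verify against current docs.

def user_message_with_image(question: str, image_url: str) -> dict:
    return {
        "role": "user",  # image parts are only accepted in user messages
        "content": [
            {"type": "input_text", "text": question},
            {"type": "input_image", "image_url": image_url},
        ],
    }

msg = user_message_with_image(
    "What is shown in this retrieved image?",
    "https://example.com/img/cat_001.jpg",
)
print(msg["content"][1]["type"])  # -> input_image
```

So your application has to orchestrate the loop itself: retrieve, resolve to a URL or base64 payload, and inject it as a fresh user turn; no tool can hand the image to the model directly.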
They will launch it soon; they have already built it and are testing it right now.
When making statements like that, it helps to share a source.