Does any API model have web search? Is it planned?
For example, I’m writing to a bot to find it on the web, but it can’t do that right now.
No, API models and endpoints do not have built-in web search, nor is such a service offered.
The only way an API model can reach the internet is through a function call or context injection that you implement yourself.
Google has “grounding” on some models, which employs Google Search. However, to use it you must follow their display terms, which effectively turn your chatbot into an ad for Google in every response.
Hi @rus1243mor! Knowing OpenAI, it will probably come out eventually. In the meantime, the only provider I know of that delivers this well is Perplexity via their Sonar models (I have used them myself). These days they also provide full citations (those were in beta when I used it).
We’ve released web search in the API this week! ICYMI: New tools for building agents: Responses API, web search, file search, computer use, and Agents SDK
This is great. I’m wondering if these new models will support image attachments as well? I’m getting the following error; it seems the limit is currently set to 0.
Request too large for gpt-4o-mini-search-preview in organization org-xyz on input-images per min: Limit 0, Requested 1. Visit https://platform.openai.com/account/rate-limits to learn more.
Is that still the case? I was building an app using the /v1/responses endpoint and it appeared to be accessing real-time data for a while, then recently stopped. I received no notice from OpenAI for a service that I’m paying for…
You must include the internal web search tool in the tools parameter of your API call. Accessing real-time data doesn’t happen auto-magically.
https://platform.openai.com/docs/guides/tools-web-search?api-mode=responses
Quality improves if you also set the tool’s user-location fields.
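A minimal sketch of such a request body, assuming the `web_search_preview` tool type and the approximate user-location fields from the docs linked above (field names may differ once the tool is out of preview on your account):

```python
# Request body for POST https://api.openai.com/v1/responses, sketched as a
# plain dict; with the Python SDK this would be client.responses.create(**body).
body = {
    "model": "gpt-4o",
    "input": "What happened in the news today?",
    "tools": [
        {
            "type": "web_search_preview",   # enables the internal web search tool
            "user_location": {              # optional, improves result relevance
                "type": "approximate",      # city-level rather than exact position
                "country": "GB",
                "city": "London",
            },
        }
    ],
}
```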
Then the AI model must actually be good enough to invoke the web search tool with a query based on the type of user input. As a developer, you’ll want to supplement the internal tool messaging (which can’t be changed), for example by giving a system message that states today’s date and stresses sending the query to the web search tool first whenever there could be new information.
The output then reads more like “search results” than “informed AI”, because the tool adds a rewriting instruction of its own that can take over your application.
You pay for every use of the web search tool by the AI model, and your usage billing will show each invocation.
An example system message foundation, building on the knowledge cutoff date that OpenAI injects immediately before it:
A user question that has some quality of recentness:
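Putting the two together, a sketch of what that system message and a recency-flavored question might look like (the dates and wording here are illustrative, not the original poster’s exact example):

```python
from datetime import date

# System message foundation: OpenAI injects the knowledge-cutoff line just
# before this, so we add today's date and a nudge to search first.
system_message = (
    f"Current date: {date.today().isoformat()}. "
    "Your training data has a cutoff date, so for any question that could "
    "involve newer information, send a query to the web search tool first "
    "and answer from its results."
)

# A user question with some quality of recentness: it cannot be answered
# from stale training data alone.
user_question = "Who won the Formula 1 race last weekend?"

messages = [
    {"role": "system", "content": system_message},
    {"role": "user", "content": user_question},
]
```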
We can integrate web search into any model, but we had to roll our own solution to make it work. Basically, we created our own “middle layer” serving as an alternative to MCP, which lets the LLM respond with “function invocations”, which we then execute, and then transmit the result back to OpenAI.
Everything we did is open source, and I’d love to give you a link to it, but it would probably be removed as “promotion”. However, you can search for “Magic Cloud” or “Hyperlambda” on GitHub and probably find it …
… if this post gets to stay ofc …