Found this posted on X by a developer and I'm wondering if it's accurate. I just checked, and there's no public changelog, blog post, or help center article confirming it.
Thanks to anybody who can verify. Cheers!
It does align with the "Introducing OpenAI o3 and o4-mini" announcement; here's an archive copy from April 16 to ensure no backsies:
Both o3 and o4-mini are also available to developers today via the Chat Completions API and Responses API (some developers will need to verify their organizations(opens in a new window) to access these models). The Responses API supports reasoning summaries, the ability to preserve reasoning tokens around function calls for better performance, and will soon support built-in tools like web search, file search, and code interpreter within the model’s reasoning.
"Will soon support" is consistent with that stated intention (i.e., no current support), and with the Playground not presenting built-in tools for these models. You can also make the API calls yourself to check whether the messaging about service continuing for five more days only applies to those who were notified because they used tools on Responses (like deprecation notices that go out only to recent users).
Your own functions can include a shim to the vector store search endpoint, along with your own web provider (or, more cheaply, results returned by a Chat Completions web search model).
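To sketch what such a shim could look like: the tool schema below follows the Responses API function-tool shape, while the `web_search` body is a placeholder you would replace with your own provider (or a call to the vector store search endpoint for the file-search case). The handler names are my own, not anything from OpenAI:

import json

# Declare web search as a plain function tool, so a reasoning model
# calls it like any user-defined function instead of a hosted tool.
WEB_SEARCH_TOOL = {
    "type": "function",
    "name": "web_search",
    "description": "Search the web and return result snippets.",
    "parameters": {
        "type": "object",
        "properties": {"query": {"type": "string"}},
        "required": ["query"],
    },
}

def web_search(query: str) -> str:
    """Placeholder: swap in your own search provider, or a cheaper
    Chat Completions web search model, and return text for the model."""
    return json.dumps({"query": query, "results": ["..."]})

# Dispatch table: route the model's function_call output items to shims.
HANDLERS = {"web_search": web_search}

def run_tool_call(name: str, arguments_json: str) -> str:
    """Decode the model's arguments and run the matching local handler."""
    args = json.loads(arguments_json)
    return HANDLERS[name](**args)

You then pass `WEB_SEARCH_TOOL` in `tools=[...]`, watch the response output for `function_call` items, feed each through `run_tool_call`, and return the result as a `function_call_output` input item on the next request.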
Does anyone know if there's been any movement on this? The built-in web search is killer; it would be great to get it in the Playground/API.
While file search is available for the latest reasoning models, web search is not.
We can ask the API when it will let us through:
from openai import OpenAI

client = OpenAI()

response = client.responses.create(
    model="gpt-4o-mini",
    instructions=(
        "You are a research assistant. Today is Mon June 2, 2025. "
        "Answering after knowledge cutoff date REQUIRES good web search results."
    ),
    input=[
        {
            "role": "user",
            "content": [
                {
                    "type": "input_text",
                    "text": "New cool stuff announced by OpenAI in the last month?",
                }
            ],
        }
    ],
    tools=[
        {
            "type": "web_search_preview_2025_03_11",
            "user_location": {"type": "approximate", "country": "US"},
            "search_context_size": "medium",
        }
    ],
    max_output_tokens=6789,
    store=True,
)

# print each output item as a raw dict
for output in response.output:
    print(output.model_dump(), "\n-----")

# clean up the stored response afterwards
delete_response = client.responses.delete(response.id)
'message': "Hosted tool 'web_search_preview' is not supported with o4-mini."
'message': "Hosted tool 'web_search_preview' is not supported with o3."
'message': "Hosted tool 'web_search_preview' is not supported with o3-mini."
'message': "Hosted tool 'web_search_preview' is not supported with o1."
OpenAI is likely going to position web search calls internal to the reasoning. However, doing so would be competitive with their "Deep Research".
Hey, this week the o4-mini model started to allow web_search, but I'm having a very strange issue: when using web search together with file search (with a .pptx file), the answer contains several typos and missing content (like numbered points that jump from 1 to 3 without 2), strange words like "Submissog", and cut-off words.
Has anyone experienced this?
Note that when I deactivate web search, the answers are completely fine.
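For now I just build the tools list so the two are never combined. A minimal sketch, assuming the documented `file_search` and `web_search_preview` tool shapes, with `vector_store_id` being whatever store your pptx was uploaded to:

from typing import Optional

def build_tools(use_web: bool, vector_store_id: Optional[str] = None):
    """Build a Responses API tools list, skipping web search whenever
    file search is active (combining them garbles my answers)."""
    tools = []
    if vector_store_id:
        # file_search needs the vector store id(s) it should query
        tools.append({"type": "file_search", "vector_store_ids": [vector_store_id]})
    if use_web and not vector_store_id:
        tools.append({"type": "web_search_preview"})
    return tools

Then I pass `tools=build_tools(True, my_store_id)` into `client.responses.create(...)`, so any request that queries the deck automatically drops web search.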