I am attempting to leverage ChatGPT in an app that finds/generates working URLs. Every LLM I've tried does poorly and hallucinates when asked to output working URLs, but I found that ChatGPT can do it reliably through the web interface.
However, when I pass the same prompt through the JS API, the results are very different: most of the links and thumbnails are broken. The call also resolves in roughly 7 seconds instead of the minute-plus the web UI takes with the same model (o4-mini), so I can tell it is doing something quite different.
This happens even if I tell it directly to embed the results as shopping links, to use web search to confirm the URLs are real, etc., e.g.:
Give me 5 shopping links with embedded thumbnails for alternatives to Nike Air max shoes. The results should be in markdown format with the links to purchase each shoe embedded in the markdown. These links should be cross-referenced with web_search to confirm that they are real and not broken.
import OpenAI from "openai";
const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment
const prompt = "Give me 5 shopping links with embedded thumbnails for alternatives to Nike Air max shoes. The results should be in markdown format with the links to purchase each shoe embedded in the markdown. These links should be cross-referenced with web_search to confirm that they are real and not broken."; // built dynamically in the real app

const response = await openai.responses.create({
  model: "gpt-4o",
  input: prompt,
  tools: [{ type: "web_search_preview" }],
});
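For completeness, I can also force the tool and swap in the model the web UI reports using (reusing openai and prompt from above). The tool_choice syntax here is my reading of the Responses API docs, and whether o4-mini actually accepts the web search tool over the API is exactly the part I'm unsure about (see the bonus question at the end):

const forced = await openai.responses.create({
  model: "o4-mini", // the model the web UI says it is using
  input: prompt,
  tools: [{ type: "web_search_preview" }],
  tool_choice: { type: "web_search_preview" }, // force the tool rather than letting the model decide
});
console.log(forced.output_text);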
The resulting URLs and thumbnails are broken more than 50% of the time.
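(By "broken" I mean failing a basic reachability check roughly like the one below; this is a simplified sketch of what my app does, assuming Node 18+ so fetch is global.)

// Marks a URL as broken if it doesn't resolve to a 2xx response.
async function isBroken(url) {
  try {
    const res = await fetch(url, { method: "HEAD", redirect: "follow" });
    if (res.ok) return false;
    // Some shops reject HEAD, so retry with GET before declaring the link dead.
    const getRes = await fetch(url, { method: "GET", redirect: "follow" });
    return !getRes.ok;
  } catch {
    return true; // DNS failure, TLS error, etc.
  }
}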
If I ask ChatGPT what is going on, it tells me things like "use the Responses API" or "use web search", which I am already doing.
Any ideas? Thank you!
Bonus: Why does the Playground tell me that none of the reasoning models (like o4-mini) can use web search, when both the API and the web UI seem to allow it?