In my assistant's instructions I provided several links, and it does return them when replying. But:
In the Playground I get the following warning. It does return the correct links while I'm testing, but is there a chance (and how big is it) that it returns a wrong or broken link?
Is there a way to get structured output from the assistant, for instance JSON, so that I can be sure the generated data comes exactly from my instructions? Say, something like the following:
{
  "link": "Login | HSTS Redirection Community"
}
The back end of "assistants" that returns links is called annotations. Within the AI-generated text you get an abbreviated placeholder, and in the API response you get annotation objects holding the actual text; you have to re-associate the two yourself. (Apparently this is just another way OpenAI thinks developers can't be trusted with the real text their AI generates.) This mechanism is used for "hard data": citations from your documents or files produced by code interpreter.
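A minimal sketch of that re-association with the Python SDK (the thread ID is a placeholder; the two annotation types shown, file_citation and file_path, are the documented ones):

```python
from openai import OpenAI

client = OpenAI()

# "thread_abc123" is a placeholder; substitute your own thread ID
messages = client.beta.threads.messages.list(thread_id="thread_abc123")

for message in messages.data:
    for block in message.content:
        if block.type != "text":
            continue
        text = block.text.value
        # Each annotation covers a placeholder span (e.g. 【4:0†source】) in the text
        for i, ann in enumerate(block.text.annotations):
            if ann.type == "file_citation":
                # Citation into one of your uploaded documents
                cited_file = client.files.retrieve(ann.file_citation.file_id)
                text = text.replace(ann.text, f" [{i}: {cited_file.filename}]")
            elif ann.type == "file_path":
                # File produced by code interpreter; the file_id is downloadable
                text = text.replace(ann.text, f" [{i}: file {ann.file_path.file_id}]")
        print(text)
```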
Since assistants don't have internet access (unless you provide your own web-search tool), a plain URL generated within the text may not be an annotation grounded in your documents: it may come from old training knowledge, or it may be a fabrication that only looks like a plausible string-of-text for "what goes there". A hallucination. One guard against that is shown in the sketch below.
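Since you can't make the model guarantee its links are real, the practical check lives on your side of the API: extract any URLs from the reply and compare them against the exact list you put in the instructions. A hedged sketch (the allowlist URLs here are made-up placeholders, not from your post):

```python
import re

# Hypothetical allowlist: the exact URLs you supplied in the assistant instructions
ALLOWED_LINKS = {
    "https://community.example.com/login",
    "https://community.example.com/hsts-redirection",
}

URL_PATTERN = re.compile(r"https?://[^\s\"'<>)\]]+")

def check_reply_links(reply_text: str) -> list[tuple[str, bool]]:
    """Return every URL found in the reply, flagged True only if it is allowlisted."""
    allowed = {u.rstrip("/") for u in ALLOWED_LINKS}
    return [
        (url, url.rstrip("/") in allowed)
        for url in URL_PATTERN.findall(reply_text)
    ]

# Example: a hallucinated link fails the check
reply = "Log in at https://community.example.com/login or https://made-up.example.org/help"
for url, ok in check_reply_links(reply):
    print(("OK  " if ok else "BAD ") + url)
```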
The warning in the Playground is just boilerplate, similar to "ChatGPT can make mistakes". They must have (correctly) figured that their playground is amateur hour and its users are AI novices.