Any tips on how to set up an action that queries an API so that the GPT will not then hallucinate responses? I’ve run into some odd responses with this.
I have a GPT that is set up to connect to a REST API and it returns concert data.
For a few weeks, it was only returning actual data it got from the API when I used the GPT.
Recently, the API changed slightly. Since then, the API calls from the GPT still happen, but the chat then returns hallucinated info (things that simply aren’t in the API data set).
Any tips/suggestions on things to try? I’m going to try to change my instructions a bit, but I don’t think there’s anything that suggests making up data.
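For reference, this is roughly the kind of grounding rule I’m planning to add to the instructions (my own wording, untested, so treat it as a sketch rather than something known to work):

```
When answering questions about concerts, use ONLY the data returned by the
API action in this conversation. If the API call fails or returns no
matching results, say so explicitly. Never invent concert names, dates,
venues, or any other values that are not present in the API response.
```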
Hard to say, since I can’t see what the actual request is, but one page is 20 JSON objects with around 30 properties each.
Oddly, when I re-ask the same question now, I get varying results: sometimes normal successes, sometimes “Error talking to”, and sometimes hallucinations. I didn’t change the config at all, and when I hit the API from Postman, I get normal results every time. Performance is extremely intermittent.
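To rule out the API side, I’ve also been validating pages of results outside the GPT with a small check like this. The property names here are placeholders, not my real schema:

```python
# Sanity-check that each object in a page of API results has the
# properties the GPT is expected to ground its answers in.
# REQUIRED_KEYS is a placeholder set, not my actual schema.
REQUIRED_KEYS = {"id", "artist", "venue", "date"}

def validate_page(page):
    """Return a list of (index, missing_keys) for malformed objects."""
    problems = []
    for i, obj in enumerate(page):
        missing = REQUIRED_KEYS - obj.keys()
        if missing:
            problems.append((i, sorted(missing)))
    return problems

# Example with one well-formed and one malformed object:
sample_page = [
    {"id": 1, "artist": "Band A", "venue": "Hall", "date": "2024-01-01"},
    {"id": 2, "artist": "Band B", "date": "2024-02-02"},  # missing "venue"
]
print(validate_page(sample_page))  # [(1, ['venue'])]
```

Running this against the raw responses at least tells me whether the malformed data is coming from the API or being introduced on the GPT side.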
A few properties were added to the response, and a few properties were added to the available query parameters in the spec. I can’t imagine why those changes would impact hallucination of data.
Additions to the response should not affect it, but changes to the inputs can. Could you go back to the previous set of query parameters? (You can leave the new fields in the response, as they should not matter.)
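Concretely, in the action’s OpenAPI schema that would mean trimming the `parameters` list back to what it was before the change, something like this (the parameter names here are made up for illustration):

```yaml
paths:
  /concerts:
    get:
      operationId: listConcerts
      parameters:
        # keep only the pre-change parameters
        - name: city
          in: query
          schema:
            type: string
        - name: date
          in: query
          schema:
            type: string
        # newly added parameters removed while testing, e.g.:
        # - name: genre
        #   in: query
```

If the hallucinations stop after reverting, that points at the new query parameters confusing the model’s request building.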
As for @bloodlinealphaDev’s comment: do you see the correct raw response in the Debug view?