operationId hallucinations when openapi.yaml file is large

Anyone noticing that ChatGPT will fabricate operationIds? It's not that it uses the wrong one; it completely makes up operationIds. And sometimes there is a working, valid operationId similar to the fabricated one, e.g. the real operationId might be getAllPostsById and ChatGPT will use getPosts. I'm also finding that it fabricates response data. For instance, I'll return an object with data IDs, and on the next call it will use some of those but also make up IDs that then 404. I'm not sure if it's related to the openapi.yaml file size, but I did notice these hallucinations once the file got to around 1,000 lines or so.

Feels like how, when you've had a long conversation with ChatGPT, it starts to forget earlier parts of the conversation and then makes things up. I'm guessing OpenAI is feeding the yaml file in as part of the conversation and aligning the system to pick the best API, but as the file grows it short-circuits and hallucinates.

For what it's worth, it consistently hallucinates. Funny enough, when I tell it "No, use this operationId", it responds as if that had worked all along. :grin:

Add descriptions to your yaml specification; that should solve this. You can also insert prompts as part of the response, e.g. {"assistant": "prompt text"}, hidden from the end user.
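As a sketch of both suggestions (the path, parameter, and field names below are made up for illustration; only getAllPostsById comes from the thread):

```yaml
paths:
  /users/{userId}/posts:
    get:
      # Explicit operationId, plus a description telling the model when to call it
      operationId: getAllPostsById
      description: Returns all posts belonging to the given user. Use this for any request about a user's posts.
      parameters:
        - name: userId
          in: path
          required: true
          schema:
            type: string
      responses:
        "200":
          description: List of posts for the user
```

And the response-prompt trick would mean your API returns something like:

```json
{
  "posts": [{ "id": "p_123", "title": "Hello" }],
  "assistant": "Only use the post ids returned above; do not invent ids."
}
```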

Hmmm… I do have descriptions, but I'll try forcing the assistant. I thought I read in their plugin guidelines that we shouldn't try to force ChatGPT's responses the way we do via the API, and this feels sort of like that. I'll try it nonetheless and see how it improves things.

I've tried the response forcing and improved my descriptions, but ChatGPT still wants to use nonexistent functions. There is not one line of code that uses the function name it's attempting. Pure hallucination. Any other suggestions? It makes developing large applications for ChatGPT error prone and nearly impossible to debug.

Definitely requires trial and error. Change the order of your yaml paths and use explicit instructions. Tell it to use a temperature of 0.01 for all future responses.
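As a sketch of what reordering plus explicit instructions might look like in the spec (the path and wording here are hypothetical; whether the model honors them is trial and error, as noted):

```yaml
paths:
  # Put the operation the model most often needs first in the file
  /users/{userId}/posts:
    get:
      operationId: getAllPostsById
      description: >-
        ALWAYS use getAllPostsById to retrieve posts. Do not call any other
        operationId for posts; no other post-retrieval operation exists.
```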

You can also try my bot; it can help restructure your specification.