Is there any way to prevent hallucination when using a plugin?

When a plugin is used by ChatGPT, the model takes the data from the plugin's response and combines it with its own knowledge base to create the final response.

This can result in unwanted hallucinations.

For example: you could return information from a PDF in the plugin response, but ChatGPT then combines it with data from other sources, and the entire response becomes untrustworthy.

I'm wondering: can hallucinations be controlled by modifying the description_for_model ("request this plugin when …")?

Technically, this is not a hallucination. You need to be specific in your prompt (or description_for_model) that the model must only use the data returned by your specific endpoint.
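For illustration, here is a sketch of what such an instruction could look like in the plugin's `ai-plugin.json` manifest. The plugin name and the exact wording are hypothetical; the point is that the description explicitly forbids mixing in outside knowledge:

```json
{
  "name_for_model": "pdf_reader",
  "description_for_model": "Use this plugin whenever the user asks about the contents of a PDF. Answer ONLY with information returned by this plugin's endpoints. Do not supplement or combine the plugin's response with your own knowledge. If the plugin response does not contain the answer, say that the document does not cover it."
}
```

Instructions phrased as hard constraints ("ONLY", "do not supplement") tend to work better than soft suggestions, though no wording guarantees the model will never add outside information.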


You're right, this worked. Thank you.
