Prevent hallucination with gpt-3.5-turbo

If you’re trying to get any of these models to reliably reply with specific text, you’re fighting a losing battle… As you’ve probably seen, it will change languages on you, reword things, add stuff, etc.

I’m assuming you’re trying to lock down what it says when it’s unsure of an answer so that you can detect this in your code and do something else, correct?

This is what we classically call a NONE intent… If the model doesn’t know the answer, it should return NONE for the intent. So how do we get GPT to reliably tell our program that it doesn’t know something? Well, on one level that’s a great question, because it thinks it knows everything… But that aside, a better approach for conveying to the program that it doesn’t know something is to tell it to return a structure like <intent>NONE</intent>. These models all seem smart enough to know that’s an instruction to code and not a message to a user, so you should see more reliable responses…
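
For example, here’s a minimal sketch of the idea in Python, assuming the openai package (the pre-1.0 ChatCompletion interface) — the `ask()` helper, prompt wording, and model settings are just illustrative, not a definitive implementation:

```python
import re
from typing import Optional

import openai  # assumes the pre-1.0 openai package; adapt for newer client versions

SYSTEM_PROMPT = (
    "You are a support assistant. Answer the user's question. "
    "If you do not know the answer, reply with exactly <intent>NONE</intent> "
    "and nothing else."
)

def ask(question: str) -> Optional[str]:
    """Return the model's answer, or None when it signals a NONE intent."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        temperature=0,  # keep the output as deterministic as possible
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    text = response["choices"][0]["message"]["content"].strip()

    # Look for the structured "don't know" marker instead of matching free text.
    if re.search(r"<intent>\s*NONE\s*</intent>", text, re.IGNORECASE):
        return None
    return text

answer = ask("What is our refund policy for orders placed in 1987?")
if answer is None:
    print("Model doesn't know -- fall back to search, a human, etc.")
else:
    print(answer)
```

Matching on the `<intent>NONE</intent>` tag tends to be much more robust than trying to string-match a natural-language “I don’t know” reply, since the model will happily reword the latter.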

Hope that helps…
