Wrong answer and too long

Hello,

I’m writing a PHP script to test the API.

I send this prompt: “Here is a list of words to analyze: man, woman, child, group of people, dog, bistro, aunt. Which is the intruder word, the one that is neither a human nor an animal? In your answer, I want only the intruder word from the list, with no other text.”

First, I have a doubt about the API_URL that GPT-4 gave me (is it tied to a deprecated, soon-to-be-obsolete model or not?): v1/engines/davinci/completions

Then, despite setting 'temperature' => 0.2, I still get a result in the form of a full sentence, whereas I only want a single word from my list.
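For reference, the call looks roughly like this (a simplified sketch, with the API key as a placeholder):

```php
<?php
// Simplified sketch of the call I am making (API key is a placeholder).
$apiKey = 'YOUR_API_KEY';
$apiUrl = 'https://api.openai.com/v1/engines/davinci/completions';

$payload = [
    'prompt'      => 'Here is a list of words to analyze: man, woman, child, group of people, dog, bistro, aunt. '
                   . 'Which is the intruder word, the one that is neither a human nor an animal? '
                   . 'In your answer, give only the intruder word from the list, with no other text.',
    'max_tokens'  => 60,
    'temperature' => 0.2,
];

$ch = curl_init($apiUrl);
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_POST           => true,
    CURLOPT_HTTPHEADER     => [
        'Content-Type: application/json',
        'Authorization: Bearer ' . $apiKey,
    ],
    CURLOPT_POSTFIELDS     => json_encode($payload),
]);

$response = json_decode(curl_exec($ch), true);
curl_close($ch);

echo $response['choices'][0]['text'] ?? 'no answer';
```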

And the answer is nonsense: The word intruder is a word that is neither a human nor an animal. The word intruder is a word that is neither a human nor an animal. The word intruder is a word that is neither a human nor an

Thank you for your help


Yes, GPT-4 will hallucinate or give wrong information at times.

You can find out more about model endpoint compatibility in the docs.

Here’s a quick and dirty system prompt to achieve what you want with the cheaper and faster GPT-3.5-turbo model…
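For example, something along these lines in PHP (a sketch; the system prompt wording here is just one way to phrase it):

```php
<?php
// Sketch of a chat completion call with gpt-3.5-turbo.
// The system prompt wording is an example, not the only way to write it.
$apiKey = 'YOUR_API_KEY';

$payload = [
    'model'       => 'gpt-3.5-turbo',
    'temperature' => 0,
    'messages'    => [
        [
            'role'    => 'system',
            'content' => 'You are a classifier. The user gives a list of words. '
                       . 'Reply with exactly one word from the list: the one that is neither a human nor an animal. '
                       . 'Output nothing else.',
        ],
        [
            'role'    => 'user',
            'content' => 'man, woman, child, group of people, dog, bistro, aunt',
        ],
    ],
];

$ch = curl_init('https://api.openai.com/v1/chat/completions');
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_POST           => true,
    CURLOPT_HTTPHEADER     => [
        'Content-Type: application/json',
        'Authorization: Bearer ' . $apiKey,
    ],
    CURLOPT_POSTFIELDS     => json_encode($payload),
]);

$response = json_decode(curl_exec($ch), true);
curl_close($ch);

echo trim($response['choices'][0]['message']['content'] ?? '');
// Expected output: bistro
```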

And the same thing as an old-style completion…
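Roughly like this (a sketch; the model name text-davinci-003 and the prompt wording are assumptions):

```php
<?php
// Sketch of an "old style" completion on the legacy /v1/completions endpoint.
// Model name and prompt wording are assumptions, not the exact original.
$apiKey = 'YOUR_API_KEY';

$payload = [
    'model'       => 'text-davinci-003',
    'temperature' => 0,
    'max_tokens'  => 5,
    'prompt'      => "From this list, answer with exactly one word: the one that is neither a human nor an animal.\n"
                   . "List: man, woman, child, group of people, dog, bistro, aunt\n"
                   . "Answer:",
];

$ch = curl_init('https://api.openai.com/v1/completions');
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_POST           => true,
    CURLOPT_HTTPHEADER     => [
        'Content-Type: application/json',
        'Authorization: Bearer ' . $apiKey,
    ],
    CURLOPT_POSTFIELDS     => json_encode($payload),
]);

$response = json_decode(curl_exec($ch), true);
curl_close($ch);

echo trim($response['choices'][0]['text'] ?? '');
// Expected output: bistro
```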


Everything works! :smiley:

Thank you so much!
