How do I prevent GPT-3 from making up fictional, random answers?

I created a fine-tuned model with information about a specific technology, and I structured my prompt as follows (the tech name is replaced with XXXXX here, and in my real code the answers are filled in properly):

`You offer support to people who would like to know more about XXXXX. You act only on knowledge that you have; you don't create fictional information.

Q: What is XXXXX?

A: (here I placed a proper explanation)

Q: What is the difference XXXXX and XXXXXX?

A: (here I placed a proper explanation)

Q: What is the main benefit of using XXXXX?
A: (here I placed a proper explanation)

Q: What is torsalplexity?
A: ?

Q: What is Devz9?
A: ?

Q: What is the role of XXXXXX?
A: (here I placed a proper explanation)

Q: `

These are my settings: `max_tokens: 200, temperature: 0, frequency_penalty: 0.1, presence_penalty: 0.1, stop: ["A:", "Q:", "#"]`.
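For context, here is a sketch of how the request is assembled (the model name is a placeholder for my actual fine-tuned model ID, and the prompt is abbreviated):

```python
# Sketch of the request parameters; the model name is a placeholder,
# and my real code uses the full prompt with proper answers filled in.
prompt = (
    "You offer support to people who would like to know more about XXXXX. "
    "You act only on knowledge that you have; you don't create fictional "
    "information.\n\n"
    "Q: What is XXXXX?\n"
    "A: (proper explanation)\n\n"
    "Q: What is torsalplexity?\n"
    "A: ?\n\n"
    "Q: "
)

request = {
    "model": "davinci:ft-placeholder",  # placeholder fine-tuned model ID
    "prompt": prompt,
    "max_tokens": 200,
    "temperature": 0,
    "frequency_penalty": 0.1,
    "presence_penalty": 0.1,
    "stop": ["A:", "Q:", "#"],
}
```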

Notice that I followed the documentation and answered with a "?" for questions the model doesn't know, like "Q: What is torsalplexity?". I also instructed it in the prompt not to create fictional information or things it doesn't know, but GPT-3 keeps inventing new information anyway.

I already lowered the temperature to 0, and it still creates fictional information when it doesn't know something. For example, I asked how to build something with that technology, and it told me to look into an SDK that doesn't exist.

Does anyone know how to solve this?