Can text-davinci-003 stop fabricating things?

I’m asking text-davinci-003 to write company profiles, but I can’t get it to stop making stuff up.


Write a profile of the company “[Company]”, but only its activities in [industry categories].
Your audience is senior executives in the [category] industry.
“[Company]” is a real company. Your profile must be based ONLY on what, if anything, you actually know about it. If you have insufficient knowledge of “[Company]”, do not attempt to infer anything about it for your writing.
Describe any specific services, products or features the company offers, including their names if you know them.
Remain entirely neutral and objective in your description of the company. The output is not designed for marketing. Avoid language like “leading”, “innovative”, “comprehensive”, “extensive” and “unrivalled” - these are subjective statements you are not qualified to make.
Do not use a conclusion or summary.

I found it making up products and services offered by the company. Pretty dangerous.

Indeed, I have asked it to write about a company’s products and services - if known. I do understand that its training data has a 2021 cutoff and that it is not a search engine or database - which is why I’m asking it not to infer when it does not know.

I have tried being ever more explicit in asking it not to fabricate these things when it has no actual knowledge, but it does not seem able to comply.

Any ideas for whether this can be worked around, please?

You cannot stop GPT model hallucinations.

This is a known issue with GPT models and an ongoing research area.

You can easily search for this topic and read the details.

Lowering the temperature can help but will not completely stop this key issue with GPTs.
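To illustrate the temperature suggestion: here is a minimal sketch of how you might set `temperature` to 0 on a legacy Completions request. The helper function, parameter values, and `max_tokens` choice are my own illustration, not part of the original question; the model name and prompt come from the thread above.

```python
def build_request(prompt: str, temperature: float = 0.0) -> dict:
    """Build keyword arguments for a legacy Completions call.

    temperature=0 makes decoding effectively greedy, which reduces
    (but does not eliminate) fabricated details.
    """
    return {
        "model": "text-davinci-003",
        "prompt": prompt,
        "temperature": temperature,  # 0.0 = most deterministic output
        "max_tokens": 512,           # illustrative limit, tune as needed
    }


params = build_request('Write a profile of the company "[Company]" ...')
# Then pass these to the legacy SDK, e.g.:
# response = openai.Completion.create(**params)
```

Even at temperature 0 the model will still state things it has no grounds for; this only removes sampling randomness, not the underlying tendency to hallucinate.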