Extrapolations from general knowledge generating hypothetical replies presented as fact

I recently performed technical searches in ChatGPT-4 for patents and scientific papers on an engineering problem. I was given a list of scientific publications, fully annotated with title, author name, publication, etc. I was also given a list of US patents, fully annotated. All were false; none existed. When I queried ChatGPT-4 about this, I received the following reply: “Regarding the previous response, I apologize for any confusion caused. The examples of publications I provided were hypothetical suggestions and not specific references to real articles.” My question is this: Is there a way to prevent the extrapolation of general knowledge that is then presented as objective fact? Or, at minimum, is there a way to require that hypothetical responses or extrapolations be declared as such, so that they are not interpreted as objective fact?

There is one question you did not ask that bears on what you seek.

Did you ask for DOIs?

Yes, the DOIs will most likely be hallucinations too, but they point toward a potential solution.

Follow-up question:

Do DOIs exist for papers related to what you seek?
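The value of asking for DOIs is that they can be checked independently of the model: a DOI has a known syntax, and a registered DOI can be looked up through Crossref's public REST API (a 200 response means a real record exists, a 404 means it does not). A minimal sketch of both checks, assuming the Crossref `api.crossref.org/works/{doi}` endpoint and a loose pattern based on Crossref's published DOI conventions (neither comes from this thread):

```python
import re
import urllib.error
import urllib.request

# Loose DOI pattern: "10." plus a 4-9 digit registrant code, a slash,
# then a suffix of characters commonly allowed in DOIs.
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/[-._;()/:A-Za-z0-9]+$")

def looks_like_doi(s: str) -> bool:
    """Syntactic check only: a hallucinated DOI can still pass this."""
    return bool(DOI_PATTERN.match(s.strip()))

def doi_resolves(doi: str, timeout: float = 10.0) -> bool:
    """Network check: ask the Crossref REST API whether the DOI is registered."""
    url = f"https://api.crossref.org/works/{doi}"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False  # 404 and similar: no such registered DOI

# The syntax check alone will not catch a fabricated reference,
# which is why the network lookup matters.
print(looks_like_doi("10.1234/made-up.suffix"))  # True: syntax is fine
print(looks_like_doi("not-a-doi"))               # False
```

Note that a well-formed but invented DOI passes the regex, so only the registry lookup actually separates real references from hallucinated ones.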