Extrapolations from general knowledge generating hypothetical replies presented as fact

I recently ran technical searches in ChatGPT (GPT-4) for patents and scientific papers on an engineering problem. It returned a list of scientific publications, fully annotated with title, author names, publication venue, etc., and a list of US patents, likewise fully annotated. All were false; none existed. When I questioned ChatGPT about this, it replied: “Regarding the previous response, I apologize for any confusion caused. The examples of publications I provided were hypothetical suggestions and not specific references to real articles.” My question is this: Is there a way to prevent the extrapolation of general knowledge that is then presented as objective fact? Failing that, is there at least a way to impose a requirement that hypothetical responses or extrapolations be declared as such, so they are not mistaken for objective fact?
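
The closest workaround I am aware of is to use the API with a system message that instructs the model to flag anything it cannot verify. Below is a minimal sketch using the OpenAI Python SDK; the prompt wording and the `[HYPOTHETICAL]` label are my own inventions, and this approach only asks the model to comply, so it reduces but does not eliminate fabricated citations. What I am asking is whether something stronger than an instruction like this can actually be enforced:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A system message asking the model to label unverified references.
# This is only an instruction, not an enforcement mechanism: the model
# can still fabricate citations without labeling them.
SYSTEM_PROMPT = (
    "When citing patents, papers, or other references, only name items "
    "you are confident actually exist. If you are extrapolating or "
    "offering an illustrative example, prefix it with [HYPOTHETICAL]."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "List US patents on <engineering problem>."},
    ],
)
print(response.choices[0].message.content)
```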

See my related experience with fabricated scientific citations in “Notifying wrong behavior by ChatGPT - ChatGPT - OpenAI Developer Forum”, topic number 249986.