Is there any feedback function for answers from the OpenAI Assistant?

I’ve been using AI assistants, particularly OpenAI’s models, for various tasks, and one thing that struck me is the lack of a robust feedback mechanism. While these models can produce impressive results, they sometimes hallucinate.

Welcome to the Forum!

Hallucinations are, as you rightly point out, still a phenomenon you may encounter when using LLMs. The good news is that the right approach to using these models, including the right prompting strategy, can significantly limit or even nearly eliminate the risk of hallucinations.
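As a minimal sketch of one common prompting strategy along these lines: constrain the model to answer only from context you supply, and give it an explicit "way out" when the answer isn't there. The prompt wording, model name, and example context below are my own illustrative choices, not anything from OpenAI's documentation.

```python
# One common hallucination-limiting pattern: ground the model in supplied
# context and instruct it to refuse rather than guess. The specific prompt
# text and model name here are assumptions for illustration.

SYSTEM_PROMPT = (
    "Answer ONLY from the context provided by the user. "
    "If the context does not contain the answer, reply exactly: "
    "'I don't know based on the provided context.' "
    "Do not guess or invent facts."
)

def build_messages(context: str, question: str) -> list[dict]:
    """Assemble a chat-completions message list using the grounding prompt."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ]

# The actual API call (requires the `openai` package and an API key) would
# look roughly like this:
#   from openai import OpenAI
#   client = OpenAI()
#   resp = client.chat.completions.create(
#       model="gpt-4o-mini",  # placeholder model name
#       messages=build_messages(context, question),
#       temperature=0,        # lower temperature discourages speculative output
#   )

msgs = build_messages("The office opens at 9am.", "When does the office open?")
print(len(msgs))  # 2 messages: system + user
```

Combining a refusal instruction like this with low temperature and retrieval of relevant context (RAG) is a widely used way to keep answers grounded.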

Would you perhaps like to share examples of when you have experienced hallucinations? Forum members might be able to offer some tips and guidance on how to address the issue.