Hi @lixeirocharmoso!
On the hallucinations aspect - it’s super tricky, and requires lots of iteration and a detailed understanding of the underlying data/knowledge. But there is another fantastic thread here (and a blog post) about grounding LLMs, which I highly recommend reading!