Hello,
So, I’ve been testing the visual capabilities of GPT-4o for use in data extraction. The model sometimes hallucinates, which is known to most of us. However, the problem is that it seems to retrieve the same hallucinations from memory when I resend the same request. Is there any way to get around this problem?
Unfortunately, due to data confidentiality, I cannot share the page to reproduce the problem. Any theoretical help is highly appreciated.
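In case it helps, here is a minimal sketch of the kind of request I keep resending, using the standard Chat Completions image input. The actual page is confidential, so the file name, prompt, and parameter values below are just placeholders standing in for my real setup:

```python
# Minimal sketch of the extraction request I resend (placeholders only).
import base64
from openai import OpenAI

client = OpenAI()

# "page.png" stands in for the confidential document page I cannot share.
with open("page.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",
    temperature=0,  # kept low for extraction consistency in my tests
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Extract the field values from this page as JSON."},
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/png;base64,{image_b64}"},
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)
```

When I run this repeatedly on the same page, the incorrect values come back essentially unchanged.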
Thanks in advance!