I’m investigating poor text recognition via the API with GPT-4 Omni.
The original OpenAI chat on chatgpt.com works like a charm: the transcribed text matches the PNG 100%, with no fictional words or sentences.
But if I use an API call to GPT-4o on a one-page text, only the first paragraph comes back almost correct; the rest is fictional.
I tried custom prompts telling it to stop using Tesseract and use its internal vision capabilities instead, but no luck. What should I do?
This can be a bit counterintuitive, but you actually gave it the image as a file attachment, not as an image. There are two different things you can do with images and GPT:

- give it the file as an attachment, which the model can then use when coding (what you did), or
- give it the image specifically as a vision input.

You are doing the wrong one.
This is what it looks like in the Playground (use the right button):
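In API code, that means passing the image as an `image_url` content part in the chat request instead of uploading it as a file. Here is a minimal sketch using the official `openai` Python SDK; the file name `page.png` and the prompt text are just placeholders for your own:

```python
# Minimal sketch: send a local PNG to GPT-4o as a vision input.
# Assumes OPENAI_API_KEY is set in the environment.
import base64
from openai import OpenAI

client = OpenAI()

# Encode the PNG as a base64 data URL so it is passed as an image
# for vision, not as a file attachment.
with open("page.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "Transcribe all text in this image verbatim.",
                },
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/png;base64,{image_b64}"},
                },
            ],
        }
    ],
)
print(response.choices[0].message.content)
```

A base64 data URL works for local files; if the image is already hosted somewhere, you can put a plain https URL in the `image_url` field instead.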