I’m using the gpt-4o-2024-08-06 model, and my requests are automatically generated from a database. I have the option of submitting requests either one request per API call or in batches. Either way, they use the exact same request text. My project is to extract data from birth certificates for a county; the certificates range from 50 to more than 100 years old.
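For context, the two submission paths look roughly like this (a simplified sketch with the openai Python SDK; the file names, prompt text, and custom_id are placeholders, not my actual pipeline):

```python
import base64
from openai import OpenAI

client = OpenAI()

# Encode one tile of the scanned certificate (path is a placeholder)
with open("cert-001-tile-1.png", "rb") as f:
    tile_b64 = base64.b64encode(f.read()).decode()

request_body = {
    "model": "gpt-4o-2024-08-06",
    "messages": [{
        "role": "user",
        "content": [
            {"type": "text", "text": "Extract the fields from this certificate tile."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{tile_b64}"}},
        ],
    }],
}

# Path 1: one request per API call -- this works as expected for me
resp = client.chat.completions.create(**request_body)

# Path 2: the exact same body written as one JSONL line, then batched.
# Each line of requests.jsonl looks like:
# {"custom_id": "cert-001-tile-1", "method": "POST",
#  "url": "/v1/chat/completions", "body": {...same as request_body...}}
batch_file = client.files.create(file=open("requests.jsonl", "rb"), purpose="batch")
batch = client.batches.create(
    input_file_id=batch_file.id,
    endpoint="/v1/chat/completions",
    completion_window="24h",
)
```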
We parse each certificate into tiles, which allows for higher resolution when the pieces are reconstructed in the model. This works flawlessly via the standard API.
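Conceptually the tiling step looks like this (a minimal sketch using Pillow; the 2×4 grid and helper name are illustrative, assuming one request per tile):

```python
from PIL import Image

def tile_image(path, rows=2, cols=4):
    # Split a certificate scan into a rows x cols grid of tiles so each
    # tile retains more detail than a single downscaled image would.
    img = Image.open(path)
    w, h = img.size
    tw, th = w // cols, h // rows
    return [
        img.crop((c * tw, r * th, (c + 1) * tw, (r + 1) * th))
        for r in range(rows)
        for c in range(cols)
    ]

tiles = tile_image("cert-001.png")  # 8 tiles -> 8 requests per certificate
```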
It’s a challenging project and takes 8 separate requests per certificate.
When calling the API one request at a time, everything runs as expected. But in the batch we randomly get responses saying the model cannot analyze images, but that if I give it the data, it can help me organize it.
When extracting the mother’s name, it says “I can’t help you with this request,” as if I were asking for something inappropriate.
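Right now I detect these in the batch output file with something like this (a rough sketch; the refusal markers are just examples of strings I’ve seen, not an exhaustive list):

```python
import json

# Strings seen in the refusal replies (illustrative, not exhaustive)
REFUSAL_MARKERS = ("cannot analyze images", "can't help you with this request")

def find_refusals(output_path):
    # Scan the batch output JSONL and collect the custom_ids whose answer
    # looks like a refusal instead of extracted data, so they can be retried.
    retries = []
    with open(output_path) as f:
        for line in f:
            rec = json.loads(line)
            body = (rec.get("response") or {}).get("body") or {}
            choices = body.get("choices") or []
            text = (choices[0]["message"].get("content") or "") if choices else ""
            if any(m in text.lower() for m in REFUSAL_MARKERS):
                retries.append(rec["custom_id"])
    return retries
```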
Does the batch process use a different model? Why would there be such a huge difference in capabilities and responses?
Thank you!