Differences in PDF Interpretation Between Custom GPTs and GPT-4: "Analyzing" versus "Reading Documents" Functions

So, I’ve been experimenting with GPT-4 and a custom GPT model, trying to get them to understand text from a complex PDF full of forms and tables. GPT-4 does a great job, but the new custom GPT instance struggles, even with lots of guidance. One thing I noticed is that GPT-4 shows a “Reading Documents” status while processing the PDF, while the custom GPT shows “Analyzing.” Could this difference in status messages be why the custom GPT isn’t as good at understanding the text? Any ideas on how to improve its performance to match GPT-4’s PDF reading ability?

[Screenshot: GPT-4 showing the "Reading Documents" status]

“Analyzing” status shown here (sorry, new posters are limited to one image per post)

[Screenshot: custom GPT showing the "Analyzing" status]

One is retrieval; the other is the code interpreter.

Try adjusting your prompt.
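For context, the same two tools are exposed in the Assistants API beta that shipped alongside GPTs. A minimal sketch of enabling retrieval rather than the code interpreter is below; the file name, instructions, and model string are assumptions, and this beta API has since been revised, so treat it as illustrative only.

```python
from openai import OpenAI

client = OpenAI()

# Upload the PDF so it can be attached to an assistant.
pdf = client.files.create(file=open("forms.pdf", "rb"), purpose="assistants")

# "retrieval" corresponds to the "Reading Documents" behaviour; swapping it
# for {"type": "code_interpreter"} gives the "Analyzing" path, where the model
# writes Python in a sandbox to pull text out of the file itself.
assistant = client.beta.assistants.create(
    name="PDF reader",
    instructions="Answer questions using the attached PDF.",
    model="gpt-4-1106-preview",
    tools=[{"type": "retrieval"}],
    file_ids=[pdf.id],
)
```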


Any ideas for getting the custom GPT to use the retrieval method? With identical prompts, GPT-4 uses it by default, whereas a blank-slate custom GPT sticks to the code interpreter despite vigorous prompting otherwise. It looks like the custom GPT is using an inferior Python library to extract the text.
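To illustrate what that code-interpreter path roughly amounts to, here is a sketch of plain-text extraction with pypdf; the file name is an assumption, and which library the sandbox actually uses isn't documented. Extraction like this flattens tables and form fields into a single text stream, which is one reason structured PDFs come through poorly.

```python
from pypdf import PdfReader

# Naive text extraction of the sort a code-interpreter session might run.
reader = PdfReader("forms.pdf")
text = "\n\n".join(page.extract_text() or "" for page in reader.pages)

# Column alignment and field/value pairing are largely lost in the output.
print(text[:2000])
```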

After spending a lot more time trying to get the custom GPT to read PDFs, it has become apparent that the "Reading Documents" status reflects built-in retrieval that extracts information from the document without relying on external Python libraries. The absence of this functionality in custom GPTs is a significant limitation, likely a cost-saving decision. For the time being, I'm in a holding pattern…
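One interim workaround, sketched below under the assumption that you can pre-process the file locally, is to extract the text and tables yourself (pdfplumber is used here purely as an example) and upload the resulting plain-text file to the custom GPT instead of the raw PDF, so nothing depends on the sandbox's PDF parsing.

```python
import pdfplumber

# Pre-extract text and tables locally, then upload forms.txt to the GPT
# instead of the raw PDF.
with pdfplumber.open("forms.pdf") as pdf, open("forms.txt", "w") as out:
    for i, page in enumerate(pdf.pages, start=1):
        out.write(f"--- Page {i} ---\n")
        out.write((page.extract_text() or "") + "\n")
        for table in page.extract_tables():
            # Render each table row as pipe-separated cells to keep structure.
            for row in table:
                out.write(" | ".join(cell or "" for cell in row) + "\n")
            out.write("\n")
```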

This issue has since been resolved by OpenAI; see: GPTs not reading PDFs in messages anymore