Davinci model much worse than gpt-3.5-turbo for data extraction?

Hi guys,

I'm trying to extract information (no "logic" involved, just finding the data and returning it) from unstructured plain-text documents and have it returned as JSON. I tried both "gpt-3.5-turbo" and "davinci" with their default parameters. I noticed "davinci" often returns garbage, meaning not even valid JSON, while "gpt-3.5-turbo" seems to work great. Is this expected, or am I doing something wrong? Especially since the official extraction examples on the OpenAI website use the "davinci" model.


GPT-3.5-Turbo is simply a more mature model that has been fine-tuned to follow instructions, such as returning JSON objects. The documentation covers all models, but davinci is no longer state of the art.
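For what it's worth, a minimal sketch of JSON extraction with gpt-3.5-turbo might look like the following. The document text and field names are made up for illustration; the prompt wording is just one way to ask for strict JSON, and validating the reply with `json.loads` catches the garbage-output case you describe.

```python
import json

# Hypothetical example document; any unstructured text would do.
DOC = "Invoice 1047 was issued to Acme Corp on 2023-03-01 for a total of $250."

def build_messages(document):
    """Build a chat prompt that asks the model for strict JSON output."""
    return [
        {"role": "system",
         "content": ("Extract the invoice number, customer, date, and total "
                     "from the user's text. Respond with valid JSON only, "
                     "using null for any field not present in the text.")},
        {"role": "user", "content": document},
    ]

def parse_extraction(raw):
    """Validate the model's reply; raises ValueError if it is not valid JSON."""
    return json.loads(raw)

# The actual API call (requires the `openai` package and an API key):
# import openai
# reply = openai.ChatCompletion.create(
#     model="gpt-3.5-turbo",
#     messages=build_messages(DOC),
#     temperature=0,  # deterministic output tends to help for extraction
# )
# data = parse_extraction(reply["choices"][0]["message"]["content"])
```

Setting `temperature=0` and telling the model to use `null` for missing fields are both common ways to reduce the invented-data problem mentioned below.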


Ok, thanks for the quick reply. I thought "davinci" would be better and safer for tasks like this, for example not inventing data that is asked for but not contained in the documents.