A better model than text-davinci-003

I am developing code that reads PDFs so the system can receive questions and answer based on what it finds in them. The model I am currently using is text-davinci-003, and it works very well for me, but I would like to use one with more tokens. Any recommendations?

You could use GPT-3.5 Instruct.

OK, but that has the same token limit as davinci-003. I need one with a 16,000-token context.

You can now upload PDFs to the model and run queries on them, see:
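The reply above presumably refers to the Assistants API with the retrieval tool. A minimal sketch with the openai Python SDK (v1.x), assuming the original Assistants API release; the `retrieval` tool and `file_ids` parameter were later replaced by `file_search` and vector stores, and the file name and question are illustrative:

```python
from openai import OpenAI

client = OpenAI()

# Upload a PDF so the assistant can search it.
pdf = client.files.create(file=open("report.pdf", "rb"), purpose="assistants")

# Create an assistant with the retrieval tool and the uploaded file attached.
assistant = client.beta.assistants.create(
    model="gpt-3.5-turbo-1106",
    instructions="Answer questions using the attached PDF.",
    tools=[{"type": "retrieval"}],
    file_ids=[pdf.id],
)

# Ask a question in a thread and start a run (the run must then be polled
# until it completes before reading the assistant's reply).
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id, role="user", content="What does the PDF say about revenue?"
)
run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)
```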


You cannot use “assistants” with gpt-3.5-turbo-instruct.

Nor would you EVER want to use assistants at all.

If you want large-token completion with 16k context at the same price you were paying for text-davinci-003, you can fine-tune its replacement, davinci-002.
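A minimal sketch of starting such a fine-tune with the openai Python SDK (v1.x), assuming you already have a prompt/completion JSONL file; the file name `training.jsonl` is illustrative:

```python
from openai import OpenAI

client = OpenAI()

# Upload the prompt/completion training data.
training_file = client.files.create(
    file=open("training.jsonl", "rb"),  # illustrative file name
    purpose="fine-tune",
)

# Start a fine-tuning job on the davinci-002 base model.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="davinci-002",
)
print(job.id, job.status)
```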

What they don’t tell you is that while text-davinci-003 is a fully capable 175B-parameter model, davinci-002 is likely a 20B-50B model similar to gpt-3.5-turbo, which only works because of the millions of fine-tuning examples that the base model doesn’t come with.

The model is essentially non-functional as a completion engine compared to the capacity of what it replaced. So you would have to concentrate on a very narrowly focused task area to fine-tune it, and fine-tune with 100,000+ examples, as is being done with open-source models; a sketch of the training-data format follows.
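Base models like davinci-002 are fine-tuned on prompt/completion pairs rather than chat messages. A sketch of building such a JSONL file in Python; the two example pairs are purely illustrative, and a real fine-tune would need far more data, as noted above:

```python
import json

# Illustrative prompt/completion pairs for one narrowly focused task.
examples = [
    {"prompt": "Question: What is the refund window?\nAnswer:",
     "completion": " 30 days from the date of purchase.\n"},
    {"prompt": "Question: Do you ship internationally?\nAnswer:",
     "completion": " Yes, to most countries via courier.\n"},
]

# Write one JSON object per line, as the fine-tuning endpoint expects.
with open("training.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```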

I'm currently working on this as well, but when I get to this line of code:

```python
llm = OpenAI(temperature=0, model_name="text-davinci-003", max_tokens=-1)
```

I get this error. Any ideas?

This incorrect parameter was resolved in another topic. Also note that text-davinci-003 is expensive and will be turned off in under a month, so you must migrate to another model, and likely to a chat endpoint, depending on the application.
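For reference, a minimal migration sketch using LangChain's chat-model wrapper, assuming a LangChain version where `ChatOpenAI` is importable from `langchain.chat_models` (the import path varies by version); the model name and token limit are illustrative choices, not the only option:

```python
from langchain.chat_models import ChatOpenAI  # import path depends on LangChain version

# Replace the text-davinci-003 completion LLM with a 16k-context chat model,
# and use an explicit positive max_tokens instead of -1.
llm = ChatOpenAI(
    temperature=0,
    model_name="gpt-3.5-turbo-16k",
    max_tokens=512,
)
```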
