You cannot use “assistants” with gpt-3.5-turbo-instruct.
Nor would you EVER want to use assistants at all.
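For context, gpt-3.5-turbo-instruct is served only by the legacy completions endpoint, so there is nothing for the Assistants API to attach to. A minimal sketch of calling it directly with the openai Python SDK (the prompt here is just an illustration):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# gpt-3.5-turbo-instruct is a completions model, not a chat/assistants model
response = client.completions.create(
    model="gpt-3.5-turbo-instruct",
    prompt="Summarize the plot of Hamlet in one sentence:",
    max_tokens=100,
)
print(response.choices[0].text)
```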
If you want large-token completions with a 16k context, at roughly the same price that text-davinci-003 used to cost, you can fine-tune its replacement, davinci-002.
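A sketch of kicking off that fine-tune with the openai Python SDK. The filename is hypothetical; note that davinci-002, as a base completion model, takes prompt/completion-style training data rather than chat messages:

```python
from openai import OpenAI

client = OpenAI()

# Upload a JSONL training file of prompt/completion pairs
training_file = client.files.create(
    file=open("focused_task.jsonl", "rb"),  # hypothetical filename
    purpose="fine-tune",
)

# Start a fine-tuning job on the davinci-002 base model
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="davinci-002",
)
print(job.id, job.status)
```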
What they don’t tell you is that while text-davinci-003 was a fully capable 175B-parameter model, davinci-002 is likely a 20B-50B model of similar scale to gpt-3.5-turbo, and gpt-3.5-turbo only works as well as it does because of the millions of fine-tuning examples it received, which the base model doesn’t come with.
Out of the box, the model comes nowhere near the capability of what it replaced as a completion engine. So you would have to concentrate on a very narrowly focused task area when fine-tuning it, and fine-tune with 100,000+ examples, the way it is being done on open-source models.
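To make the "narrow task, huge dataset" point concrete, here is a sketch of building such a training file (the same hypothetical focused_task.jsonl from the earlier snippet). The sentiment task and examples are purely illustrative:

```python
import json

# Each training example for a completions-style base model is a
# prompt/completion pair; the task and fields here are illustrative.
examples = [
    {"prompt": "Classify sentiment: 'The battery died in an hour.' ->",
     "completion": " negative"},
    {"prompt": "Classify sentiment: 'Setup took thirty seconds.' ->",
     "completion": " positive"},
]

with open("focused_task.jsonl", "w") as f:
    for ex in examples:  # in practice, 100,000+ of these for one narrow task
        f.write(json.dumps(ex) + "\n")
```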