If a plugin is using the text-davinci-003 engine in the /generate and /complete endpoints, i.e. generated_text = openai.Completion.create(engine="text-davinci-003", ...), can these be upgraded to the new releases from today, such as gpt-3.5-turbo-16k, or is that reserved just for the /search and /playground API endpoints?
It’s a different endpoint and model type, but just as easy to try out. See the GPT Guide. The new models use Chat Completions, whereas davinci uses Completions.
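Roughly, the syntax difference looks like this (gpt-3.5-turbo-16k is just an example model here, and the input is a placeholder):

import openai

text = "example prompt"  # placeholder input

# Old Completions style:
# openai.Completion.create(model="text-davinci-003", prompt=text, max_tokens=256)

# New Chat Completions style takes a list of messages instead of a prompt string
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-16k",
    messages=[{"role": "user", "content": text}],
    max_tokens=256,
)
generated_text = response["choices"][0]["message"]["content"]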
Thanks @novaphil, I remember months ago getting errors when I tried to use gpt-4-32k-0314 with those two endpoints and had to revert to text-davinci-003, which is what is used now in production on the plugin store (while I am using gpt-4-32k-0613 for the other two endpoints, /playground and /search).
I just tested on a local server and it looks like the plugin can handle gpt-4-32k-0613 for /complete and /generate, but I will set the max_tokens threshold below 32k, since I remember errors can trigger if prompt tokens + completion tokens exceed the model's maximum context length. Cheers!
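Something like this is what I have in mind for that guard (the 32k figure and the names are just placeholders on my side; tiktoken does the counting):

import tiktoken

prompt_text = "example prompt"  # placeholder; in the plugin this comes from the request
CONTEXT_WINDOW = 32768          # gpt-4-32k family limit
RESERVED_FOR_COMPLETION = 3700  # the max_tokens I plan to request

enc = tiktoken.encoding_for_model("gpt-4")
prompt_tokens = len(enc.encode(prompt_text))

# Chat messages add a few tokens of overhead per message, so leave some slack
if prompt_tokens + RESERVED_FOR_COMPLETION > CONTEXT_WINDOW:
    raise ValueError(f"Prompt ({prompt_tokens} tokens) plus completion budget exceeds {CONTEXT_WINDOW}")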
@novaphil I appreciate your comments. In my code I am already using the better models on the other endpoints, but I also need the /generate and /complete endpoints because my plugin handles numerical data pulled from the 3rd-party API, so I was trying to upgrade those two endpoints from the text-davinci engines currently in use.
For example:
import openai
from flask import request  # assuming a Flask handler

data = request.get_json()
text = data.get('text')
# Completion.create expects the input under "prompt", not "text"
completed_text = openai.Completion.create(
    model="text-davinci-003",
    prompt=text,
    max_tokens=3700,
).choices[0].text
This may be a better question to post on the forex-gpt project's GitHub. /generate, /complete, /search, and /playground are custom endpoints for that project; they are not OpenAI endpoints.
But like I said, that project needs to switch to using the ChatCompletion syntax.
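As a rough sketch, the /complete handler above could become something like this once it moves to ChatCompletion (assuming the same Flask-style request handling; the model name is just an example and the max_tokens value mirrors the earlier snippet):

import openai
from flask import request  # assuming the same Flask-style handler

data = request.get_json()
text = data.get('text')

# The messages list replaces the single prompt string
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-16k",  # example chat model; gpt-4 variants work the same way
    messages=[{"role": "user", "content": text}],
    max_tokens=3700,
)
completed_text = response["choices"][0]["message"]["content"]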