Q: Re upgrading plugin models/engines to gpt-3.5-turbo-16k and gpt-4-32k-0613

Following on Sam’s announcement today: Function calling and other API updates

If a plugin is using the text-davinci-003 engine in its /generate and /complete endpoints, i.e. generated_text = openai.Completion.create(engine="text-davinci-003", ...), can these be upgraded to the new releases from today, such as gpt-3.5-turbo-16k, or is that reserved just for the /search and /playground endpoints?

Appreciate any insights on this, thanks!

It’s a different endpoint and model type, but just as easy to try out; see the GPT Guide. The new models use Chat Completions, whereas davinci uses Completions.

Thanks @novaphil, I remember months ago getting errors when I tried to use gpt-4-32k-0314 with those two endpoints and had to revert to text-davinci-003, which is what the plugin in the plugin store is using in production now (while I am using gpt-4-32k-0613 for the other two endpoints, /playground and /search).

I just tested on a local server and it looks like the plugin can handle gpt-4-32k-0613 for /complete and /generate, but I will set the max_tokens threshold below 32k, since I remember errors can trigger if the prompt tokens plus completion tokens exceed the model's maximum context length. Cheers!
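(For reference, a minimal sketch of how that token budget could be computed, assuming the tiktoken package; the 32k context size and headroom value are illustrative, not taken from the plugin code.)

    # Sketch: keep prompt tokens + completion tokens within the model's context window.
    import tiktoken

    CONTEXT_LIMIT = 32768  # gpt-4-32k context window (assumed)

    def safe_max_tokens(prompt: str, headroom: int = 256) -> int:
        enc = tiktoken.encoding_for_model("gpt-4")  # 32k variants share the gpt-4 encoding
        prompt_tokens = len(enc.encode(prompt))
        return max(1, CONTEXT_LIMIT - prompt_tokens - headroom)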

@novaphil update: I had to revert back to text-davinci-003, as the new models on the /complete and /generate endpoints caused the plugin to send text-based requests to the 3rd-party API I am using, which aren’t supported, so I don’t think I can use them in the current version of my code: Errors when upgrading completion and generate endpoints to new engine · Issue #2 · hatgit/forex-gpt · GitHub

To use the GPT-3.5/GPT-4 models you need to use the ChatCompletion endpoint. Try gpt-3.5-turbo or gpt-4. The GPT-4 32k model is an extremely limited rollout.

import openai

# Chat models replace the Completion prompt with a list of role-tagged messages.
completion = openai.ChatCompletion.create(
  model="gpt-3.5-turbo",
  messages=[
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"}
  ]
)

# The reply is in choices[0].message (a dict with "role" and "content").
print(completion.choices[0].message)

@novaphil I appreciate your comments. In my code I am already using the ChatCompletion endpoint with the better models, but I also need the /generate and /complete endpoints because my plugin handles numerical data that it pulls from the 3rd-party API, so I was trying to upgrade the models for those two endpoints from the text-davinci engine currently in use.

For example:

    data = request.get_json()
    text = data.get('text')
    completed_text = openai.Completion.create(
        model="text-davinci-003",
        prompt=text,
        max_tokens=3700
    )

and

    prompt = data.get('prompt')
    temperature = data.get('temperature', 0.5)
    generated_text = openai.Completion.create(
        engine="text-davinci-003",
        prompt=prompt,
        temperature=temperature,
        max_tokens=3700,
        n=2,
        stop=None
    )

This may be a better question to post on the forex-gpt project’s GitHub. /generate, /complete, /search, and /playground are custom endpoints for that project; they are not OpenAI endpoints.

But like I said, that project needs to switch to using the ChatCompletion syntax.
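(A minimal sketch of what that switch could look like for the /generate handler above; the message wrapping and the choice of gpt-3.5-turbo-16k are assumptions, not the actual forex-gpt code.)

    prompt = data.get('prompt')
    temperature = data.get('temperature', 0.5)
    # The single prompt string becomes a messages list for the chat endpoint.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-16k",
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
        max_tokens=3700,
        n=2
    )
    generated_text = response.choices[0].message.content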

@novaphil thank you! I updated both calls from Completion to ChatCompletion, and the upgraded models are working so far! :grinning: :grinning: :clap: :clap: :clap: