Switching from text-davinci-003 to gpt-3.5-turbo

Please help me switch from the text-davinci-003 model to the gpt-3.5-turbo model. I can’t make the switch myself, so please check and fix the simple chat code below to move it to the gpt-3.5-turbo model. I think it will help not only me but others as well.
```python
import openai
import tkinter as tk

# Set up OpenAI API credentials
openai.api_key = "YOUR_API_KEY"

# Set up the OpenAI model to use (Text-Davinci-003)
model_engine = "text-davinci-003"

# Define a function to generate a response to a given question
def generate_response(question):
    response = openai.Completion.create(
        engine=model_engine,
        prompt=question,
        max_tokens=1024,
        n=1,
        stop=None,
        temperature=0.7,
    )
    return response.choices[0].text.strip()

# Set up the GUI
root = tk.Tk()

# Create the input window
input_window = tk.Frame(root)
input_label = tk.Label(input_window, text="Enter your question:")
input_label.pack(side=tk.LEFT)
input_entry = tk.Entry(input_window)
input_entry.pack(side=tk.LEFT)

# Create the response window
response_window = tk.Frame(root)
response_label = tk.Label(response_window, text="Response:")
response_label.pack(side=tk.LEFT)
response_text = tk.Text(response_window)
response_text.pack(side=tk.LEFT)

# Define a function to generate a response and display it in the response window
def get_response():
    question = input_entry.get()
    response = generate_response(question)
    response_text.delete("1.0", tk.END)
    response_text.insert(tk.END, response)

# Create the "Get Response" button
button = tk.Button(root, text="Get Response", command=get_response)

# Pack everything into the GUI
input_window.pack()
response_window.pack()
button.pack()

# Start the main loop
root.mainloop()
```

I don’t think you can use the gpt-3.5-turbo model for completions. I could be wrong, but the last time I checked, it’s only for chat.

While it might be somewhat counterintuitive, doing completions on gpt-3.5-turbo works perfectly fine. You can just send a System message with the text and the API will return an Assistant message with the completion.
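Concretely, the switch from a prompt string to the chat messages format looks like this (an illustrative sketch; the question text is just an example):

```python
# Old Completion API: the prompt is a single string.
prompt = "What is the capital of France?"

# New ChatCompletion API: the prompt becomes a list of role/content
# dicts. A single "system" message carries the text, as described
# above; the API then replies with an "assistant" message.
messages = [{"role": "system", "content": prompt}]
```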

Regarding making the switch, I’m not familiar with the environment and tools you’re using, but based on the code you provided you should only have to make a few small modifications. Change model_engine from "text-davinci-003" to "gpt-3.5-turbo", change the creation call from openai.Completion.create to openai.ChatCompletion.create, pass the model name via the model parameter rather than engine, switch to the new messages format for the prompt, and change the return line to use .message.content instead of .text. That should be it.

Here’s an untested example.

```python
model_engine = "gpt-3.5-turbo"

def generate_response(question):
    response = openai.ChatCompletion.create(
        model=model_engine,
        messages=[
            {"role": "system", "content": question},
        ],
        max_tokens=1024,
        n=1,
        stop=None,
        temperature=0.7,
    )
    return response.choices[0].message.content.strip()
```
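For reference, the response object is shaped roughly like the dict below (hand-written example data, not a real API reply), which is why the return line drills into .choices[0].message.content rather than .choices[0].text:

```python
# Illustrative shape of a ChatCompletion response (made-up data).
sample_response = {
    "choices": [
        {
            "index": 0,
            "message": {"role": "assistant", "content": "  Paris.  "},
            "finish_reason": "stop",
        }
    ]
}

# Equivalent of response.choices[0].message.content.strip(),
# written with plain dict access:
reply = sample_response["choices"][0]["message"]["content"].strip()
print(reply)
```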

A relevant guide about chat completion can be found here: