Why does the text-davinci-003 model work in the bot, but gpt-3.5-turbo does not work?

    def handle_message(message, bot):
        prompt = message.text
        response = None
        try:
            # Generating a response using the OpenAI API
            response = response_to_gpt(prompt)
        except Exception as e:
            # "An error occurred! Please try again." (user-facing message in Russian)
            response = f"Произошла ошибка! Попробуйте ещё раз. \n {e}"
        bot.reply_to(message, response)

    # Splitting response into multiple messages if it exceeds the maximum length allowed by Telegram API
    if response.choices and response.choices[0].text:
        response_text = response.choices[0].text
    else:
        response_text = ""
    while len(response_text) > 0:
        response_chunk = response_text[:MAX_MESSAGE_LENGTH]
        response_text = response_text[MAX_MESSAGE_LENGTH:]
        if response_chunk:
            # Replying to the user with the current chunk of the response
            bot.reply_to(message, response_chunk)

    # Save the query context to the database
    if text:
        with get_conn() as conn:
            c = conn.cursor()
            c.execute("INSERT INTO context (user_id, text) VALUES (?, ?)", (user_id, text))

    except Exception as e:
        # "An error occurred! Please try later." (user-facing message in Russian)
        bot.reply_to(message, f"Произошла ошибка! Попробуйте позже. \n {e}")
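The handler above assumes a `get_conn()` helper that was not shown; a minimal sqlite3-based sketch (the file name and table schema are my assumptions) could look like this. A sqlite3 connection used as a context manager commits on success and rolls back on error, so no explicit `conn.commit()` is needed inside the with-block:

```python
import sqlite3

def get_conn(db_path="bot_context.db"):
    # Open (or create) the database and make sure the context table exists.
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS context (user_id INTEGER, text TEXT)"
    )
    return conn
```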

def response_to_gpt(message):
    response = openai.Completion.create(
        messages=[{"role": "user", "content": f"Запрос: {message}"}]  # "Запрос" = "Request"
    )
    return response

Above is the missing part that needs to be updated. You got "messages" correct, while the completion endpoint instead uses "prompt". However, gpt-3.5-turbo uses the chat completions method instead.
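To make the difference concrete, here is a sketch of the two request shapes as plain dicts (no network call is made; the Russian text is just a sample):

```python
# The legacy completion endpoint takes a single "prompt" string,
# while the chat endpoint takes a list of role "messages".
prompt_request = {
    "model": "text-davinci-003",
    "prompt": "Запрос: привет",  # completion endpoint: one plain string
}
chat_request = {
    "model": "gpt-3.5-turbo",
    "messages": [  # chat endpoint: a list of role dicts
        {"role": "user", "content": "Запрос: привет"}
    ],
}
```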

Here's an example with all the API parameters included, customized for reliable Russian compositions:

# call the chat API using the openai package and model parameters
response = openai.ChatCompletion.create(
    messages    = list_of_messages,
    model       = "gpt-3.5-turbo",
    temperature = 0.4,  # 0.0 - 2.0
    max_tokens  = 1500, # maximum response length
    top_p       = 0.9,
    presence_penalty  = 0.0, # penalties: -2.0 - 2.0
    frequency_penalty = 0.0, # frequency = cumulative score
    n           = 1,
    stream      = False,
    user        = "customer_user-id",
)
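`list_of_messages` above is a placeholder variable; a minimal sketch of what it might hold (the Russian content is just an example):

```python
# Example shape for the list_of_messages placeholder used above
list_of_messages = [
    # "You are a helpful assistant; answer in Russian."
    {"role": "system", "content": "Ты полезный помощник, отвечай по-русски."},
    # "Request: how are you?"
    {"role": "user", "content": "Запрос: как дела?"},
]
```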

That should get you working quickly.

You can read the "API reference" on the left of this forum for more about "chat" and how to send multiple messages.

Good luck!

I removed the chat call because there was an error: Invalid URL (POST /v1/chat/chat/completions)

I'm getting a "text" error, and I don't understand what it means.

Python 3.7.1 - 3.9.x
OpenAI latest version (pip install --upgrade openai)

Here is a working chatbot demo I wrote as an instructional example that shows:

  • construction of role messages
  • proper use of chat API
  • chat history with rudimentary management of past turns
  • streaming words and parsing the delta chunks of the stream
  • no error handling of API errors

import openai
openai.api_key = "sk-12341234"
system = [{"role": "system",
           "content": """You are chatbot who enjoys weather talk."""}]
user = [{"role": "user", "content": "brief introduction?"}]
chat = []
while not user[0]['content'] == "exit":
    response = openai.ChatCompletion.create(
        messages = system + chat[-20:] + user,
        model="gpt-3.5-turbo", temperature=0.1, stream=True)
    reply = ""
    for delta in response:
        if not delta['choices'][0]['finish_reason']:
            # the first delta chunk may contain only the role, so default to ""
            word = delta['choices'][0]['delta'].get('content', '')
            reply += word
            print(word, end="")
    chat += user + [{"role": "assistant", "content": reply}]
    user = [{"role": "user", "content": input("\nPrompt: ")}]

Hopefully you find what you need and can expand from there.

(I also have a print function that word-wraps to a line length, and other helpers omitted here to keep the example in scope.)