Content not being returned from GPT-3.5 Turbo

I’m trying to create a Python text game with unique NPC interactions, where GPT greets the user and talks to them about a specific quest or idea in the game’s storyline. The code I have been using seemed to work fine until recently, when I hooked it up to the main framework (a map system) and suddenly no content was being returned at all, not even an error message, although the response object itself seems to come back fine. I would love some guidance, as this is a major project not only for myself but for others too. Apologies for all the other code surrounding it; I’ll happily admit that my code organisational skills are completely wack :sweat_smile:


import os
import openai


class ai_text:

  def __init__(self):
    openai.api_key = os.environ["API KEY"]
    self.raw = False  # default: return just the message text, not the full response object

  def run_conversation(self, convo_list, raw=False):
    #messages = [{"role" : "user", "content": "What's the weather like in plymouth?"}]
    response = openai.ChatCompletion.create(model="gpt-3.5-turbo-0613", messages=convo_list)
    if not raw:
      # return just the text of the assistant's reply
      response_message = response["choices"][0]["message"]["content"]
      return response_message
    else:
      # return the full API response object
      return response

def output_ai(npc_info, attitude_message, convo_list, secrets=False, raw=False):
  '''
  npc_info is laid out as follows:
  [npc_name, user_name, quest, sector_name]
  - the name of the NPC the AI is playing
  - the name of the user
  - the quest (boolean False if there is no quest)
  - the name of the sector
  '''
  ai = ai_text()
  
  attitude_message = "You are roll-playing a NPC called " + npc_info[0] + " in " + npc_info[3] + " in a fantasy setting. The user, who is called " + npc_info[1] + ", is currently having a conversation with you. Greet and respond with the necessary attitue and make sure to stay in character. Keep each sentance relativly short, and finish each response with a question and question mark or a goodbye to end the chat. Do not say anything apart from the response and the response alone and do not speak for the user, say only the one line from your character. " + attitude_message + ". "
  
  if npc_info[2]:
    attitude_message = attitude_message + "You must remind the user to go on a quest, which is to " + str(npc_info[2]) + "."

  if secrets:
    # this has to run before the system message is appended to the conversation, otherwise it has no effect
    attitude_message = attitude_message + "You also know something the user does not, you need to talk about " + secrets

  formatted = []

  formatted.append({"role" : "system", "content" : attitude_message})
  
  for i, message in enumerate(convo_list):
    if i % 2 == 0:  # the NPC always starts the talking, so even indexes are the user's responses
      formatted.append({"role" : "user", "content" : message})
    else:
      formatted.append({"role" : "assistant", "content" : message})

  raw_response = ai.run_conversation(formatted, raw)

  if not raw:
    # strip anything before the last colon (e.g. a "Name:" prefix); [-1] is safe even when there is no colon
    response = raw_response.rsplit(":", 1)[-1].strip()

    response_list = response.split(" ")
  
    # simple word-wrap: lines of at most 62 characters
    remain = 62
    text = []
    line = []
    for word in response_list:
      if (remain - (len(word) + 1)) > 0:
        line.append(word)
        remain -= len(word) + 1
      else:
        text.append(line)
        line = [word]  # start the new line with the word that did not fit
        remain = 62 - (len(word) + 1)
    if line:
      text.append(line)  # keep the final partial line
    
    for line in text:
      print("| ", end="")
      for word in line:
        d_print(word + " ", end="")
      print(" |")
    print("\_________________________________________________________________/ ")#ending text box

  else:
    print("database:", db["convo"])
    print()
    print("formatted list:", formatted)
    # pull the text out of the raw response object so the rest of the function still works
    response = raw_response["choices"][0]["message"]["content"]

  formatted.append({"role" : "assistant", "content" : response})

  if not raw:
    if not response.endswith("?"):  # the NPC said goodbye instead of asking a question
      leave = input("Press enter to leave ")
      return False
    
  user_response = user_input()

  formatted.append({"role" : "user", "content" : user_response})

  return formatted  # hand the updated conversation back to the caller
  
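For reference, the map framework ends up calling output_ai along these lines (the NPC name, attitude and sector here are just placeholder examples, not the real game data):

npc_info = ["Barkeep", "player_name", False, "the tavern sector"]  # example values only
convo_so_far = []  # empty on the first greeting

output_ai(npc_info, "Be gruff but friendly", convo_so_far)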

Welcome @The_Craw

One of the main reasons you’re not seeing any error messages is the lack of error handling in the code.

Use a try-except block around your API call:

try:
    response = openai.ChatCompletion.create(model="gpt-3.5-turbo-0613", messages=convo_list)
except Exception as e:
    print(f"Error: {e}")
    return None
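
Beyond a bare print, it is also worth checking whether the API actually sent any text back before the rest of your code tries to use it. A sketch of what that could look like, reworked from your run_conversation (the exact messages printed are just illustrative):

import os
import openai

openai.api_key = os.environ["API KEY"]

def run_conversation(convo_list, raw=False):
    """Call the chat API and make each failure mode visible."""
    try:
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo-0613",
            messages=convo_list,
        )
    except Exception as e:
        print(f"Error calling the API: {e}")
        return None

    if raw:
        return response  # the full response object

    content = response["choices"][0]["message"]["content"]
    if not content:
        # an empty string here means the model genuinely returned no text
        print("Warning: empty message in response:", response)
    return content

With this, an exception, an empty reply, and a normal reply all look different, instead of all silently producing nothing.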

Remember, the model itself is frequently smart enough that I can paste in some code like this, explain what it’s doing wrong, or ask about the actual error message, and it’s pretty godlike at diagnosing problems that way. But you need to be specific about which line is malfunctioning and what you expected it to do vs. what it actually did.

I’ve learned not to underestimate it (the AI): I ask it things I don’t think it can even figure out, and then it DOES 99% of the time!


Ah yes, I see now how this can help fix my problem. Thank you for your insight!