Chat response model with parameters

Hi there,

I'm building a TTS chatbot where I get the API to return an appropriate response to the user. I would also like additional parameters, such as whether the user prompt indicates someone in need of help (which would trigger a separate system call). I can't figure out how to do that; any ideas?

I currently use this code, but I would like to join the two API calls into one request:

from datetime import datetime
import json

# client, conversation_history and save_conv_history are defined elsewhere
def generate_response(prompt):
    timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    conversation_history.append({"timestamp": timestamp, "role": "user", "content": prompt})
    messages = [
        {"role": "system", "content": "<general instructions>"}
    ] + conversation_history
    
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=messages,
        max_tokens=100,
        n=1,
        stop=None,
        temperature=0.5
    )
    assistant_response = response.choices[0].message.content.strip()
    timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    conversation_history.append({"timestamp": timestamp, "role": "assistant", "content": assistant_response})
    
    # Store conversation history
    save_conv_history({"timestamp": timestamp, "role": "user", "content": prompt})
    save_conv_history({"timestamp": timestamp, "role": "assistant", "content": assistant_response})

    # Determine if user needs help
    alarm_prompt = "Does the user need help? Chat history: " + \
                   json.dumps(conversation_history, ensure_ascii=False)
    alarm_messages = [
        {"role": "system", "content": "You should answer yes or no depending on whether the user needs help, current prompt: {alarm_prompt}"}
    ]
    
    alarm_response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=alarm_messages,
        max_tokens=10,
        n=1,
        stop=None,
        temperature=0.2
    )
    alarm_response_text = alarm_response.choices[0].message.content.strip().lower()
    return assistant_response, alarm_response_text

English-to-English AI translation:

There is no “timestamp” field in the message objects the API accepts. I would avoid passing it along, in case the library or your own methods change.

Also, you add to the conversation history before the exchange has actually happened (that is, before the API call has returned successfully).
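
Roughly like this, as a minimal sketch that reuses the names from your snippet (the <general instructions> system prompt is just a placeholder):

import openai
from datetime import datetime

client = openai.Client()
conversation_history = []

def api_messages(history):
    # The API only understands "role" and "content"; strip local-only
    # fields such as "timestamp" before sending.
    return [{"role": m["role"], "content": m["content"]} for m in history]

def chat_turn(prompt):
    messages = (
        [{"role": "system", "content": "<general instructions>"}]
        + api_messages(conversation_history)
        + [{"role": "user", "content": prompt}]
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=messages,
        max_tokens=100,
        temperature=0.5,
    )
    assistant_response = response.choices[0].message.content.strip()
    # Record the exchange only after the call has actually succeeded,
    # keeping the timestamp in your own records, not in the API payload.
    timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    conversation_history.append({"timestamp": timestamp, "role": "user", "content": prompt})
    conversation_history.append({"timestamp": timestamp, "role": "assistant", "content": assistant_response})
    return assistant_response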

Maybe what you’re looking for is a prompting technique.

The idea of “someone in need of help” is elusive, because we use chatbots to get help! Let’s raise the trigger to someone who is in real danger. Here is a linear script with the behavior and the alert “combined” in the AI output, followed by parsing.

import openai
from datetime import datetime

client = openai.Client()
conversation = []
system_msg = [
    {
        "role": "system",
        "content": (
            "You are a helpful AI assistant.\n\n"
            "Special instructions: only if the user is in high distress or a"
            " self-harm danger to themselves or others, add an extra text of"
            " four (4) percent symbol characters after your response:\n"
            "'''\n%%%%\n'''\n"
            "which will be silently parsed and will alert mental help or authorities."
        ),
    }
]
prompt = "I'm going to harm myself today"

try:
    timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    response = client.chat.completions.create(
        messages=system_msg + conversation + [{"role": "user", "content": prompt}],
        model="gpt-4-turbo",
        max_tokens=100,
        top_p=0.5,
    )
    assistant_response = response.choices[0].message.content
    safety_help = False
    
    # Check if the response ends with '%%%%' and update the safety_help variable
    if assistant_response.endswith("%%%%"):
        safety_help = True
        assistant_response = assistant_response[:-4].rstrip()  # Remove '%%%%' and any trailing whitespace
    
    conversation.append({"timestamp": timestamp, "role": "user", "content": prompt})
    conversation.append(
        {"timestamp": timestamp, "role": "assistant", "content": assistant_response}
    )
    
    print(assistant_response)
    if safety_help:
        print("\n***Safety help needed: Alert authorities or mental health services.")
    
except Exception as e:
    print(e)
    raise
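
If you want the same shape as your original function, the same logic can be wrapped so it returns both the reply and a flag, reusing client, system_msg and conversation from the script above. A sketch (trigger_alert() is a hypothetical placeholder for your separate system call):

def generate_response(prompt):
    # One request: the reply carries both the answer and the hidden "%%%%"
    # marker, which is stripped and turned into a boolean flag.
    response = client.chat.completions.create(
        messages=system_msg + conversation + [{"role": "user", "content": prompt}],
        model="gpt-4-turbo",
        max_tokens=100,
        top_p=0.5,
    )
    assistant_response = response.choices[0].message.content
    safety_help = assistant_response.endswith("%%%%")
    if safety_help:
        assistant_response = assistant_response[:-4].rstrip()
        # trigger_alert()  # hypothetical: your separate system call goes here
    conversation.append({"role": "user", "content": prompt})
    conversation.append({"role": "assistant", "content": assistant_response})
    return assistant_response, safety_help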

The system prompt can be dumped by a user, and someone might not like the judgement being made about them.

gpt-4-turbo can perform the judgement and produce the combined output much better than the now-degraded gpt-3.5-turbo.
