Like ChatGPT, the fine-tuned model remembers the name if you input it in the playground, but it does not remember it in the chatbot. How do I resolve this?
What kind of chatbot do you mean? Is it your own bot? You need to explicitly preserve the history yourself, sending the whole conversation with every subsequent call.
As @EugenS said, if you’re using the API you need to feed it the conversation history for it to remember what you were talking about. Here’s a script that demonstrates how you might do this.
from openai import OpenAI

# The client reads the API key from the OPENAI_API_KEY environment
# variable; alternatively, pass it explicitly: OpenAI(api_key="...")
client = OpenAI()

# Conversation history, starting with the system prompt
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
]

# Add a message to the conversation history
def add_message(role, content):
    messages.append({"role": role, "content": content})

# Read user input in a loop and query the API each turn
def handle_conversation():
    while True:
        prompt = input("You: ")
        add_message("user", prompt)

        # Send the entire history so the model "remembers" earlier turns
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=messages,
            max_tokens=150,
        )

        # Extract and process the response
        if response.choices:
            full_response = response.choices[0].message.content
            print("Assistant:", full_response)
            add_message("assistant", full_response)
        else:
            print("No response received from the API.")

# Run the conversation handler
handle_conversation()
Thank you for replying. The thing is that the model is answering on behalf of a bot that users send questions to, so how do I provide the previous responses? Like this -
response = client.chat.completions.create(
    model="ft:gpt-3.5-turbo-1106:personal::8fnxqkawLD",
    messages=[
        {"role": "system",
         "content": "Your interactions should strictly adhere to offering advice, guidance, and support for mental and emotional health matters. Also you are not a study buddy, but you can help build confidence. You were created by my team of mental health coaches with good expertise."},
        {"role": "user",
         "content": Body}  # The role can be 'system', 'user', or 'assistant'
    ],
    temperature=1.2,
    max_tokens=256,
    top_p=1,
    frequency_penalty=0,
    presence_penalty=0
)
chat_response = response.choices[0].message.content.strip()
Are you suggesting something like this? It still doesn't work well.
You need to keep (store) each user's conversation, and with every subsequent message from the user you also need to send the endpoint the previous messages from both the user and the assistant.
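Here is a minimal sketch of what that storage could look like. The store is in-memory (use a database in production), and the function and variable names (`histories`, `build_messages`, etc.) are illustrative, not from any library:

```python
# In-memory conversation store: user_id -> list of {"role", "content"} dicts.
histories = {}

SYSTEM_PROMPT = {"role": "system", "content": "You are a helpful assistant."}

def get_history(user_id):
    # Every user starts a fresh conversation with the same system prompt.
    return histories.setdefault(user_id, [dict(SYSTEM_PROMPT)])

def build_messages(user_id, new_user_message):
    # Append the new user turn and return the FULL list to send to the API.
    history = get_history(user_id)
    history.append({"role": "user", "content": new_user_message})
    return history

def record_reply(user_id, assistant_reply):
    # Store the assistant's answer so the next call includes it.
    histories[user_id].append({"role": "assistant", "content": assistant_reply})
```

In your handler you would pass `build_messages(user_id, Body)` as the `messages` argument, then call `record_reply(user_id, chat_response)` after each completion.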
On the playground, when I use the fine-tuned model and say "my name is Mellisa", it says "hi Mellisa", and after one or two exchanges, when I ask "what is my name?", it still remembers it's Mellisa. But when I use the same model in code, it doesn't remember anything that was asked before. How do I resolve that?
Because on the playground the application preserves your whole conversation and sends the model every message that has occurred during the current conversation:
First call:
user: My name is melissa
Reply:
bot: hi melissa
Second call:
user: My name is melissa
bot: hi melissa
user: Do you remember my name?
Reply:
bot: Yes, Melissa, I do remember your name
Third call:
user: My name is melissa
bot: hi melissa
user: Do you remember my name?
bot: Yes, Melissa, I do remember your name
user: … here a third call message …
and so on so on
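The transcript above can be expressed directly as the `messages` payload each call sends; notice that every call contains all previous turns plus the new one:

```python
# First call: just the new user turn.
first_call = [
    {"role": "user", "content": "My name is melissa"},
]

# Second call: the first call, the assistant's reply, and the new turn.
second_call = first_call + [
    {"role": "assistant", "content": "hi melissa"},
    {"role": "user", "content": "Do you remember my name?"},
]

# Third call: everything so far, plus the new turn.
third_call = second_call + [
    {"role": "assistant", "content": "Yes, Melissa, I do remember your name"},
    {"role": "user", "content": "... here a third call message ..."},
]
```

Each list is what you would pass as `messages=` for that call; the model itself is stateless between calls.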
Thank you so much for replying, it's really helpful. But the main issue is: why is the model unable to respond the same way it does in the playground? For the same prompt it does not reply the same way, even though I have kept the temperature, penalties, top_p, and the other settings the same as in the playground, as well as the model name. How do I resolve that?
response = client.chat.completions.create(
    model="ft:gpt-3.5-turbo-1106:personal::8fnxqkawLD",
    messages=[
        {"role": "user",
         "content": Body}  # The role can be 'system', 'user', or 'assistant'
    ],
    temperature=0,
    max_tokens=256,
    top_p=1,
    frequency_penalty=0,
    presence_penalty=0
)
chat_response = response.choices[0].message.content.strip()
How can I get the same response as in the playground? I set temperature=0, although it is 1 in the playground, but even keeping it at 1 does not work. Please help.
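One likely difference: the call above sends only the single new user message, while the playground also sends the system prompt and all prior turns. A sketch of a request builder that restores both, assuming `history` comes from whatever store the app uses (the model name is the fine-tune from earlier in the thread; note that with temperature > 0 some run-to-run variation is expected even with identical settings):

```python
def build_request(history, new_message, system_prompt):
    # Reconstruct the full playground-style payload: system prompt first,
    # then every stored turn, then the new user message.
    messages = [{"role": "system", "content": system_prompt}]
    messages += history
    messages.append({"role": "user", "content": new_message})
    return {
        "model": "ft:gpt-3.5-turbo-1106:personal::8fnxqkawLD",
        "messages": messages,
        "temperature": 1,  # match the playground setting
        "max_tokens": 256,
        "top_p": 1,
        "frequency_penalty": 0,
        "presence_penalty": 0,
    }

# Usage (hedged; `client` is an OpenAI() instance as in the earlier script):
# response = client.chat.completions.create(**build_request(history, Body, system_prompt))
```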