AI in chat mode in the Playground seems smarter than via the API

Chatting with the AI using the ‘text-davinci-003’ model in the Playground works great. The AI remembers what I have said, and we can keep talking and stay focused on a topic.

But when I chat with the AI through the OpenAI API, it seems to lose the context of the conversation. The AI has no idea what I have said before.

Have you sent any previous messages or context to it via the API?

I have tried the method used in the Chat Bot example by adding a context tag, but it doesn’t work. I have no idea how ChatGPT and the Playground let the AI use the context when answering a new question.


Just send the last three or four exchanges in the prompt as context to get more relevant answers.
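For a plain completions model like ‘text-davinci-003’ (the one you mentioned), the same idea works by prepending the recent transcript to each new prompt. A minimal sketch of the pattern; the in-memory history list is just for illustration:

import openai  # assumes openai.api_key is already configured

history = []  # recent (user_message, bot_reply) pairs, newest last

def ask(user_message, max_context=4):
    # Rebuild a transcript from the last few exchanges
    transcript = ""
    for user_turn, bot_turn in history[-max_context:]:
        transcript += f"User: {user_turn}\nBot: {bot_turn}\n"

    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=transcript + f"User: {user_message}\nBot:",
        max_tokens=256,
        temperature=0.7,
        stop=["User:"],  # keep the model from writing the next user turn
    )
    reply = response.choices[0].text.strip()
    history.append((user_message, reply))
    return reply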

Here is a small snippet from my current project (Python):

import openai
from django.http import JsonResponse

from .models import BotResponse, UserInput


def chatbot(request):
    if request.method == 'POST':
        # Retrieve the user input
        user_id = request.POST['user_id']
        message = request.POST['message']

        # Store the user input
        user_input = UserInput.objects.create(user_id=user_id, message=message)

        prompt = [{"role": "system", "content": "I want you to play the role of Bill, an expert in motorcycles. I "
                                                "will ask questions about my motorcycle and you will answer only as "
                                                "Bill, not as an AI language model."}]

        # Replay the three most recent exchanges, oldest first, so the model
        # sees the conversation in chronological order
        recent = BotResponse.objects.filter(user_input__user_id=user_id).order_by('-timestamp')[:3]
        for bot_response in reversed(list(recent)):
            prompt.append({"role": "user", "content": bot_response.user_input.message})
            prompt.append({"role": "assistant", "content": bot_response.message})

        prompt.append({"role": "user", "content": message})

        # Send the request to the OpenAI API
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=prompt,
            max_tokens=256,
            temperature=0.7,
            top_p=1,
            frequency_penalty=0.0,
            presence_penalty=0.0,
        )

        # Store the reply so it becomes context for the next request,
        # then return it to the caller
        reply = response.choices[0].message.content
        BotResponse.objects.create(user_input=user_input, message=reply)

        return JsonResponse({'response': reply})

@francoisnoel62 Doesn’t appending to the prompt risk exceeding the maximum token count for a single request?
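It can, yes. A common guard is to count tokens before sending and drop the oldest exchanges until the prompt fits. A rough sketch assuming the tiktoken library and an arbitrary 3,000-token budget (gpt-3.5-turbo allows 4,096 tokens shared between prompt and completion):

import tiktoken

def trim_messages(messages, model="gpt-3.5-turbo", budget=3000):
    # Rough count: content tokens only, ignoring the small fixed
    # overhead the chat format adds per message
    enc = tiktoken.encoding_for_model(model)

    def total_tokens(msgs):
        return sum(len(enc.encode(m["content"])) for m in msgs)

    # Keep the system message at index 0 and drop the oldest
    # user/assistant pair until the total fits the budget
    while total_tokens(messages) > budget and len(messages) > 2:
        del messages[1:3]
    return messages

Calling trim_messages(prompt) just before openai.ChatCompletion.create(...) keeps the request under the limit while preserving the newest context.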