Chatbot on Django not understanding messages context

I have been trying to provide context to my chatbot using the ‘messages’ parameter, but since introducing it I get no response from the chatbot. Instead, the following error comes up:

openai.error.InvalidRequestError: Unrecognized request argument supplied: messages

I made sure that my OpenAI key is valid and my API version is up to date, so I feel this has to be an issue with my code, but I cannot figure out how to get the bot to understand and respond using the context provided through the messages parameter. I’m afraid it may be an implementation issue in Django, because I can provide context easily in Jupyter, where I don’t need a separate file for the chatbot (see code below), a separate views.py calling the chatbot function, and a separate index.html implementing the chatbot in JavaScript. There must be a communication issue somewhere, but I have been losing my mind trying to find it for the past few days and would greatly appreciate help!

chat_bot.py

import openai
from django.conf import settings

def get_chat_response(prompt):
    openai.api_key = settings.CHATGPT_KEY
    messages=[
        {"role": "system", 
         "content": "You are an STI health assistant, an automated personal sexual health counselor. "
        },
        { "role": "user",
         "content": prompt  # User's message
        }
    ]

    response = openai.Completion.create(
        engine='text-davinci-003',
        messages=messages,
        max_tokens=50,  
        temperature=0.5,
        n = 1 
    )
    print(response.choices)
    return response.choices[0].text.strip()

views.py (just the part with chat function)


def chat_gpt(request):
    response = get_chat_response(request.GET.get('prompt'))
    # print(response)
    return HttpResponse(json.dumps(response), content_type='application/json')

index.html (JS where chatbot is implemented)

<script src="https://code.jquery.com/jquery-3.7.0.min.js" integrity="sha256-2Pmvv0kuTBOenSvLm6bvfBSSHrUJ+3A7x6P5Ebd07/g=" crossorigin="anonymous"></script>
    <script>
        document.addEventListener("DOMContentLoaded", function() {
        var sendBtn = document.getElementById("send-btn");
        var userInput = document.getElementById("user-input");
        var chatBox = document.getElementById("chat-box");

        sendBtn.addEventListener("click", function() {
            var message = userInput.value.trim();
            if (message !== "") {
            appendMessage(message, "sent");
            userInput.value = "";
            scrollToBottom();
            // TODO: Handle user message and generate bot response
            $.ajax({
                url: "http://127.0.0.1:8000/chat_gpt/?prompt=" + message,
                type: 'GET',
                dataType: 'json', // added data type
                success: function(res) {
                    console.log(res);
                    // alert(res);
                    appendMessage(res, "received");
                }
            });
            }
        });

        function appendMessage(message, type) {
            var messageElement = document.createElement("div");
            messageElement.classList.add("message", type);
            messageElement.innerText = message;
            chatBox.appendChild(messageElement);
        }

        function scrollToBottom() {
            chatBox.scrollTop = chatBox.scrollHeight;
        }
        });

    </script>

The payload is incorrect.
Here is an example of the corrected payload for those legacy models.

import openai
from django.conf import settings

def get_chat_response(prompt):
    openai.api_key = settings.CHATGPT_KEY

    response = openai.Completion.create(
        model='text-davinci-003',
        prompt=prompt,
        max_tokens=50,
        temperature=0.5,
        n=1
    )

    print(response.choices)
    return response.choices[0].text.strip()
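Since the completions endpoint has no messages parameter, one way to keep system-style context with these legacy models is to fold it into the prompt string itself. A minimal sketch (the helper name and the User:/Assistant: framing are just one illustrative convention, not part of the API):

```python
# Legacy completion models take a single prompt string, so any "system"
# instructions have to be prepended to the user's message by hand.
SYSTEM_CONTEXT = (
    "You are an STI health assistant, an automated personal "
    "sexual health counselor."
)

def build_completion_prompt(user_message):
    # One prompt string carrying both the context and the user's turn.
    return f"{SYSTEM_CONTEXT}\n\nUser: {user_message}\nAssistant:"
```

The result of `build_completion_prompt(prompt)` would then be passed as the `prompt` argument above.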

docs: OpenAI Platform


Welcome to OpenAI community @palakshah2024.1

If you want to use the chat completions endpoint, use:

openai.ChatCompletion.create()

with a compatible chat model like gpt-3.5-turbo or gpt-4.
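For reference, the two endpoints expect differently shaped payloads, which is why the completions endpoint rejected the messages argument. A rough comparison (values are illustrative):

```python
# POST /v1/completions — legacy models take a single prompt string.
completions_payload = {
    "model": "text-davinci-003",
    "prompt": "You are an STI health assistant. User: hi",
    "max_tokens": 50,
}

# POST /v1/chat/completions — chat models take a list of role-tagged messages.
chat_payload = {
    "model": "gpt-3.5-turbo",
    "messages": [
        {"role": "system", "content": "You are an STI health assistant."},
        {"role": "user", "content": "hi"},
    ],
    "max_tokens": 50,
}
```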


Hi, thank you, this is correct. It is how I had the code written when the chatbot was working just like ChatGPT. But how can I provide context to the bot? I.e., I want to be able to tell it how it should answer questions, and that is the functionality I have been having issues implementing.

@palakshah2024.1 use the chat completions endpoint.

Context generally refers to everything you’re pushing to the LLM.

I think specifically you mean the system message, which I think you’re using correctly. If you just want to test behavior, consider trying it on the playground first: https://platform.openai.com/playground

You can then click “view code” at the top right and compare it to what you had.
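Adapted to the thread’s chat_bot.py, a sketch of the chat completions call with the system message, using the pre-1.0 openai SDK that the original code uses (the model choice here is an assumption):

```python
def build_chat_messages(prompt):
    # The system message carries the context; the user message carries the prompt.
    return [
        {"role": "system",
         "content": "You are an STI health assistant, an automated "
                    "personal sexual health counselor."},
        {"role": "user", "content": prompt},
    ]

def get_chat_response(prompt):
    import openai  # pre-1.0 SDK, matching the thread's code
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=build_chat_messages(prompt),
        max_tokens=50,
        temperature=0.5,
    )
    # Chat responses expose .message.content rather than .text
    return response.choices[0].message.content.strip()
```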

One thing to note is that gpt-3.5 used to sort of ignore system messages, so it’s probably best to skip gpt-3.5-turbo-0301.