OpenAI API ChatCompletion format returns irregularly

Hello, I am currently trying to make a chatbot using the ChatGPT API.

The response comes back in a different format each time I ask OpenAI for an answer.

For example:

  1. A plain one-line answer (this is ideal).
  2. {'prompt': content, 'question': content, 'context': content}
  3. {'completion': {'chunks': [{'text': 'Hello?', 'index': 0}], …, 'id': …

At first, the answers came out cleanly, as in 1). Then, suddenly, strange answers like 2) and 3) started appearing. If the format were constant I could tune my parsing accordingly, but the irregularity makes it impossible to predict which form the answer will take.
Why does this happen? The code is nothing special.

import json
import openai

def request_stream(model_name, msg):
    # Illustrative wrapper; the original post omits the enclosing function.
    result = openai.ChatCompletion.create(
        model=model_name,
        stream=True,
        messages=[
            {"role": "user", "content": str(msg)}
        ],
    )
    return result

def stream_events(response, json_docs):
    # Illustrative wrapper; yields the event strings streamed back to the client.
    for event in response:
        if event["choices"][0]["delta"].get("role") == "assistant":
            yield '{"event":"start",\n"data":"stream"}\n\n'
        if event["choices"][0]["delta"].get("content") is not None:
            response_message = event["choices"][0]["delta"]["content"]
            json_data = json.dumps({"text": response_message, "sender": "assistant"}, ensure_ascii=False)
            yield '{"event":"message",\n"data":' + json_data + "\n}\n\n"
    yield '{"event":"end",\n"data":"stream",\n"docs":' + str(json_docs) + "\n}\n"

The code you have there only sends the current user message with each request, so the model has no way to remember context unless you provide the conversation history yourself. Here is a bit of Python Flask code that will create a super simple chatbot with context memory:

import os

import openai
from dotenv import load_dotenv
from flask import Flask, render_template, request

load_dotenv()  # load env vars from .env file
openai.api_key = os.getenv("OPENAI_API_KEY")

app = Flask(__name__)

# Global variable to hold the conversation
conversation = []


@app.route("/")
def index():
    global conversation
    conversation = []  # Clear the conversation when user starts a new conversation
    return render_template("index.html")


@app.route("/get_response", methods=["GET", "POST"])
def get_response():
    global conversation
    message = request.args.get("message")
    conversation.append({"role": "user", "content": message})
    completion = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=conversation
    )
    response = completion["choices"][0]["message"]["content"]
    conversation.append({"role": "assistant", "content": response})
    return response


if __name__ == "__main__":
    app.run(debug=True)
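
To try it out, run the script and exercise the endpoint against Flask's default local address (http://127.0.0.1:5000). A rough usage sketch with requests, assuming the server is running locally; note that the / route also expects a templates/index.html file, which is not shown here:

import requests

# Illustration only: call the /get_response endpoint while the app above is running.
r = requests.get(
    "http://127.0.0.1:5000/get_response",
    params={"message": "Hello, who are you?"},
)
print(r.text)  # the assistant's reply as plain text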