Chatbot gives wrong information!

So, I wrote some Python code for a chatbot on my website, as you can see below:

import openai

openai.api_key = "MY-API-CODE"

def chat_with_gpt(prompt):
    response = openai.completions.create(
        model="gpt-3.5-turbo-instruct",
        prompt=prompt,      # the text the model will complete
        max_tokens=500,
    )

    # Extracting the response
    answer = response.choices[0].text.strip()
    print("Chatbot:", answer)

if __name__ == "__main__":
    while True:
        user_input = input("You: ")
        if user_input.lower() in ["quit", "exit", "bye"]:
            break
        chat_with_gpt(user_input)


I tested it, and it works.

*** Remote Interpreter Reinitialized ***
You: what do you think about this website?
Chatbot: I think your blog is a quality website that provides useful and inspirational information to help you become a better person and reach your full potential. Your articles are well-documented, well-written, and offer interesting insights into various topics in personal, professional, and spiritual life. What impresses me the most is how the author manages to bring to the fore both wise
You: thank you

Then I stopped the Python prompt. But after more conversation, the chatbot gave a totally wrong answer, about credits and taxes and loans, completely unrelated to the context.

Chatbot: ),951-VAU, insufficient credits. Expired cards or cards that are not eligible for transactions, to extend the neighborhood to connect IaEi, Оn formatel.

699 lei (. Fast non-bank loans on the principal, yes, the contract, the microcredit of the loan consists in the concession of the amount of 100 lei interface bene fintech industry it in the case that it is not returned to us until now, the loans are granted only in lieu of the promotional materials determined by the Card Superior is a careful means of

Traceback (most recent call last):
  File "", line 18, in
  File "", line 658, in Win32RawInput
KeyboardInterrupt: Operation cancelled

The model you are using on the completions endpoint, gpt-3.5-turbo-instruct, is a completions model.

A completions model just continues whatever text you give it. If you do not give it a prompt that frames a conversation, a place for it to start writing on its own, it will simply "complete": it often keeps writing from wherever your input left off. Your API request also has no stop sequence, so it can produce endless text.

Also, there is no chat history: the API does not remember anything you previously sent. Each prompt is a standalone job to be processed.
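Since the endpoint is stateless, any conversation "memory" has to be folded into the prompt you send on every request. A minimal sketch of the idea (the function name and the "You:"/"Chatbot:" turn labels are just illustrative choices):

```python
def build_prompt(history, user_input):
    """Fold prior (user, bot) turns into a single completions prompt."""
    lines = ["The following is a conversation with a helpful website assistant."]
    for user_turn, bot_turn in history:
        lines.append(f"You: {user_turn}")
        lines.append(f"Chatbot: {bot_turn}")
    lines.append(f"You: {user_input}")
    # End on the bot's label so the model writes the assistant's reply next
    lines.append("Chatbot:")
    return "\n".join(lines)
```

You would pass the returned string as the `prompt` argument, and append each completed exchange to `history` so the next request carries the conversation so far.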

It takes about 350 lines of chatbot code to make the instruct model "chat" properly and stream the words across the screen while they are being generated.
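I won't paste all of that here, but the core of the streaming part is small. A hedged sketch, assuming the openai Python library v1+ (the `stream_chat` name is mine; the `stop` value is what keeps the model from writing the user's next line itself):

```python
def stream_chat(client, prompt):
    """Stream a completion piece by piece; returns the full answer text."""
    stream = client.completions.create(
        model="gpt-3.5-turbo-instruct",
        prompt=prompt,
        max_tokens=256,
        stop=["You:"],      # don't let the model invent the next user turn
        stream=True,
    )
    pieces = []
    for chunk in stream:
        text = chunk.choices[0].text
        print(text, end="", flush=True)   # words appear as they are generated
        pieces.append(text)
    print()
    return "".join(pieces).strip()
```

Called as `stream_chat(openai.OpenAI(api_key=...), build_prompt(...))`, it prints the reply as it arrives instead of waiting for the whole completion.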

Here's a basic idea of a conversation format you can experiment with in the API playground.

Use "get code" there to see stop parameter values that match what the AI might try to write next - otherwise it will write the next user input itself!

Just to add: if you have no specific reason for choosing the instruct model and the completions endpoint, you'll probably want to start with the chat models, which are trained to conduct conversations the way ChatGPT does.

Here is an example after updating the openai python library:
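A minimal sketch of the chat-endpoint approach, assuming the openai Python library v1+ (the placeholder key, system message, and `chat` helper name are mine, not from the original post). The `messages` list is the chat history, so the model sees the whole conversation on every call:

```python
def chat(client, messages, user_input):
    """Send one user turn on the chat endpoint, keeping history in `messages`."""
    messages.append({"role": "user", "content": user_input})
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",   # a chat model, not an instruct model
        messages=messages,
    )
    answer = response.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    return answer

# Usage (requires a real key):
#   client = openai.OpenAI(api_key="MY-API-CODE")
#   messages = [{"role": "system", "content": "You are a helpful website assistant."}]
#   print("Chatbot:", chat(client, messages, "what do you think about this website?"))
```

Because the endpoint takes structured role-tagged messages, you no longer need hand-rolled "You:"/"Chatbot:" labels or stop sequences; the model knows where its turn ends.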