API gpt saying bogus stuff while using free $5 tokens

Is it true that the API GPT, while on a free plan, says bogus stuff and forgets the conversation context after one reply?

Hallucinations are common when using the API; try modifying your prompt to get more consistent and stable replies from the model.


The API’s chat completions method does not have a memory. You must provide the previous turns of conversation yourself if you want the model to follow the topic of a chat rather than just answer standalone input.
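For instance, each request must carry the earlier turns explicitly. A minimal illustration (the message contents here are made up for demonstration):

```python
# Every chat completions call is stateless: for the model to "remember",
# you must resend the earlier turns yourself in the messages list.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "My name is Ada."},              # earlier turn
    {"role": "assistant", "content": "Nice to meet you, Ada!"},  # earlier reply
    {"role": "user", "content": "What is my name?"},             # new question
]
# Sent with all four messages, the model can answer "Ada";
# sent with only the last message, it has no way to know.
```
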

You have access to the same gpt-3.5-turbo-xxxx series of models as everyone else. The model alias gpt-3.5-turbo now points to gpt-3.5-turbo-0125, which is cheaper but noticeably weaker; still, it isn’t designed to simply “make stuff up” just because you are using free trial credits.

Here, for example, is a “chatbot” that maintains some memory, written in Python with the openai library (pip install openai):

from openai import OpenAI

client = OpenAI()

system = [{"role": "system", "content":
           "You are a helpful expert AI assistant."}]
user = [{"role": "user", "content":
         "introduce yourself"}]
chat = []  # running conversation history

while not user[0]['content'] == "exit":
    try:
        response = client.chat.completions.create(
            model="gpt-3.5-turbo-0613",  # pin a snapshot; the bare alias now points to -0125
            messages=system + chat[-20:] + user,
            max_tokens=1000, top_p=0.8,
            stream=True,
        )
    except Exception as err:
        print(f"Unexpected {err=}, {type(err)=}")
        break
    reply = ""
    for part in response:  # assemble the streamed reply chunk by chunk
        word = part.choices[0].delta.content or ""
        reply += word
        print(word, end="")
    chat += user + [{"role": "assistant", "content": reply}]
    user = [{"role": "user", "content": input("\nPrompt: ")}]

It runs in a loop, making an initial call to the AI with a predefined user message to get a “welcome” response. Then you can type in your input.

You will note that it has a list called “chat”, where the previous conversation between the user and the AI assistant is appended after every API call.

You will also see that upon every call messages are sent:
messages=system + chat[-20:] + user
with a limit of the 20 most recent chat messages, so the AI understands what’s been talked about but doesn’t accumulate infinite memory and infinitely growing expense. In practice, you’d use more intelligent chat management.
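One sketch of such management is trimming the history by a size budget instead of a fixed message count. This example uses a rough character budget for simplicity; a real implementation would count tokens (e.g. with a tokenizer library), and the function name and budget are my own for illustration:

```python
def trim_history(chat, budget_chars=4000):
    """Keep the newest messages whose combined content fits the budget.
    A rough sketch: a production version would count tokens, not characters."""
    kept = []
    total = 0
    for message in reversed(chat):       # walk from newest to oldest
        total += len(message["content"])
        if total > budget_chars:
            break                        # budget exceeded: drop older messages
        kept.append(message)
    return list(reversed(kept))          # restore chronological order

history = [{"role": "user", "content": "x" * 3000},
           {"role": "assistant", "content": "y" * 3000}]
trimmed = trim_history(history)  # only the newest message fits the 4000-char budget
```

You would then send `system + trim_history(chat) + user` instead of the fixed `chat[-20:]` slice.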

Additionally, setting the top_p parameter below 1.0 restricts sampling to the most likely tokens, which steers the AI toward producing highly probable words.

So: the system message gives it a purpose that is not “saying bogus stuff”. The chat history makes it not forget. The model choice shown is smarter than some others.


I’m building the chatbot in Kodular using blocks. Can the same code you showed me work there too?