Best prompt engineering to simulate the remembrance of the conversation

Hello, I’m new to OpenAI and this is my first post here :slight_smile:
I’m trying to develop a bot similar to ChatGPT, but the responses are not promising.
I’m storing the user’s and bot’s messages in MongoDB as an array of objects, then feeding the last 10 messages to the OpenAI API.
The prompt is like below:

    "Human: Hi, my name is John.\n\n",
    "AI: Hello John! What can I do for you?.\n\n",
    "Human: ELI5 Software in one line.\n\n",
    "AI: Software is....\n\n",
    "Human: What's my name?.\n\n", ...

And when the user asks about his name, even though he provided it in the first message and the prompt hasn’t reached the max token limit (using text-davinci-003, max_tokens=2048), the bot can’t figure out his name.
Another problem with this approach is that when you send dialogue-based prompts (Human: ... AI: ...), the model invents a name for itself and includes it in the response (e.g. Robot: I don't know your name or Bot: Nice to meet you.). I’ve partially handled this with a regex that strips the first occurrence of : and everything before it, which is not ideal.
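One common way to avoid the invented speaker labels, rather than stripping them after the fact, is to end the prompt with the AI: label yourself and use newline-anchored stop sequences. A minimal sketch of that idea (build_prompt is a hypothetical helper name, not part of the OpenAI API):

```python
def build_prompt(messages):
    """messages: list of (speaker, text) pairs loaded from the DB."""
    lines = [f"{speaker}: {text}" for speaker, text in messages]
    # Ending the prompt with "AI:" tells the model to continue as the
    # assistant, so it has no reason to invent its own speaker label.
    return "\n".join(lines) + "\nAI:"

prompt = build_prompt([("Human", "Hi, my name is John."),
                       ("AI", "Hello John! What can I do for you?"),
                       ("Human", "What's my name?")])
# Pass stop=["\nHuman:", "\nAI:"] in the completion call so generation
# halts as soon as the model starts writing the next turn.
```

Anchoring the stop sequences on the newline also avoids cutting the completion short if the model legitimately writes "AI" mid-sentence.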

What’s the best approach to building a chatbot through the OpenAI API so that it can chat continuously and remember key details from earlier in the conversation? Is this possible without fine-tuning or embeddings?

My configuration:

import openai
# Reuse one aiohttp session for async requests to the OpenAI API
from aiohttp import ClientSession

openai.api_key = "####"

async def text_generator(prompt_list):

    # Prompt format: f"{HumanOrAI}: {TextBody}.\n\n"
    # ["Human: Hi, my name is John.\n\n", "AI: Hello John! What can I do for you?.\n\n", ...]
    openai.aiosession.set(ClientSession())

    response = await openai.Completion.acreate(
        model="text-davinci-003",
        prompt="".join(prompt_list) + "AI: ",
        max_tokens=2048,
        stop=["Human: ", "AI: "],
    )

    await openai.aiosession.get().close()

    return response.choices

The OpenAI API models do not have session management. The standard way to approximate it is to condense prior information and feed it back with each new prompt: you keep the information you think is relevant and include it in the next API call.
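A crude version of that condense-and-feed-back loop is a rolling window over the stored messages. This sketch assumes a simple character budget as a stand-in for real token counting (a tokenizer such as tiktoken would be more accurate); build_context is my own naming:

```python
def build_context(history, max_chars=4000):
    """Keep the most recent messages that fit in a rough character budget."""
    kept, total = [], 0
    for msg in reversed(history):        # walk from newest to oldest
        if total + len(msg) > max_chars:
            break
        kept.append(msg)
        total += len(msg)
    return "".join(reversed(kept))       # restore chronological order

history = [f"Human: message {i}.\n\n" for i in range(100)]
context = build_context(history, max_chars=200)
```

The result is what you prepend to each new prompt; anything older than the budget simply falls off, which is why a separate summarization step is often layered on top.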

The API completion does not “remember” anything because there is no session management.

The other “main” option, which is what I’m doing, is to focus on other projects and tasks not related to session management and wait for the “soon to be released” ChatGPT API, which we are all “fingers-crossed” confident will have session management capability. :crossed_fingers: