Hello, I’m new to OpenAI and this is my first post here.
I’m trying to develop a bot similar to ChatGPT, but the responses are not promising.
I’m storing the user’s and the bot’s messages in MongoDB as an array of objects, then feeding the last 10 messages to the OpenAI API.
The prompt looks like this:
[
"Human: Hi, my name is John.\n\n",
"AI: Hello John! What can I do for you?.\n\n",
"Human: ELI5 Software in one line.\n\n",
"AI: Software is....\n\n",
"Human: What's my name?.\n\n", ...
]
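For context, the list above is built from MongoDB roughly like this (a simplified sketch; the collection and field names, i.e. conversations, user_id, messages, sender, and text, are placeholders rather than my exact schema):

from motor.motor_asyncio import AsyncIOMotorClient

client = AsyncIOMotorClient("mongodb://localhost:27017")
db = client["chatbot"]

async def build_prompt_list(user_id):
    # Placeholder schema: one document per conversation, holding a "messages"
    # array of {"sender": "Human" | "AI", "text": "..."} objects.
    doc = await db["conversations"].find_one({"user_id": user_id})
    messages = doc["messages"][-10:] if doc else []
    # Format each message as "{HumanOrAI}: {TextBody}.\n\n"
    return [f"{m['sender']}: {m['text']}.\n\n" for m in messages]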
And when the user asks for his name, the bot can’t figure it out, even though the name was provided in the first message and the prompt is well under the token limit (I’m using text-davinci-003 with max_tokens=2048).
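To double-check the budget: text-davinci-003 has roughly a 4,097-token context window shared between prompt and completion, so with max_tokens=2048 the prompt itself has about 2,049 tokens to work with. Here is a quick sketch for counting the prompt’s tokens with tiktoken (assuming the same message format as above):

import tiktoken

# Encoding used by text-davinci-003
enc = tiktoken.encoding_for_model("text-davinci-003")

def prompt_tokens(prompt_list):
    # Tokens the joined prompt will consume; prompt tokens + max_tokens
    # must stay within the model's context window.
    return len(enc.encode("".join(prompt_list[-10:])))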
Another problem with this approach is that with dialog-style prompts (Human: ... / AI: ...), the model sometimes invents a name for itself and includes it in the response, e.g. Robot: I don't know your name or Bot: Nice to meet you. I somewhat handle this with a regex that strips the first occurrence of : and everything before it, which is not ideal.
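The cleanup currently looks roughly like this (a sketch of that regex workaround, which I’d like to get rid of):

import re

def strip_speaker_label(text):
    # Drop the first ":" and everything before it (e.g. an invented
    # "Robot:" or "Bot:" label), which is exactly the not-ideal part.
    return re.sub(r"^[^:]*:\s*", "", text.strip(), count=1)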
What’s the best approach to building a chatbot with the OpenAI API, so that it can chat continuously and remember key details from earlier in the conversation? Is this possible without fine-tuning or embeddings?
My configuration:
import openai
# To optimize async requests to the OpenAI API
from aiohttp import ClientSession

openai.api_key = "####"

async def text_generator(prompt_list):
    # Each entry has the format f"{HumanOrAI}: {TextBody}.\n\n", e.g.
    # ["Human: Hi, my name is John.\n\n", "AI: Hello John! What can I do for you?.\n\n", ...]
    # Join the last 10 messages into ONE prompt string (passing a list would
    # create a separate completion per entry) and end with "AI:" so the model
    # answers as the assistant instead of inventing its own speaker label.
    prompt_str = "".join(prompt_list[-10:]) + "AI:"

    openai.aiosession.set(ClientSession())
    response = await openai.Completion.acreate(
        model="text-davinci-003",
        prompt=prompt_str,
        temperature=0.9,
        max_tokens=2048,
        top_p=1,
        frequency_penalty=0,
        presence_penalty=0.6,
        stop=["Human: ", "AI: "]
    )
    await openai.aiosession.get().close()
    return response.choices
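For completeness, this is roughly how the function gets called (a minimal usage sketch with a hard-coded history instead of the MongoDB lookup):

import asyncio

async def main():
    prompt_list = [
        "Human: Hi, my name is John.\n\n",
        "AI: Hello John! What can I do for you?.\n\n",
        "Human: What's my name?.\n\n",
    ]
    choices = await text_generator(prompt_list)
    print(choices[0].text.strip())

asyncio.run(main())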