Hi there, I'm still exploring the basics and I have a problem implementing an open-ended conversation with GPT-3 in the Python console.

I have no idea how to go on from here. So far I've implemented a one-time prompt-completion, but I can't find a way to make a continuous prompt-completion similar to the chat example on the Playground.

Here’s the code I’ve written so far:


import openai

openai.api_key = "some key"  # replace with your actual API key

myPrompt = "The following is a conversation with an AI assistant. The assistant is helpful, creative, clever, and very friendly.\n\nHuman: Hello\nAI:"

start_sequence = "\nAI:"
restart_sequence = "\nHuman: "



response = openai.Completion.create(
	engine="text-davinci-002",
	prompt=myPrompt,
	temperature=0.9,
	max_tokens=150,
	top_p=1,
	echo=True,
	frequency_penalty=0.0,
	presence_penalty=0.6,
	stop=[" Human:", " AI:"]
)

print(response.choices[0].text)

Feel free to criticize my code and suggest what I should do. Thanks😁


Paste the response into the prompt?

Or, use tkinter’s text entry box. I have used it to replicate some of the functions of the playground for one of my own projects.
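
If the tkinter route sounds useful, here's a minimal sketch of the idea (not my actual project code, just an illustration): a Text widget holds the transcript, an Entry box takes input, and the on_send handler is where the prompt-building and API call would go.

import tkinter as tk

def on_send():
    user_text = entry.get()    # read what the user typed
    entry.delete(0, tk.END)    # clear the entry box
    # build the prompt from the transcript and call openai.Completion.create(...) here
    output.insert(tk.END, f"Human: {user_text}\n")

root = tk.Tk()
root.title("Mini playground")

output = tk.Text(root, height=20, width=60)   # conversation transcript
output.pack()

entry = tk.Entry(root, width=60)              # text entry box
entry.pack()

tk.Button(root, text="Send", command=on_send).pack()
root.mainloop()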

My understanding is that the straightforward Completion endpoint only knows about what you pass it in the prompt (and suffix, if you give it). So to have something more than a one-time exchange, you have to kind of fake it by passing the previous chat in its entirety as the first part of the prompt in your new completion.

So it's not really an ongoing chat (holding previous prompts/completions in local memory between endpoint requests); it's more like a series of one-offs that simply include more conversational context, roughly like the sketch below.
(anyone correct me if I’m wrong)
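
Something like this is what I mean, as a rough sketch (a few parameters copied from the code above; untested, so treat it as an illustration rather than a drop-in answer): keep the whole transcript in one string, append each new Human line, send the full string as the prompt, then append the completion so the next request sees it.

import openai

openai.api_key = "some key"

conversation = (
    "The following is a conversation with an AI assistant. "
    "The assistant is helpful, creative, clever, and very friendly.\n"
)

while True:
    user_input = input("Human: ")
    conversation += f"\nHuman: {user_input}\nAI:"

    response = openai.Completion.create(
        engine="text-davinci-002",
        prompt=conversation,  # the full history goes in every request
        temperature=0.9,
        max_tokens=150,
        stop=[" Human:", " AI:"],
    )

    reply = response.choices[0].text
    print("AI:" + reply)
    conversation += reply  # append the completion so it becomes context next turn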


You are correct, that is exactly how it works. GPT-3 has no memory outside of its prompt and/or fine-tuning.
