Use GPT-3 to generate GPT-3 prompts

The idea here is that sometimes you might want GPT-3 to do something you haven’t explicitly written a prompt for. Rather than authoring every prompt yourself, perhaps it can try to be a little original and generate its own prompts? I thought it produced decent results.

What do you think of this?

If you’d like to test it on your own:

[screenshot of the prompt]

Perhaps it’s important to note that the generated prompts actually do what I asked for. I’d recommend running whatever you ask it to generate; it can be fun to explore what mistakes we can get it to make and then clean them up.
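
For anyone who wants a starting point, here’s a minimal sketch of the two-step idea in Python using the Completion endpoint. The engine name and both prompt texts are placeholders, not the ones from my screenshot:

```python
import openai  # pip install openai

openai.api_key = "sk-..."  # your API key

# Step 1: ask GPT-3 to write a brand-new prompt (the "metaprompt" step).
meta = openai.Completion.create(
    engine="davinci",
    prompt=("Write an original, self-contained GPT-3 prompt that asks for "
            "something creative and useful.\n\nPrompt:"),
    temperature=0.8,
    max_tokens=100,
    stop=["\n\n"],
)
generated_prompt = meta.choices[0].text.strip()

# Step 2: run the prompt GPT-3 just wrote for itself.
result = openai.Completion.create(
    engine="davinci",
    prompt=generated_prompt,
    temperature=0.7,
    max_tokens=200,
)
print("Generated prompt:", generated_prompt)
print("Completion:", result.choices[0].text)
```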

An issue that arises:

The prompts themselves might be good (more tweaking is absolutely necessary), but one issue I keep running into is settings such as temperature. I’m not sure that if I add them as random variables in the prompt it will understand what I mean, and the Python code didn’t really translate very well either. Any ideas on how to get around that are welcome; one possible workaround is sketched below. Thank you for reading.
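
The sketch: instead of describing temperature as a random variable inside the prompt text, sample the settings in Python and pass them to the API call directly (the engine name and ranges here are just placeholders):

```python
import random
import openai

generated_prompt = "..."  # a prompt produced by the metaprompt step

# Sample the settings outside the model, where "random variable" has a
# precise meaning, rather than asking GPT-3 to interpret the phrase.
temperature = random.uniform(0.3, 0.9)
max_tokens = random.choice([64, 128, 256])

resp = openai.Completion.create(
    engine="davinci",
    prompt=generated_prompt,
    temperature=temperature,
    max_tokens=max_tokens,
)
print(f"temperature={temperature:.2f}, max_tokens={max_tokens}")
print(resp.choices[0].text)
```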

The temperature setting determines how random the completion will be. If you want the bot to be more factual, set it lower; if you want the bot to be more creative or less obvious in its answer, set it higher.
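
To make that concrete, here’s a small sketch running the same prompt at two temperatures (the engine name and prompt are just examples):

```python
import openai

prompt = "Q: What is the capital of France?\nA:"

# Low temperature: near-deterministic, good for factual answers.
factual = openai.Completion.create(
    engine="davinci", prompt=prompt, temperature=0.0, max_tokens=20)

# High temperature: more varied and creative, less predictable.
creative = openai.Completion.create(
    engine="davinci", prompt=prompt, temperature=0.9, max_tokens=20)

print("temp 0.0:", factual.choices[0].text.strip())
print("temp 0.9:", creative.choices[0].text.strip())
```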

So what exactly counts as lower or higher?

It depends on your testing with the bot. You have to try many times to figure out which temperature is best for your purpose.

I’ll share this article from Algowriting with you:
A simple guide to setting the GPT-3 temperature | Medium

Regards.

I call this method a “metaprompt”. In theory, with a series of metaprompts, GPT-3 can think about anything.

Neat name. I will adopt it from now on. Have you had any success with ‘meta-prompts’? If you don’t mind sharing, what’s the coolest application you can think of for it? I go with the typical ‘omg it can think creatively’, but I’m wondering how much further we can take it.

I was actually asking about how to tailor the responses I get from the secondary prompts, rather than the original one. What I’m curious about is: when the second prompt comes around, how can we anticipate its needs, settings-wise? Regardless, thank you for the useful information.

Yes! I have developed what I call “recursive cognition” with GPT-3, whereby you give it a topic and ask it to generate questions; those questions are then used to generate the next prompts, and so on.

The first prompt is used to generate questions.

The second one takes the output from the first to generate answers.

Here the cycle repeats with more questions.

This process can be augmented by adding facts via the Answers endpoint and empirical sources such as Gutenberg books and Wikipedia articles, but as you can see, GPT-3 has plenty of knowledge embedded.
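
If you want to reproduce the loop, here’s a rough sketch in Python; the prompt templates and engine name are simplified stand-ins, not my exact prompts:

```python
import openai

def complete(prompt, temperature=0.7, max_tokens=200):
    """Thin wrapper around the Completion endpoint."""
    resp = openai.Completion.create(
        engine="davinci", prompt=prompt,
        temperature=temperature, max_tokens=max_tokens)
    return resp.choices[0].text.strip()

topic = "the nature of memory"
for step in range(3):  # each pass is one cycle of "recursive cognition"
    # First prompt: generate questions about the current topic.
    questions = complete(
        f"Write three insightful questions about {topic}:\n\n1.")
    # Second prompt: answer the questions the model just generated.
    answers = complete(
        f"Answer these questions thoughtfully:\n\n1.{questions}\n\nAnswers:\n")
    topic = answers  # the answers seed the next round of questions
    print(f"--- cycle {step + 1} ---\n1.{questions}\n\n{answers}\n")
```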

Oh, I’m sorry, I’ve been rethinking this. Please tell me if I’m right: do you want to know the best settings for the GPT-3-generated prompt (the second one)? In that case, that would be a wonderfully curious problem.

Wow. That’s actually very interesting! Thank you for sharing this. I think it’s very neat, and the end of the recursive line should look very clean with the right prompts. Super neat.

@IsaacTheBrave, I really like the idea of a “metaprompt”! Very interesting initiative.

Now she is going to start learning on her own… How nice that we are on the machine side. That’s why I have never left my microwave alone while it was beeping…

I’m trying to think of a way to anticipate the token needs of a prompt, and one approach I can come up with that might be helpful is a separate function which, like you said, does some kind of rating of the results.

What I think might work here is training an ML model on prompts, outputs, and token counts, and seeing whether we can get reliable predictions. I say ML because other approaches run into problems when I think them through. The main one: rating prompts against some pre-defined parameters won’t work well. The calculator prompt was much larger than the others, yet the output was tiny (just one number!).
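
As a toy illustration of that ML idea (entirely made-up data, just to show the shape of it), one could regress observed output-token counts on prompt text features:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge

# Toy training data: (prompt, observed output tokens). Real data would come
# from logging actual completions; these pairs are invented for illustration.
prompts = [
    "Write a short story about a dragon.",
    "What is 847 * 23? Answer with just the number.",  # calculator case: tiny output
    "Summarize the plot of Hamlet.",
    "List ten creative uses for a paperclip.",
]
output_tokens = [180, 1, 90, 60]

vec = TfidfVectorizer()
X = vec.fit_transform(prompts)
model = Ridge().fit(X, output_tokens)

# Predict a token budget for a new prompt (add headroom before trusting it).
new_prompt = "Explain recursion to a five-year-old."
predicted = model.predict(vec.transform([new_prompt]))[0]
print(f"suggested max_tokens ~ {max(int(predicted), 16)}")
```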

I think a pre-trained model that anticipates settings for a prompt based on the prompt itself would be useful. I’m also curious whether I could sidestep the whole thing and just ask GPT-3 to provide the settings somehow. It would be kind of cool to see GPT-3 choose limits for itself. Definitely put a verification step on any such requests, though; the algorithm COULD go wild and cost you a fortune per request.
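
Here’s a rough sketch of the “ask GPT-3 for its own settings, then verify” idea; the JSON request format and the safety bounds are assumptions of mine:

```python
import json
import openai

def ask_model_for_settings(task_prompt):
    """Ask GPT-3 to propose temperature and max_tokens for a given prompt."""
    resp = openai.Completion.create(
        engine="davinci",
        prompt=('Suggest completion settings for the prompt below as JSON, '
                'e.g. {"temperature": 0.7, "max_tokens": 100}.\n\n'
                f"Prompt: {task_prompt}\nJSON:"),
        temperature=0.0,
        max_tokens=40,
    )
    try:
        data = json.loads(resp.choices[0].text.strip())
        return data if isinstance(data, dict) else {}
    except ValueError:
        return {}  # fall back to defaults if the model's JSON doesn't parse

def verified(settings):
    """Clamp model-proposed settings so a wild suggestion can't cost a fortune."""
    return {
        "temperature": min(max(float(settings.get("temperature", 0.7)), 0.0), 1.0),
        "max_tokens": min(int(settings.get("max_tokens", 100)), 256),  # hard cap
    }

task = "Write a haiku about recursion."
settings = verified(ask_model_for_settings(task))
result = openai.Completion.create(engine="davinci", prompt=task, **settings)
print(settings)
print(result.choices[0].text)
```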

If I get any successful results with either of those approaches, I will post them here.

That’s definitely an interesting direction, one I’ve written about in the past.

OpenAI has the instruct-series models that operate similarly.

Enjoying this thread!

Metaprompts are an area of GPT-3 that I am currently researching. I first came across the term in a paper by Laria Reynolds and Kyle McDonell (15 Feb 2021) and find the area extremely intriguing.

@daveshapautomator I also like your term “recursive cognition”. What you outline reminds me of cybernetic n-loop learning, another field I have spent time researching. The idea in n-loop learning is that as you add each loop, the level of learning becomes more sophisticated.

No doubt settings like temperature are always important, but so too, I think, are the semantic structures you use when setting GPT-3 up to develop metaprompts. Metaprompts will still be a reflection of your original prompt, not only in structure but in the nuanced associations between the words.

Awesome! Thanks so much for that tip :grinning:

Oh, and I noticed the reference you linked to, @m-a.schenk, was a different one (same authors and same month, though) than the paper I was thinking of. That reference also looks really good. Certainly, @kylemcdonell and Laria Reynolds are doing some really fascinating work!
