Meta-Prompting Concept: Asking ChatGPT for the best prompt for your desired completion, then having it revise that prompt before you use it

Has anyone employed this approach? When crafting prompts, I’ve found it helpful to literally ask ChatGPT to help create the prompt: I describe the goal to it and ask what the most effective prompt for that outcome would be.

Asking the AI to design the best prompt for achieving your desired behavior, asking it to further improve that prompt, and then using that very same prompt with the AI is what I describe as “meta-prompting.”

Here is an example of the approach using GPT-4: https://chat.openai.com/share/77c59aeb-a2d6-4df8-abf6-42c8d19aba3d
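
If it helps to see the loop spelled out in code rather than in chat, here is a minimal sketch of that same workflow using the OpenAI Python SDK. To be clear, the model name, the example goal, and the little `ask()` helper are just placeholders of mine, not anything from the shared chat:

```python
# Minimal sketch of the meta-prompting loop described above.
# Assumes the OpenAI Python SDK (openai>=1.0) and OPENAI_API_KEY in the environment;
# the model name and the example goal are placeholders, not recommendations.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4"  # placeholder; use whichever model you normally prompt

def ask(prompt: str) -> str:
    """Send a single user message and return the model's reply text."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

goal = "Write a one-paragraph product description for a reusable water bottle."

# Step 1: ask the model to design a prompt for the goal.
draft_prompt = ask(
    "What would be the most effective prompt to give you so that you produce "
    "the following outcome? Reply with only the prompt itself.\n\n"
    f"Goal: {goal}"
)

# Step 2: ask it to revise and improve that prompt before using it.
improved_prompt = ask(
    "Improve the following prompt so it produces an even better result. "
    "Reply with only the revised prompt.\n\n"
    f"{draft_prompt}"
)

# Step 3: use the improved prompt as the actual prompt.
print(ask(improved_prompt))
```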

P.S. I wrote a quick 80-page ebook about this approach, available for Kindle, in case it can be of value to the community: https://www.amazon.com/dp/B0C6PRFNL5

8 Likes

You are making the assumption that ChatGPT is a good prompt engineer. It’s not. Your prompt improvement does provide additional detail and context, but you are missing several key elements of a great prompt. Why you rushed to publish a book on this, I’m not sure, but congrats on selling some copies.

Never seen someone start a prompt with “Dear ChatGPT” before…

If you want to see this technique performed expertly and trained on prompt engineering techniques that are proven, you can do a free trial of my AI tools at SmartGPT.

3 Likes

@MarkFulton Thanks for chiming in here with some thoughtful feedback!

I think you might be underestimating ChatGPT’s emergent ability as a prompt engineer.

I would suggest you try this technique yourself, although I realize that if someone is already an expert prompt engineer, this approach may not be as advanced as other methods that exist, including the few-shot techniques referenced in research papers.
But to answer your question: compared with how most people use ChatGPT, I am confident that meta-prompts can greatly improve AI outcomes, and I wanted to share that knowledge with others.

Regarding the “Dear ChatGPT” part, you can totally ignore that; it’s like the word “the” in a Google search, which gets ignored. (That example with the Mona Lisa prompt was actually the first time I ever saw it included.)

Regards,

1 Like

I’ve found that prompting abstractly really helps, but not in the sense of asking for a prompt.
I’ve tried it a lot and was never pleased, but I disagree with the idea of not releasing premature books. It’s still a good thing for beginners and gives people ideas.

2 Likes

I’ve deleted the mentions of the ebook, so why is this post being flagged as spam?

As a moderator, I did roll back the first post so that the link and image are restored.

For those reading this, there have been a few flags raised on this topic, so if it seems like readers are missing something between the lines, they are.

Hopefully this topic is copacetic at this point. :slightly_smiling_face:

1 Like

It’s fascinating to hear about the untapped potential of GPT-4 as a prompt engineer. Given your confidence in its abilities, one might wonder if this insightful message was itself a clever product of GPT-4’s prompt engineering prowess! After all, with your belief in GPT-4’s capabilities, who’s to say you didn’t ask it to draft a message highlighting its own strengths? Now that would be a delightful twist – a testament to GPT-4’s skill, and perhaps a sly nod to the age-old saying, “Practice what you preach!”

1 Like

Hi Mark. I went to your site. It looks like a great tool and I think I’m going to buy it. I just wanted to ask you: how does your tool compare to something like PromptPerfect? We use that now, so I feel like it would be a one-or-the-other type of situation, right? Thank you!

I made my life easier by writing a GPT that does the meta-prompting for me; it works very well for my needs:
https://chat.openai.com/g/g-tH8fLNSDw-prompt-artisan

1 Like

Looks like there is a new paper from Stanford; they at least used the name “Meta-Prompting” :laughing:
Meta-Prompting: Enhancing Language Models with Task-Agnostic Scaffolding

1 Like

I know, I saw it a few days ago, thanks for sharing! :smiley: Here is a related quote from that paper that draws some contextual parallels to my definition of meta-prompting from the book, albeit my definition is far less technical, and the parallel is perhaps only coincidental. Of course, the context is totally different:

"2 Meta Prompting
Intuition and Abstract Overview. The modus operandi of meta-prompting is to use a model2 to coordinate and execute multiple independent inquiries and subsequently synthesize their responses to render a final response. This mechanism, in principle, endorses an ensemble approach, drawing from the strength and diversity of independent specialized models to collaboratively address and tackle multifaceted tasks or problems. We posit that while a single, general-purpose model might deliver valuable and useful insights into generic queries, combining the perspectives and conclusions of multiple domain-specific models (which we also refer to as experts) has the potential to yield more comprehensive, robust, and accurate solutions.
2 *Our use of the term model refers to the application of an LM with certain prompt templates to play a specified “role.” We typically only use a single LM (e.g., GPT-4) to implement all the models in an execution."*
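
To make that idea concrete for anyone skimming, here is a rough sketch of how I read the paper’s coordinator/expert scaffolding, using the OpenAI Python SDK. To be clear, this is not the authors’ implementation; the expert roles, the prompts, and the model name are all illustrative assumptions on my part:

```python
# Rough sketch of the coordinator/expert idea from the paper as I read it: a single
# LM plays several prompted "expert" roles plus a coordinator that synthesizes their
# independent replies into one final answer. NOT the authors' code; roles, prompts,
# and model name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4"  # the paper notes a single LM can implement all the "models"

def run_role(system_prompt: str, task: str) -> str:
    """Run one prompted 'model' (role) on the task and return its reply."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": task},
        ],
    )
    return response.choices[0].message.content

def meta_prompt(task: str) -> str:
    # Independent domain-specific "experts" (placeholder roles).
    expert_roles = {
        "Mathematician": "You are an expert mathematician. Solve the task step by step.",
        "Programmer": "You are an expert programmer. Solve the task, showing any code you use.",
        "Critic": "You are a careful reviewer. Point out likely mistakes in reasoning about the task.",
    }
    expert_replies = {name: run_role(prompt, task) for name, prompt in expert_roles.items()}

    # The coordinator synthesizes the independent replies into a final response.
    synthesis_input = "Task:\n" + task + "\n\n" + "\n\n".join(
        f"{name} said:\n{reply}" for name, reply in expert_replies.items()
    )
    return run_role(
        "You are a coordinator. Combine the experts' replies below into one final, "
        "accurate answer to the task, resolving any disagreements.",
        synthesis_input,
    )

print(meta_prompt("What is the sum of the first 100 positive integers?"))
```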

So some students from Stanford read this thread and turned it into a research paper? Congrats!

1 Like