How to replicate any results using GPT3 prompts

Hi Jeff,

I hope you are doing well! :slight_smile:

I’m relatively new to this, and I’m on the path of learning it… but I have pretty specific needs, which I guess would take a bit of time to accomplish. I was wondering if you’d be open to some collaboration? Or even taking me on as one of your new clients :slight_smile:

Let me know what you think!


Hi Pete,

Hope you are doing well too!

Interested to hear more about your specific needs.
Many projects can be accomplished quite quickly with the new AI models :slight_smile:

Feel free to send me some further info or we can schedule a chat if you prefer.


Hey Jeff,

Sounds great! I don’t think we can send private messages here, so any chance you have Discord? Or what channel would you recommend? Would be great to chat about it.


You on LinkedIn?


Sent you a message! :slight_smile:

Lots of great info in this thread Jeff - thanks for that!

I’ve been playing with your prompt example to generate an entire article & had a few questions I hope you won’t mind answering.

  1. Where does your prompt end & the output begin? It seemed to work best for me if the end of the prompt was “First create an 11 paragraph article outline:”

  2. I could not replicate the full output you got in your example. Did you get that entire output from a single request?

For me, most of the time I just get the outline. I then run it again, adding the first outline section heading it generated to the end of the prompt, and GPT-3 finishes the article, although it still didn’t replicate your results as robustly.
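The two-pass workaround described above can be sketched roughly like this. The `two_pass_article` helper and the `complete` callable are illustrative names, not part of any official API; `complete` stands in for whatever wrapper you use to send a prompt to GPT-3 and get text back:

```python
# Sketch of the two-pass workflow: the first call often returns only the
# outline, so a second call appends the first outline heading, nudging
# the model to continue past the outline into the article body.
def two_pass_article(base_prompt, complete):
    """`complete` is any callable taking a prompt string and returning
    the model's generated text (e.g. a thin wrapper around the API)."""
    outline = complete(base_prompt)
    # Take the first heading the model generated, e.g. "1. Introduction"
    first_heading = outline.strip().splitlines()[0]
    # Re-run with the outline plus its first heading appended, so the
    # model picks up writing the article from that section.
    body = complete(base_prompt + "\n" + outline + "\n\n" + first_heading)
    return outline, body
```

This just mechanizes the manual copy-and-resubmit step; how well it works still depends on the prompt and sampling parameters.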

I usually either get a list of numbered points or a paragraph or two for each section, but not both.

Up until now, I didn’t think I could get long form content like this without few-shot prompts or a fine-tuned model, which are obviously more expensive. So thank you for opening my eyes to better prompt engineering :slight_smile:

Hi Jim,

I build the prompts interactively with Davinci 2: temperature = 0.7, frequency/presence penalties both set to approximately one third of their max value. These are starting values; you then play with the parameters as you test.
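Read as request parameters for the GPT-3 Completions endpoint, those starting values look roughly like this (assuming both penalties max out at 2.0, so one third of max is about 0.66; treat these as a starting point, not a recipe):

```python
# Rough starting parameters as described above, expressed as a payload
# for the legacy GPT-3 Completions endpoint.
params = {
    "model": "text-davinci-002",  # "Davinci 2"
    "temperature": 0.7,           # T = .7
    "frequency_penalty": 0.66,    # ~1/3 of the 2.0 maximum
    "presence_penalty": 0.66,     # ~1/3 of the 2.0 maximum
}
```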

Then I type something like:

“I need an expert to demonstrate how GPT3 can be used to write an article at least as well as” and hit submit. Davinci will then generate some text, which I either accept or edit to steer the prompt creation in the direction I wish. It’s an interactive process that allows Davinci 2 to partially guide the content creation & prompt evolution using its own knowledge & patterns.

When your prompt is complete to the point it will write a full article, then you can identify the sections of the prompt that can become variables (to be inserted at runtime to create an unlimited number of articles on different topics).

First, use interactive generation to create the prompt, then use the created prompt with the variables inserted to determine article titles/content for subsequent articles.
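A minimal sketch of the runtime-variables step: the finished prompt becomes a template, and per-article values are substituted in before each request. The placeholder names (`topic`, `audience`) and the template wording before the final line are illustrative, not the exact prompt:

```python
# Once the interactively-built prompt works, turn the topic-specific
# parts into placeholders and fill them in at runtime for each article.
PROMPT_TEMPLATE = (
    "I need an expert to demonstrate how GPT3 can be used to write "
    "an article about {topic} for {audience}.\n\n"
    "First create an 11 paragraph article outline:"
)

def build_prompt(topic, audience):
    """Insert runtime variables into the prompt template."""
    return PROMPT_TEMPLATE.format(topic=topic, audience=audience)
```

One template can then drive an unlimited number of articles on different topics, as described above.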
Does this make sense?