"The Art to Start" (series about Prompt Design)

Hi everybody,

Vlad aka Merzmensch here. I want to present to you my ongoing series about Prompt Design in GPT-3. My approach is creative writing and pushing GPT-3 to the borders of its “imagination”. There are already very useful tutorials on the OpenAI site and video presentations by Andrew (and many more). This series reflects my experiences with GPT-3 in making movies, podcasts, stories, and artworks.

Feel free to comment, experiment, and brainstorm.

Part 1: Introduction to Playground
https://medium.com/merzazine/the-art-to-start-designing-prompts-for-gpt-3-introduction-89848c208007

7 Likes

Part 2: https://medium.com/merzazine/the-art-to-start-tabula-rasa-acb1dfa0bbe6
What happens if you begin without any prompt?

1 Like

Part 3: The Art to Start: Settings Game. Designing Prompts for GPT-3 (Series) | by Vlad Alex (Merzmensch) | Merzazine | Medium
Do mess with the settings. It will help you understand how GPT-3 creates.

1 Like

Loved it, Alex.

I was reading through:

https://medium.com/merzazine/the-art-to-start-designing-prompts-for-gpt-3-introduction-89848c208007

Temperature (0.0–1.0) — controls randomness, between boring redundancy and chaotic fiction.

I tried a few Q&A bots and sometimes they get stuck, for example:

Bot1: I love you
Bot2: I love you
Bot1: I love you
Bot2: I love you
Bot1: I love you
Bot2: I love you
Bot1: I love you
Bot2: I love you
Bot1: I love you
Bot2: I love you

etc.

What are the good settings to avoid this “stuck” situation with them?

1 Like

Thank you! Looks like either a too-low Temperature or Top P.
My favorite settings are for creative writing:
Temperature: 0.8-0.9
Top P: 1

In case of possible redundancy, you can try increasing the Presence Penalty (to bring more topics into the game). Check out my settings experiments: The Art to Start: Settings Game. Designing Prompts for GPT-3 (Series) | by Vlad Alex (Merzmensch) | Merzazine | Medium
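For anyone who wants to try these values in code, here is a minimal sketch of how they might be passed to the API with the `openai` Python library. The exact prompt, engine name, and penalty values are my own illustrative assumptions, not a definitive recipe; the point is that the presence and frequency penalties are the knobs that discourage the "I love you" loop above.

```python
# Sketch: settings aimed at breaking repetition loops in a two-bot dialogue.
# Temperature 0.8-0.9 and Top P 1 follow the recommendation above; the two
# penalty values are illustrative assumptions.
creative_settings = {
    "temperature": 0.85,       # enough randomness to avoid boring redundancy
    "top_p": 1,                # sample from the full token distribution
    "presence_penalty": 0.6,   # nudges the model toward new topics
    "frequency_penalty": 0.5,  # penalizes tokens it has already repeated
}

# Hypothetical call (requires an API key; engine name is an assumption):
# import openai
# response = openai.Completion.create(
#     engine="davinci",
#     prompt="Bot1: I love you\nBot2:",
#     max_tokens=60,
#     **creative_settings,
# )
# print(response.choices[0].text)
```

With penalties at 0 the model can happily echo the same line forever; raising them makes each repetition progressively more "expensive", so the dialogue drifts to new ground.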

1 Like

Hey! Small world. Good to see you here.

1 Like

What about for factual output? And how do you prime the model? Do you actually prompt it with something beforehand when you are looking for factual answers?

In my experience, I rely on the knowledge from GPT-3's training datasets, since my topics are rather overarching, referencing human culture before 2020.

To keep it factual, you can use endpoints (still a field to explore for me) :slight_smile:

"To keep it factual, you can use endpoints (still a field to explore for me)": what do you mean by this?