How to Nudge Models into Factual Outputs - Case in Point

I do think that prompt engineering is going to be a thing, at least for the near future.
Here’s a case in point where I show how models can be nudged into generating factual data.

Yes, prompt engineering is essential to getting the correct output from transformers. In my work on AGI components, I have found that some very small changes can have profound impacts when cleaning up and designing prompts. With the correct prompt you can get transformers to perform scientific reasoning, ethical and moral evaluation, planning, and problem solving.
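
To make that concrete, here is a minimal sketch of the kind of small framing change being described, assuming the legacy `openai` Python client and the `Completion` endpoint from the GPT-3 era; the question, prompt wording, and engine choice are illustrative, not taken from the post above.

```python
import openai  # legacy pre-1.0 client, as used with the original GPT-3 API

openai.api_key = "YOUR_API_KEY"  # placeholder

question = "Why did the Roman Empire fall?"

# Same question, two framings: the second adds a single sentence of framing
# intended to nudge the engine toward cautious, factual wording.
bare_prompt = f"Question: {question}\nAnswer:"
nudged_prompt = (
    "You are a careful historian who answers only with well-established facts "
    "and says 'disputed' when historians disagree.\n\n"
    f"Question: {question}\nAnswer:"
)

for prompt in (bare_prompt, nudged_prompt):
    completion = openai.Completion.create(
        engine="davinci",   # base GPT-3 engine
        prompt=prompt,
        max_tokens=150,
        temperature=0.2,    # low temperature keeps the completion conservative
        stop=["\n\n"],
    )
    print(completion.choices[0].text.strip())
    print("---")
```

Comparing the two completions side by side is usually enough to see how much work one framing sentence can do.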

Do you have specific examples? It might help the community. 🙂

It will all be in my book, which I'm revising right now!

Do you have any draft copies? I'd love to read one!

Not yet, I’m just starting on draft 2. Thanks for the interest! It will be free once it’s out.

I think this is probably the hardest part of using GPT-3… getting it to give you the outputs you want. Something you might want to try: master/slave dialectics. Perhaps, if you want some specific permissions, you could make GPT-3 think it's a hard drive? I believe that automatically grants certain permissions and abilities.

I think there’s something very interesting to be said about that, and definitely something to look deeply into, imho.

Also, @cristi, thanks for the cybersec videos on GPT-3. Good stuff.

Hm. Perhaps the term 'slave/master' carries some negative weight with it, but I am referring to a philosophical perspective that describes a very specific relationship between a master and a slave, through which both subjects cause a chain of reference. Without being overly verbose, I think it has real merit (beyond asking the AI to spank me and call me big daddy). A philosopher I would recommend on this exact topic is Hegel. I'll share a video of what I mean; maybe it can help you with your own work too. The professor in these videos is really excellent.

I think you make a good point. With the master/slave dialectic, I was attempting to get the most accurate zero-shot prompts, i.e. prompts without any examples. However, building more and more on such a dynamic can end up having catastrophic ramifications. Definitely something to keep in mind. In your experience so far, how do you think coach/trainee would differ in producing good results? It seems less rigid to me; perhaps that's the goal?

So, after thinking about your proposition a bit further, I noticed that when I say master/slave and you say coach/trainee, we're actually alluding to the same thing. In fact, the same dynamic can be found in other paired social roles: teacher/student, father/son, judge/lawyer. I think this is where GPT-3 can be most leveraged, by using these kinds of nuances that carry implicit dynamics resembling master/slave. For example, when a teacher asks a student something, a good student usually answers, because they are following the rules of the underlying social order.

If you can think of more examples like these that are of particular interest to you, I'd be interested in testing them out in more detail and sharing my results.
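
As an illustration of those implicit role dynamics, here is a small, hypothetical helper that wraps a question in a teacher/student frame; the wording is mine rather than from the thread, and the resulting string can be sent to the `Completion` endpoint exactly as in the sketch earlier in this topic.

```python
def teacher_student_prompt(question: str) -> str:
    """Wrap a question in a teacher/student frame so the implicit social
    dynamic (a good student answers carefully) does part of the prompting work."""
    return (
        "The following is a conversation between a patient Teacher and a "
        "diligent Student. The Student answers carefully and admits when "
        "they do not know something.\n\n"
        f"Teacher: {question}\n"
        "Student:"
    )

print(teacher_student_prompt("What causes the seasons on Earth?"))
```

Swapping "Teacher"/"Student" for "Coach"/"Trainee" or "Judge"/"Lawyer" is a one-line change, which makes it easy to compare how rigid each role pair feels in practice.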

I can imagine a few reasons for the hesitation.

From what you've explained earlier, I've come to agree with you. I even think a rating of 1 is not too shabby for master/slave. Nevertheless, I'm glad to have had this discussion with you, since you pushed me to find more appropriate approaches. Thank you for sharing your results.

In my experience, GPT-3 gives better results not through the "master/slave" dialectic but, especially with the DaVinci base engine, when you begin to think like GPT-3. Infiltrate the AI. In that case you are not somebody writing a prompt for GPT-3 to continue; rather, you plant ideas via "inception". You are GPT-3, and the prompt is part of your own AI-driven text.

From that perspective, I mostly get interesting completions without any human-machine friction.
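
For anyone who wants to try it, here is a rough sketch of what that "inception" style could look like, again assuming the legacy `openai` client; instead of addressing the model, the prompt is written as the opening of the text you want, so the base engine simply continues it. All wording is illustrative.

```python
import openai  # legacy pre-1.0 client

openai.api_key = "YOUR_API_KEY"  # placeholder

# The prompt is not an instruction to the model; it reads like the start of
# the model's own document, so the most natural continuation is the answer.
inception_prompt = (
    "Frequently asked questions about basic physics.\n\n"
    "Q: What is the boiling point of water at sea level?\n"
    "A: 100 degrees Celsius (212 degrees Fahrenheit).\n\n"
    "Q: Why is the sky blue?\n"
    "A:"
)

completion = openai.Completion.create(
    engine="davinci",        # the DaVinci base engine mentioned above
    prompt=inception_prompt,
    max_tokens=80,
    temperature=0.3,
    stop=["\n\n"],
)
print(completion.choices[0].text.strip())
```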
