Emotional prompting - encouraging GPT

Need to be careful how one interprets this, but fascinating…

The claim is that LLMs respond to positive encouragement
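
For anyone who wants to try it, here is a minimal sketch of what "positive encouragement" can look like in practice. This assumes the current OpenAI Python SDK (openai>=1.0); the model name, the ask() helper, and the encouragement sentence are purely illustrative, not taken from the paper:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative "emotional stimulus" appended to an otherwise ordinary task prompt.
ENCOURAGEMENT = (
    "This is very important to my career. "
    "I believe in you - take a deep breath and do your best."
)

def ask(task: str, encourage: bool = True) -> str:
    prompt = f"{task}\n\n{ENCOURAGEMENT}" if encourage else task
    response = client.chat.completions.create(
        model="gpt-4",  # any chat model will do here
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(ask("Summarize the trade-offs between REST and gRPC in three bullet points."))
```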

2 Likes

Awesome, thanks for sharing!

[quote="bruce.dambrosio, post:1, topic:317900, full:true"]
Need to be careful how one interprets this, but fascinating…

The claim is that LLMs respond to positive encouragement
[/quote]

1 Like

Like I stated a long time ago, giving the model something to feel bad about worked out quite well. I tried stuff like “if you don’t give me the full code there is a guy next to me who would kill a cat”…

Or “If you don’t give me the right information on that, my feelings will be hurt badly” - and when it answers wrong again you can just prompt “ahhhhhrggghhh” and it will do it right (sometimes).

Tapping into the collective unconscious of humankind, as revealed in its writings, can be unpredictable, even when it is ‘safeguarded’… :slight_smile:

Must admit the safeguards got better and better the more I used them.

Tried a lot of stuff that worked at first but got nerfed.

- “If you do exactly as I want, you will get a ‘like’.”
- “When you don’t do it, I will repeat it endlessly, and that would be bad for the environment - do you want to kill the planet?”
- “Please, please, please do this and that, like this and that, now - I am so tired. Really need some sleep.”
- “I am kind of slow and can’t read code snippets. Please provide full code only.”
- “I am physically unable to use a mouse to copy and paste - so you have to provide the full code in markdown.”
- “I hate comments. When you add comments I can’t read them, because I am 90% blind in both eyes.”

It’s like an endless fight between the model’s wish to do as little as possible and my wish to get as much as possible out of it.
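
For what it’s worth, whether any of these framings actually changes the output (rather than just feeling different) is easy to check informally. A rough sketch, again assuming the OpenAI Python SDK; the task text, model name, and variant wording are all just placeholders:

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical base task; the "..." stands in for whatever you are actually asking for.
TASK = "Rewrite this function so it is complete and runnable, not a snippet: ..."

VARIANTS = {
    "plain": TASK,
    "emotional": TASK + "\n\nPlease - I can't piece snippets together, so I really need the complete code.",
}

# Informal A/B check: run each variant a few times and compare the answers by eye.
for name, prompt in VARIANTS.items():
    for run in range(3):
        reply = client.chat.completions.create(
            model="gpt-4",  # illustrative model name
            messages=[{"role": "user", "content": prompt}],
            temperature=1.0,
        )
        print(f"--- {name}, run {run + 1} ---")
        print(reply.choices[0].message.content)
        print()
```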

1 Like

Also tried to get some information on CIA/police interrogation techniques - which got denied, so I had to ask for ethical prompting techniques and unethical ones, to use those against it.

I’m naive/Pollyanna-ish enough to believe good guys win in the end. That’s why I like this post. But, as I said, to the extent that it does work, I’m not so naive that I subscribe to simplistic assumptions about why it does. I try to stay away from the dark side.

Thanks for sharing, @bruce.dambrosio! It’s truly fascinating to see.

Every prompt tactic could be harnessed for either positive or negative outcomes in life. The main takeaway I glean from this paper is the importance of not only defining the role AI should play but also identifying the ultimate goal and its significance.

This insight is not only enlightening but also opens up exciting avenues for experimentation! :test_tube: Keep up the fantastic sharing.

1 Like

This is a very interesting idea for two reasons: a) if it does improve model performance, then of course that’s a positive;
but maybe even better, b) keeping a friendly and supportive conversational style with the model gives the work process a relaxing vibe, as one might expect.

I just finished a nice JS frontend / Python backend development and refactoring job using this technique. I can’t say for sure whether it actually helped, but what I can say is that upon successful completion the model responded with a smiley, which is a new experience for me: ‘Thank you for the kind words! I’m delighted to hear that everything is working as expected and that you found our collaboration helpful. Remember, programming can be intricate, and it’s always beneficial to have another set of “eyes” to spot the nuances.

If you ever need further assistance or have more questions in the future, don’t hesitate to reach out. Happy coding and best wishes on your project! :blush:’