Be polite or be rude: will you get what you deserve accordingly from ChatGPT?

Yesterday I watched the latest Microsoft Ignite session about GitHub Copilot. The beginning explains a lot about ChatGPT's behavior and how to influence it.
What are your experiences with the way you behave towards AI? Do you get noticeably worse and ruder responses when you are "a bit" rude yourself, and better, kinder ones when you are polite?
Of course, it’s always good to say thank you to AI because you never know what will happen in a few years.
Video here:


Ultimately, one wants a smarter AI. Neither saying pretty please nor threatening the AI with punishment will really help it further its understanding of particle physics…

(A text generator that produces likely sequences of words is not going to rise up and conquer mankind, meting out punishments in proportion to unkind words.)


Well, moving slightly away from the question of rude versus polite, I had an interesting conversation about something related just now. It revolved around whether you can get improved outputs by incorporating an incentive mechanism into your prompt, i.e. by including instructions such as "you will receive a lower/higher evaluation of your work if you exhibit behavior X/Y/Z."

I hadn't tried this approach before but am curious to run a few tests tonight. Does anyone have experience with this?
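For anyone wanting to try the same thing, here is a minimal sketch of how such a test could be set up, assuming the standard `openai` Python client with an `OPENAI_API_KEY` in the environment; the model name and the sample task are placeholders, and the "incentive" system prompt is just one possible framing.

```python
# Minimal A/B sketch: same task, with and without an incentive-framed system prompt.
from openai import OpenAI

client = OpenAI()

TASK = "Summarize the key trade-offs between REST and gRPC in five bullet points."

SYSTEM_PROMPTS = {
    "baseline": "You are a helpful assistant.",
    "incentive": (
        "You are a helpful assistant. Your answer will be graded; "
        "you will receive a higher evaluation for precise, well-structured work "
        "and a lower evaluation for vague or incomplete answers."
    ),
}

def run_variant(system_prompt: str) -> str:
    """Send the same task under a given system prompt and return the reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whichever model you are testing
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": TASK},
        ],
        temperature=0,  # reduce sampling noise so the prompt framing is the main variable
    )
    return response.choices[0].message.content

for name, prompt in SYSTEM_PROMPTS.items():
    print(f"--- {name} ---")
    print(run_variant(prompt))
```

A single pair of outputs will not tell you much; running each variant several times over a handful of different tasks and comparing them blind would give a fairer picture.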