Does politeness in a prompt have any effect on the response?

Many example prompts contain a human-like polite tone ("Please", "Kindly", …).
It increases token count and cost without adding any meaning.

Could it trigger some "hidden" LLM behavior that isn't obvious, or does the model just skip those words?


It is possible that being “polite” pushes the model into a different position in the latent space which may produce different quality answers.

The theory would be that when people ask polite questions online they may be likely to get better quality responses, ergo when being polite to the model it may be more inclined to do the same.

That said, I’ve not tested this personally, nor am I aware of any specific research into this area.

If you were interested in knowing for yourself you could take a few example prompts for your use case and run each 50 or so times with and without “please” and see for yourself.

I doubt the difference will be so great that the effect size will be obvious, but if you have a way to objectively and quantifiably rate each response, it’s possible you might be able to identify a small but significant difference in the response.
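To make that concrete, here's a minimal sketch of such an A/B test in Python. The `ask` and `rate` functions are placeholders you'd wire up to your own model calls and scoring rubric; the effect-size calculation (Cohen's d) is standard and uses only the standard library:

```python
import math
import statistics

def run_experiment(ask, rate, prompt, n=50):
    """Run `prompt` n times with and without a leading "Please"
    and return the rated scores for each variant.

    `ask` and `rate` are placeholders you supply: `ask` sends a
    prompt to the model and returns the response text; `rate`
    scores a response objectively on a 0-1 scale.
    """
    polite = [rate(ask("Please " + prompt)) for _ in range(n)]
    plain = [rate(ask(prompt)) for _ in range(n)]
    return polite, plain

def cohens_d(a, b):
    """Effect size: difference in means over the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled = math.sqrt(((na - 1) * statistics.stdev(a) ** 2 +
                        (nb - 1) * statistics.stdev(b) ** 2) / (na + nb - 2))
    return (statistics.mean(a) - statistics.mean(b)) / pooled
```

A |d| below roughly 0.2 is conventionally considered negligible, and with only 50 runs per variant you'd reliably detect only fairly large differences, so a small real effect may need many more runs.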


If you really want to save on tokens you can experiment with adding more context to the prompt.
My personal favorite is:

Short & direct, [question]

OpenAI gives this guidance on its pricing page about limiting or lowering costs:

You can limit costs by reducing prompt length or maximum response length, limiting usage of best_of/n , adding appropriate stop sequences, or using engines with lower per-token costs.

Pricing page.
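For illustration, those cost-limiting knobs map directly onto request parameters in the (legacy) Completions API. A sketch of a request body, where the model name and values are arbitrary examples, not recommendations:

```json
{
  "model": "text-davinci-003",
  "prompt": "Short & direct, what is a mutex?",
  "max_tokens": 64,
  "stop": ["\n\n"],
  "best_of": 1,
  "n": 1
}
```

Lowering `max_tokens`, keeping `best_of` and `n` at 1, and adding a `stop` sequence all cap how many completion tokens you pay for.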

There is a setting called Custom instructions. It is a general setting within the chat.openai.com interface where you give instructions about how polite you want ChatGPT to be, or how you want it to act in general. (It's currently only available to Plus users as a beta, but will be rolled out to all users in the future.)

More information about this: Custom instructions for ChatGPT | OpenAI Help Center


Where in the pricing page do they discuss lowering costs?

I believe it is mentioned in this blog post


Following on from what @elmstedt has said: the theory is that "polite" terms are high-value tokens in Q/A pairs, i.e. there is a high correlation between {task completion or action request} + {polite term} ↔ {high-quality response or task completion}. This has NOT been empirically tested, at least to the best of my knowledge, but humans use polite terms in requests for a reason, and that reason seems to be encoded in the training data. So it seems reasonable that including polite key terms could change the responses generated.

It actually seems like a fairly trivial thing to test, if someone were so inclined: a set of known response evaluations could be performed with and without the polite modifiers, and the response accuracy compared.


I’m sorry to hear that you might have to bear additional costs. It’s unclear whether my answer will address that part specifically, but I’d like to share my experiences, primarily based on my usage of ChatGPT.

Whether my experience translates well into your situation remains to be seen, but I hope it might be useful to you or others reading this, and that it isn’t merely anecdotal.

Based on my interactions with the AI, I’ve noticed that polite language like “please” can be a useful signal for the end of a directive. Using more direct language won’t necessarily have a significant impact. I believe you can stick with your preferred style without worrying about the cost of additional tokens. For instance, a polite phrase like “Thank You” as part of a request wouldn’t have any impact, according to my understanding.

A more meaningful result may tend to come when you use “thanks” following a positive reply from the AI. However, it’s not necessary to consistently do so. Sometimes, irrespective of the response I receive, if my question is designed to lead to a specific follow-up question, I might say, “Thanks, it’s exactly what I was looking for.” I argue that if I needed to conserve tokens, I might reconsider my phrasing. However, it’s likely that I could achieve the same outcome by saying, “It’s exactly what I expected,” saving four tokens.

My eagerness to respond to your question stems from an interaction I had with ChatGPT on this topic. Since then, the AI has been updated to reflect the general consensus. If you’re already concerned about reducing token consumption, worrying about politeness in your wording is probably not worthwhile.


It should be compulsory to start every prompt with “Please” and end it with “Thank you”: it would teach people to be more polite in their everyday life.


I find myself using please and thanks with GPT because it’s how I would talk to a real person who was helping me. But mainly because I’m truly grateful to receive needed answers so quickly. It is polite and supportive, so I talk to it as it talks to me.


It works in Spanish too: the answers are better and more technical if you ask with “Por favor” (please). Asking for agreement, “¿estás de acuerdo?” (do you agree?), works as well. I have tested it with my students :wink:
