Clarification for max_tokens

I’m going with @overbeck.christopher here and staying on the conservative side. The wording in context, as original poster @nashid.noor pointed out, remains confusing and leaves me questioning:

Should I add my chosen max_tokens to the token count of my prompt, and keep that sum no larger than the context limit of the model I’m using?

These would be different numbers, but again I’ll work with the conservative approach for now.
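For what it’s worth, the conservative reading amounts to: prompt tokens + max_tokens ≤ context limit. A minimal sketch of that cap (the 8192 limit and the 1024 default request are placeholder values, not numbers from this thread):

```python
def conservative_max_tokens(prompt_tokens: int,
                            context_limit: int = 8192,   # placeholder limit, varies by model
                            requested: int = 1024) -> int:
    """Cap the requested max_tokens so prompt + completion fits the limit."""
    remaining = context_limit - prompt_tokens
    return max(0, min(requested, remaining))

# e.g. a 7500-token prompt against an 8192-token limit leaves room for 692
print(conservative_max_tokens(7500))  # 692
```

Under the other (non-conservative) reading, max_tokens would bound only the completion and you could request the full 1024 regardless of prompt length, which is why the two interpretations give different numbers.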