Long prompts seem to improve context but increase token usage and cost.
Short prompts are cheaper but less reliable.
How do you personally balance prompt length, cost, and accuracy?
With all due respect, I think your assumptions are incorrect.
Depending on the use case, short prompts can actually be more reliable with the latest reasoning models. In fact, long prompts risk obscuring the core instruction, which can lead to inaccurate responses.
Because there are effectively infinite use cases, each requiring custom designs and tools, your assertions are difficult to ascertain in general.
Then there is the cost factor: some people are cost-conscious, while others want robust results no matter the cost.
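To make the cost trade-off concrete, here is a back-of-envelope sketch. The per-1k-token prices and token counts below are hypothetical placeholders, not any provider's actual rates; the point is only that a 10x longer prompt does not make a request 10x more expensive once output tokens dominate:

```python
def prompt_cost(prompt_tokens: int, output_tokens: int,
                in_price_per_1k: float, out_price_per_1k: float) -> float:
    """Cost in dollars for one request, given per-1k-token prices."""
    return (prompt_tokens / 1000) * in_price_per_1k \
         + (output_tokens / 1000) * out_price_per_1k

# Hypothetical rates: $0.005 per 1k input tokens, $0.015 per 1k output tokens.
short = prompt_cost(200, 500, 0.005, 0.015)    # terse prompt
long_ = prompt_cost(2000, 500, 0.005, 0.015)   # verbose prompt

print(f"short: ${short:.4f}, long: ${long_:.4f}")
# The long prompt has 10x the input tokens but only ~2x the total cost here,
# because the (fixed) output tokens account for most of the bill.
```

Whether that extra cost buys you accuracy is exactly the empirical question per use case.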