How can I avoid overly long prompts? (Note that experts say longer prompts may dilute the specificity of the request.)

I've been working on prompt engineering techniques. Recently, while generating a new prompt, I found myself writing a chat-completion prompt from scratch. The prompt is now 224 lines long. It's true that this lets me provide more context, which matters especially for complex tasks like this one, and ensuring the model has enough information is crucial. But as the prompt gets longer, the output seems to get more and more random.
Any ideas, developers? Thanks in advance.


Here is a learning example from NASA:

In short: they send one long prompt, but the actual execution spans several messages.
You will likely end up in a similar scenario.


I agree with the above; a multi-turn approach would probably work to your advantage. 224 lines is too much context. Hold some of that context back and use it as feedback later on if the model makes a mistake.

It seems like you're trying to prevent deviations from your specific goals and form a perfect 'path', but maybe try loosening up the initial prompt a little and tightening the follow-up messages.

Also, I know it's very popular advice to be "CONCISE", but I'd like to know how that is being defined and how the results are being measured. You mentioned the randomness keeps getting higher: can you share what metrics or tools you're using to analyze that? That would be really helpful.
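To make the multi-turn suggestion concrete, here is a minimal sketch of how the 224-line prompt could be split into a lean system message plus per-chunk turns for the Chat Completions `messages` format. All names and the chunking strategy are illustrative assumptions, not a prescription:

```python
# Sketch: instead of one huge prompt, keep a short system message and feed
# the remaining context as separate turns, ending with the actual task.
# Function name, chunking, and the "Noted." acknowledgement are assumptions.

def build_turns(core_instructions: str, context_chunks: list[str], task: str) -> list[dict]:
    """Assemble a Chat Completions-style `messages` list: a lean system
    prompt, one user turn per context chunk, then the task itself."""
    messages = [{"role": "system", "content": core_instructions}]
    for chunk in context_chunks:
        messages.append({"role": "user", "content": f"Context:\n{chunk}"})
        # A short assistant acknowledgement keeps each chunk as its own turn.
        messages.append({"role": "assistant", "content": "Noted."})
    messages.append({"role": "user", "content": task})
    return messages

turns = build_turns(
    "You are a careful assistant. Be concise.",
    ["Style guide excerpt...", "Project conventions..."],
    "Now perform the task described above.",
)
print(len(turns))  # 1 system + 2 turns per chunk + 1 task = 6
```

The resulting `turns` list can then be passed as the `messages` argument to a chat-completion call, and later feedback ("you deviated from the style guide") goes in as additional user turns rather than upfront prompt lines.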

What is in the 224-line prompt?