How can I avoid long prompts? (Note that experts say longer prompts might risk diluting the specificity of the request.)

I've been working on prompt engineering techniques. Recently, when generating a new prompt, I found myself writing a "chat completion" prompt from scratch, and it has now grown to 224 lines. It's true that a longer prompt lets me provide more context, which matters for complex tasks like mine, and ensuring the model has enough information is crucial. But as the prompt gets longer, the responses seem to become less consistent.
Any ideas, developers? Thanks in advance.

Here is an instructive example from NASA:

In short: they send one long prompt, but the actual execution spans several messages.
You will likely end up in a similar scenario.
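To make that concrete, here is a minimal sketch of the multi-message idea: keep a short, stable system message, break the long prompt into smaller task steps, and send one step per call while carrying the conversation history forward. Everything here (the system prompt text, the step texts, the `build_conversation` helper) is a placeholder I made up for illustration, not something from the original post or from NASA; the message-list shape matches the common chat-completion format of role/content dicts.

```python
# Hypothetical decomposition of one long prompt into smaller steps.
SYSTEM_PROMPT = "You are an assistant for a data-processing task."  # assumed role

STEPS = [  # assumed split of the original 224-line prompt
    "Step 1: summarize the input schema.",
    "Step 2: propose the transformations to apply.",
    "Step 3: produce the final output.",
]

def build_conversation(steps, prior_replies):
    """Build the message list for the next call: the system prompt,
    then alternating user-step / assistant-reply pairs, then the
    next unanswered step (if any)."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    for step, reply in zip(steps, prior_replies):
        messages.append({"role": "user", "content": step})
        messages.append({"role": "assistant", "content": reply})
    if len(prior_replies) < len(steps):
        messages.append({"role": "user", "content": steps[len(prior_replies)]})
    return messages

# After the model has answered step 1, the next call's payload is:
msgs = build_conversation(STEPS, ["The schema has three tables."])
print([m["role"] for m in msgs])  # system, then one Q/A pair, then step 2
```

Each call stays small and focused, so the model only has to attend to the current step plus the replies it already gave, rather than one 224-line wall of instructions.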