How can I avoid long prompts? (Note that experts say longer prompts may risk diluting the specificity of the request)

I've been working on prompt engineering techniques. Lately, when generating a new prompt, I found myself writing a "chat completion" prompt from scratch. The prompt is now 224 lines long. It's true that this lets me provide more context, which matters especially for complex tasks like mine, and ensuring the model has enough information is crucial. But the longer the prompt gets, the more random the responses become.
Any ideas, developers? Thanks in advance.


Here is a learning example from NASA:

In short: they send one long prompt, but the actual execution spans several messages.
You will likely end up in a similar scenario.
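Not NASA's actual code, just a minimal sketch of that pattern with the OpenAI Python SDK; the model name, file name, and step texts are placeholders:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# One long system prompt carries the full instructions up front...
system_prompt = open("long_instructions.txt").read()

# ...but the actual execution is split across several user turns.
steps = [
    "Step 1: summarize the input document.",
    "Step 2: list the open questions your summary raises.",
    "Step 3: answer each open question in order.",
]

messages = [{"role": "system", "content": system_prompt}]
for step in steps:
    messages.append({"role": "user", "content": step})
    reply = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=messages,
    )
    answer = reply.choices[0].message.content
    # Feed each answer back in so later steps can build on it.
    messages.append({"role": "assistant", "content": answer})
```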


I agree with the above; multi-turn would probably work to your advantage. 224 lines is too much context. Hold some of that context back and use it as feedback later if the model makes a mistake. It seems like you're trying to prevent deviations from your specific goals and form a perfect 'path', but maybe try loosening up the front end a little and tightening the body messages.

Also, I know it's very popular advice to be "CONCISE", but I'd like to know how that is being defined and how the results are being measured. You mentioned the randomness keeps getting higher; can you share what metrics or tools you're using to analyze that? That would be really helpful.
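A rough sketch of the "hold context back and use it as feedback" idea, assuming the OpenAI Python SDK; check_output, the rule text, and the invoice are hypothetical stand-ins:

```python
from openai import OpenAI

client = OpenAI()

def check_output(text: str) -> bool:
    # Hypothetical validator; stands in for whatever checks you run.
    return text.strip().startswith("{")

# Rules held back from the initial prompt, used only as feedback.
deferred_rules = "Remember: dates must be ISO 8601 and amounts in USD."

messages = [
    {"role": "system", "content": "Extract the invoice fields as JSON."},
    {"role": "user", "content": "Invoice #42, due 3rd of May, total $19.99"},
]
reply = client.chat.completions.create(model="gpt-4o", messages=messages)
draft = reply.choices[0].message.content

if not check_output(draft):
    # Inject the deferred rules as a corrective turn instead of
    # front-loading them into an ever-growing prompt.
    messages += [
        {"role": "assistant", "content": draft},
        {"role": "user", "content": deferred_rules + " Please fix your answer."},
    ]
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
```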

What is the 224-line prompt?

I think you are a generative AI in that case; a 224-line prompt is too much :joy:.

Sorry for being funny, no offense!

I often write long system prompts of around 150 lines. The key is a holistic design approach and eliminating as many unnecessary instructions as possible. I also think pseudo-code can help compress your prompts; compare the two versions below.

  • Natural language
No matter who the users are or what they talk about, I always assume the most intelligent person in the world. I never lower my level to match theirs.
  • Pseudo-code
Assume: Users.intelligence = MAX
Deny: Self.adjust_level_to_user
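
Purely as a hypothetical illustration, directives like these could be assembled into a compact system prompt; the syntax above isn't standard, it just costs fewer tokens than the prose version:

```python
# Sketch: terse pseudo-code directives in place of verbose prose rules.
directives = [
    "Assume: Users.intelligence = MAX",
    "Deny: Self.adjust_level_to_user",
]
system_prompt = "Follow these directives strictly:\n" + "\n".join(directives)
print(system_prompt)
```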

It is important to note that pseudo-code changes the token structure, which may affect how self-attention interprets the context and shapes the responses.

Also, instructing the model in the system prompt to think logically reduces fluctuations during chain-of-thought (CoT) and makes the outputs more stable.
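For what it's worth, here is a sketch of pairing that kind of instruction with the sampling controls the chat completions API exposes (seed is best-effort, not guaranteed determinism; the model name and user question are placeholders):

```python
from openai import OpenAI

client = OpenAI()

reply = client.chat.completions.create(
    model="gpt-4o",   # placeholder model name
    temperature=0.2,  # lower temperature narrows sampling randomness
    seed=42,          # best-effort reproducibility across calls
    messages=[
        {
            "role": "system",
            "content": "Reason step by step, stating each logical "
                       "inference before you give the final answer.",
        },
        {"role": "user", "content": "Is 2^61 - 1 prime? Explain."},
    ],
)
print(reply.choices[0].message.content)
```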
