Put the whole prompt in the system message when there is no user?

I’m just using the API to retrieve information on a topic - there is no ‘user’.

Since I don’t have any need for tiered authority, I might as well give my entire prompt the highest authority: the system message.

It makes sense to me, but I often find that with LLMs it is best to stick to conventional usage rather than what seems logical, so I’m not sure.

What do you think?


Good to see you again.

Well, the user message is required. You could place the prompt in both places, but it’s generally better to have the “instructions” in the system prompt and the question in the user message.

Another thing you can do is a one-shot (or two-shot, etc.) where you write up a sample user message AND the assistant response, then add the new question as the final user message. This can help the model stay more accurate.
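For instance, a minimal sketch with the Python SDK (the model name and the example question/answer pair are placeholders I made up):

```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        # Instructions go in the system message
        {"role": "system", "content": "You are an expert in marine biology. Answer concisely."},
        # One-shot example: a hand-written user question plus the ideal assistant answer
        {"role": "user", "content": "How do clownfish avoid anemone stings?"},
        {"role": "assistant", "content": "A mucus coating on their skin keeps the anemone's nematocysts from firing."},
        # The actual question goes in the final user message
        {"role": "user", "content": "Why do some cephalopods change color?"},
    ],
)
print(response.choices[0].message.content)
```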

Hi, if internally you call the “user” message something different, like “task description”, would that make it easier to wrap your head around the “chat” format?

What I was trying to say was that since there is no actual user in my scenario (this is an ETL pipeline), I don’t have to worry about any safety issues. The entire prompt is coming from me, so it is safe to put it all in the system message - I only need to worry about what produces the optimal response.

Ah, OK. Yes, there is no need to worry about security if the input comes only from you (though beware of chained AI operations, as you might get a surprise from a previous step).

From my personal experience in apps like this, when I use the “system” message as the “situation setup” and the “user” message as the task input, chat models produce better results than when I try to push everything into the system message.


Interesting.

The general form of my requests is pretty typical:

system message:

You are an expert in this general area.

Here are some sample requests and responses.

user message:

A question about a specific topic within the general area.

And I wondered if it might be more powerful to focus the LLM on the ‘specific topic’ in the system message. Reading your response now, I think I’ll come up with a bit of a compromise solution.
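If it helps, here is one way that compromise might look as a messages list (the topic strings and examples are just illustrative placeholders):

```python
# A possible "compromise" layout: the system message names both the general
# area and the specific topic, while the user message still carries the question.
messages = [
    {
        "role": "system",
        "content": (
            "You are an expert in European medieval history, "     # general area
            "currently focused on 14th-century trade routes.\n\n"  # specific topic
            "Example request: ...\nExample response: ..."          # few-shot samples
        ),
    },
    {"role": "user", "content": "How did the Hanseatic League set grain prices in the Baltic?"},
]
```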

Thanks.

Ah, very interesting (and helpful). Thanks.


Personally, I try to specialize models as much as I can and keep a separate model for each specific task within the application, so yes, make it focus on a specific topic. Add context from a RAG engine if you can. Add a “task planner” step/model to help the answering model be more efficient. In the early stages, aim for the highest quality possible; later you’ll definitely find dozens of ways to reduce costs (and you’ll be able to keep doing so continuously).
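A rough sketch of that kind of pipeline, assuming a hypothetical `retrieve()` helper standing in for your own RAG lookup (the model names are placeholders too):

```python
from openai import OpenAI

client = OpenAI()

def answer(question: str) -> str:
    # retrieve() is a placeholder for your own RAG engine lookup
    snippets = retrieve(question, top_k=3)
    context = "\n\n".join(snippets)

    # Optional "task planner" step: a cheap call that breaks the question down first
    plan = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": "Break the question into short answering steps."},
            {"role": "user", "content": question},
        ],
    ).choices[0].message.content

    # Answering model: specialized system message; context, plan, and question in the user turn
    return client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system", "content": "You are an expert assistant for this specific task."},
            {"role": "user", "content": f"Context:\n{context}\n\nPlan:\n{plan}\n\nQuestion: {question}"},
        ],
    ).choices[0].message.content
```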


From one of my prompts:

You’re a highly professional and well-paid editor’s assistant specialized in rewriting vacation rental descriptions.

Your primary task is…
