Common system message for multiple API requests

I am doing question-and-answer generation from a document. I have a system message explaining how questions should be generated. Then in the user message I provide a passage and ask for 3 questions and their answers back.

For example, I have 20 different unrelated passages. I want to set one system message initially and then use the same system message for all 20 passages across different API calls.

Is this possible?

Hey there and welcome to the community!

Yes, this is definitely possible. If others would like to chime in, they can, but this problem sounds pretty straightforward actually.

Here’s a basic outline:

Define the System Message and Passages

```python
import openai

openai.api_key = 'your-api-key-here'

system_message = "Generate three questions based on the following passage."

passages = [
    "Passage 1 text goes here...",
    "Passage 2 text goes here...",
    "Passage 3 text goes here...",
    # Add more passages as needed
]
```

Make API Calls to Generate Questions

```python
def generate_questions(system_message, passage):
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # You can choose a different model based on your preference
        messages=[
            {"role": "system", "content": system_message},
            {"role": "user", "content": passage}
        ]
    )
    return response.choices[0].message['content']

for passage in passages:
    questions = generate_questions(system_message, passage)
    print(f"Questions for passage:\n{questions}\n")
```

This code does the following:

  • Sets up a system_message that is used across all API calls to guide the question generation.
  • Iterates through each passage, sending it to the API with the system_message as context.
  • Prints the generated questions for each passage.

Basically, you can just make a simple for loop to run through all your passages, and store your system prompt in a variable so it can be reused on each iteration.

You would need to adapt this to work with your own RAG setup, but it should show you the principles of what you’re asking for here at least.
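One small variation on the loop above, in case it helps: instead of just printing, you can keep each passage paired with its generated questions in a structure you can save or feed into your RAG pipeline later. This is just an illustrative sketch — the API call is stubbed out here so it runs without a key, and you'd swap the stub for the real `generate_questions` from the code above.

```python
def generate_questions(system_message, passage):
    # Stub standing in for the real API call, so this sketch runs offline
    return f"Three questions about: {passage[:20]}"

system_message = "Generate three questions based on the following passage."
passages = ["Passage 1 text goes here...", "Passage 2 text goes here..."]

# Keep each passage paired with its generated questions
results = [
    {"passage": p, "questions": generate_questions(system_message, p)}
    for p in passages
]
```

From there, `results` can be dumped to JSON or whatever format your setup expects.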


Thank you for the reply.
I think my query is a little different.

I am looking to understand whether API calls maintain state, so that we don't need to send the system prompt with every call. That way we could save on the input tokens processed and keep the same context.

Ah, I see now!

The quick answer is no, not really. As far as I'm aware, these models and the entire framework around them are stateless.

My guess would be that you could maintain threads with the Assistants API, but there are quite a lot of caveats atm.

That being said, I would still recommend checking out this resource, which explains input formatting for chat completions.

And of course, the information on the Assistants API may help you decide whether it is right for you.

Either way, you can't escape the system prompt itself, because it's like an Init() for the AI. What you can do is manage the context you feed into each API call.
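To make "manage the context" concrete: since each call is stateless, you rebuild the full message list every time — system prompt first, then whatever prior turns you choose to re-send, then the new user message. A minimal sketch (the helper name `build_messages` is mine, not from any library):

```python
def build_messages(system_message, history, new_user_message):
    """Assemble the full message list for one stateless API call."""
    messages = [{"role": "system", "content": system_message}]
    messages.extend(history)  # prior user/assistant turns you choose to re-send
    messages.append({"role": "user", "content": new_user_message})
    return messages

msgs = build_messages(
    "Generate three questions based on the following passage.",
    [],  # no prior turns yet
    "Passage 1 text goes here...",
)
```

The system prompt is re-sent on every call; what you control is how much of `history` you carry along (all of it, a truncated window, a summary, etc.) to trade context against token cost.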