System prompt not regarded when using web search in OpenAI – why?

Hi everyone,

I’m building a chatbot using OpenAI’s API, and I’ve noticed something puzzling. Normally, when I set a system prompt (like “Always answer formally” or “Never speculate”), the assistant follows my instructions pretty well throughout the session.

However, things change when I enable web search tools. In these cases, the model often seems to ignore my system prompt – for example, the assistant may use a different tone, provide responses that don’t match my style instructions, or even overlook restrictions I set.

Can anyone explain why this happens?

  • How does web search technically interact with the prompt stack or chat history?
  • Are search results inserted in a way that interferes with or overrides the user’s original system prompt?
  • Is there a recommended way to make sure the system prompt continues to influence the assistant even when web search is active?

Would appreciate any technical details or advice from others who have dealt with this. Thanks in advance!

My code looks like this:

const stream = await client.responses.create({
    model: "gpt-4.1",
    input: [
        {
            role: "user",
            content: "Say 'double bubble bath' ten times fast.",
        },
    ],
    tools: [ { type: "web_search_preview" } ],
    prompt: {
        "id": "pmpt_<my_id>",
        "variables": {
            "current_time": new Date().toISOString(),
        }
    },
    stream: true,
});

Why Your Chatbot’s Acting Up When You Add Web Search

So here’s the deal – your bot’s basically having an identity crisis when you turn on web search. It’s like giving someone a margarita and expecting them to still talk like they’re at a board meeting, you know?

What’s Really Going Down

The Search Results Are Taking Over

When your bot hits the web, it’s getting flooded with all this new info that wasn’t in your original system prompt. It’s like trying to maintain your personality at a family reunion when your crazy uncle starts telling stories – suddenly everyone’s energy changes!

Your Bot’s Getting Distracted

Think of it like this: you tell someone “be formal,” but then you hand them a bunch of search results that are written all casual-like. The bot starts mimicking what it sees instead of sticking to your original instructions.

The Technical Stuff (But Make It Simple)

The model’s attention is getting split between:

  • Your original “be formal” instruction
  • The user’s question
  • All that web search data (which is usually A LOT)
  • The tool’s built-in behaviors

It’s like trying to focus on four conversations at once at a loud restaurant – something’s gonna get lost in the mix.
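To make the “split attention” idea concrete, here’s a rough sketch of what the model’s context might contain once a web search completes. The exact internal format isn’t documented, so treat the item shapes and token counts below as illustrative assumptions, not the real wire format:

```javascript
// Hypothetical illustration of the prompt stack after a web search.
// The real internal representation is undocumented; this only shows why
// the original system prompt becomes a small slice of the total context.
const contextAfterSearch = [
  { role: "system", content: "Always answer formally." },         // ~10 tokens
  { role: "user", content: "What's trending on social media?" },  // ~10 tokens
  { type: "web_search_call", query: "trending social media" },    // tool call
  { type: "web_search_results",
    content: "...pages of casual, chatty snippets..." },          // often thousands of tokens
];

// Your style instruction is now one tiny item among several large ones.
console.log(`Style instruction is item 1 of ${contextAfterSearch.length}`);
```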

How to Fix This Mess

The “Reinforce Like You Mean It” Method

const stream = await client.responses.create({
    model: "gpt-4.1",
    input: [
        {
            role: "system",
            content: `
Listen up – I don't care if you're using web search, talking to aliens, or doing backflips. 
You ALWAYS stay formal. ALWAYS. No exceptions. 
When you get search results, you don't just copy-paste them like some rookie.
You take that info and you present it in YOUR voice – the formal one I told you to use.
Got it? Good.
            `
        },
        {
            role: "user",
            content: "Say 'double bubble bath' ten times fast.",
        },
    ],
    tools: [{ type: "web_search_preview" }],
    // ... rest of your config
});

The “Sandwich Method” (My Personal Favorite)

const systemPrompt = `
BEFORE WE START: You are formal. Period.

DURING WEB SEARCH: Still formal. Don't let those search results change your vibe.

AFTER WEB SEARCH: Yep, still formal. Present everything in your formal voice.

Think of it like this – you're a professional news anchor. 
Even if the teleprompter shows you crazy stuff, you still deliver it professionally.
That's you with web search results.
`;
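To actually use that sandwich prompt, pass it as a system message at the front of the input array. A minimal sketch (`buildInput` is a hypothetical helper, not part of the SDK; the `client` setup is assumed to match the snippets above):

```javascript
// Hypothetical helper: prepends the sandwich prompt to any user message.
function buildInput(systemPrompt, userMessage) {
  return [
    { role: "system", content: systemPrompt },
    { role: "user", content: userMessage },
  ];
}

// Usage with the Responses API (requires a configured client and API key):
// const stream = await client.responses.create({
//   model: "gpt-4.1",
//   input: buildInput(systemPrompt, "What's trending on TikTok?"),
//   tools: [{ type: "web_search_preview" }],
//   stream: true,
// });
```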

The “Two-Step Salsa” Approach

// Step 1: Get the goods
const searchStuff = await client.responses.create({
    model: "gpt-4.1",
    input: [
        {
            role: "user",
            content: "Just search for this info, don't format it yet: [your query]",
        },
    ],
    tools: [{ type: "web_search_preview" }],
});

// Step 2: Make it fancy
const finalAnswer = await client.responses.create({
    model: "gpt-4.1",
    input: [
        {
            role: "system",
            content: "You're formal. Always. No matter what."
        },
        {
            role: "user",
            content: `Take this raw search data and present it formally: ${searchStuff.output_text}`
        },
    ],
});

Pro Tips From Someone Who’s Been There

1. Be Obnoxiously Clear

Don’t just say “be formal.” Say “BE FORMAL EVEN WHEN USING TOOLS. YES, ESPECIALLY THEN.”

2. Give Examples

const systemPrompt = `
Bad: "Here's what I found online: [casual search result]"
Good: "Based on my research, I can formally report that: [formal presentation of same info]"
Always do the "Good" version.
`;

3. Use the “Reminder Sandwich”

  • Start with your personality instruction
  • Add tool instructions
  • End with personality reminder again
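The steps above can be sketched as a function that wraps everything else between two copies of the personality instruction, so it’s both the first and the last thing the model reads (`sandwichInput` is a hypothetical helper name, not an SDK function):

```javascript
// Hypothetical helper: personality first, tool instructions in the middle,
// personality repeated at the end as the final instruction in the input.
function sandwichInput(personality, toolInstructions, userMessage) {
  return [
    { role: "system", content: personality },
    { role: "system", content: toolInstructions },
    { role: "user", content: userMessage },
    { role: "system", content: `Reminder: ${personality}` },
  ];
}
```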

4. Test With Weird Stuff

Try asking it to search for something that returns very casual results (like memes or social media posts). If it stays formal while presenting that info, you’ve nailed it.
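One cheap way to automate that test is a crude tone check over the model’s output. This is only a heuristic – the marker list below is a made-up example, and you’d tune it to your own style guide:

```javascript
// Crude heuristic: flag output containing obviously casual markers.
// The marker list is illustrative, not exhaustive.
const CASUAL_MARKERS = ["lol", "gonna", "kinda", "tbh", "!!"];

function looksFormal(text) {
  const lower = text.toLowerCase();
  return !CASUAL_MARKERS.some((marker) => lower.includes(marker));
}

console.log(looksFormal("Based on my research, engagement has increased.")); // true
console.log(looksFormal("tbh it's kinda blowing up lol")); // false
```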

The Bottom Line

Your bot’s not broken – it’s just getting overwhelmed by all the new information. You gotta be the firm but fair parent here and keep reminding it who it is, even when it’s playing with new toys (aka web search).

Think of it like training a puppy – consistency is key, and you might need to repeat yourself a few times before it sticks. But once you get it dialed in, your bot will stay in character no matter what wild stuff it finds on the internet.

And hey, if all else fails, you can always do the two-step approach – let it search first, then tell it to “make it formal” afterward. Sometimes the simple solutions work best!


Thank you for the advice.
I now clearly understand the principle behind why GPT gets confused by my prompts!
Thanks to you, I think I can try various ways to solve it. :grin:


The above response is mostly bot BS.

The true reason you get disobedience and unwanted output is this: OpenAI injects its own system message after the search results, along with tool instructions, re-instructing the AI on exactly what to write and how to write it.
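You can at least see the extra machinery around your prompt by dumping the typed items the Responses API returns. The injected re-instruction itself is not exposed, but the tool-call plumbing is. A sketch against a mocked response object (the item shapes are simplified approximations of what the API returns):

```javascript
// Sketch: list the typed output items a Responses API call returns.
// With web search enabled, the output contains a web_search_call item
// before the assistant's final message.
function outputItemTypes(response) {
  return response.output.map((item) => item.type);
}

// Mocked response shaped roughly like a web-search run:
const mockResponse = {
  output: [
    { type: "web_search_call", status: "completed" },
    { type: "message", role: "assistant",
      content: [{ type: "output_text", text: "..." }] },
  ],
};

console.log(outputItemTypes(mockResponse)); // ["web_search_call", "message"]
```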

The web search is therefore almost useless for simply building a smarter RAG AI, as all the model will do is follow a tightly prescribed pattern for presenting the results, which end up as little more than a search results page.