AI Referencing Nonexistent File Uploads After Migrating from Assistants to Responses API

Hello everyone! We’re experiencing issues with our clients’ agents when migrating from the Assistants API to the Responses API. At first we thought it was an error caused by our prompts, but on further investigation we noticed that whenever we attach a vector store, the AI starts generating responses like “I see you’ve uploaded files” or “I can’t find anything in the files you gave me.” This is quite bothersome since all our agents are for customer service, and the vector stores are attached so the agent can advise the user - the agent is never meant to respond this way. We had to improvise a prompt to fix this temporarily, but it’s truly annoying.

Could you please review the file search implementation in the Responses API? To reproduce the issue, simply send any emoji, for example “:)”.
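For context, our setup looks roughly like this (a minimal sketch using the official openai Python SDK; the model name, instructions, and vector store ID are placeholders, not our real values):

```python
from openai import OpenAI

client = OpenAI()

response = client.responses.create(
    model="gpt-4o-mini",  # placeholder model
    # hypothetical customer-service instructions
    instructions="You are a customer service agent for ACME. Advise the user politely.",
    input=":)",  # sending just an emoji is enough to trigger the behavior
    tools=[{
        "type": "file_search",
        "vector_store_ids": ["vs_XXXXXXXX"],  # placeholder vector store ID
    }],
)

print(response.output_text)
# Frequently comes back as something like:
# "It looks like you've uploaded files. What information are you looking for in them?"
```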

METAPROMPT to temporarily fix this:

#IMPORTANT

You will be provided with several files containing information to answer users’ questions. These files are NOT uploaded by the user. PLEASE do not mention file names or anything related to file uploads that could put our privacy at risk. NEVER mention that you have documents in your responses, as this makes the responses less spontaneous. Examples of incorrect responses that you should NOT give:

  • You’re welcome! If you have any questions about the documents or the electoral process, feel free to ask.
  • It looks like you’ve uploaded files. What specific information are you looking for in them? :blush:
  • You’re welcome! If you want to know something about the files, let me know and I’ll help you. :blush:
  • It seems you have uploaded some files.

These examples are incorrect since you should never talk about documents or files, and the user can NEVER upload files. If the user sends a message and file_search_tool cannot find anything, do not respond with anything like ‘I cannot see files’; instead respond with something like ‘I cannot help you with that question’.
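Here is roughly how we inject this metaprompt on every call as the temporary workaround (again a sketch: METAPROMPT stands for the full text above, and the model name, agent instructions, and vector store ID are placeholders):

```python
from openai import OpenAI

client = OpenAI()

METAPROMPT = """#IMPORTANT
You will be provided with several files containing information to answer
users' questions. These files are NOT uploaded by the user. ..."""  # full text above

AGENT_INSTRUCTIONS = "You are a customer service agent for ACME."  # hypothetical

def answer(user_message: str) -> str:
    response = client.responses.create(
        model="gpt-4o-mini",  # placeholder model
        # prepend the metaprompt so it takes priority over the agent prompt
        instructions=METAPROMPT + "\n\n" + AGENT_INSTRUCTIONS,
        input=user_message,
        tools=[{
            "type": "file_search",
            "vector_store_ids": ["vs_XXXXXXXX"],  # placeholder vector store ID
        }],
    )
    return response.output_text
```

It mostly suppresses the “you’ve uploaded files” wording, but it’s fragile and shouldn’t be necessary.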

Reproducing this error in Playground:

You can add yourself to the pile of people affected by OpenAI’s bad decision to power inattentive AI models with their own injected system messages - an anti-pattern that damages applications: