Assistant creates fictitious references in spite of clear instructions

I am building an assistant with the API. It has several documents from different companies in a vector store, and the assistant pulls relevant information from this pool based on some criteria. I have instructed the assistant to provide a reference to the document (the specific company name) whenever it refers to a section of it.
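
For reference, the setup is roughly the standard Assistants API pattern with file_search; below is a minimal sketch with placeholder names, model, and vector store ID rather than my exact configuration:

    from openai import OpenAI

    client = OpenAI()

    # The vector store is already populated with the company documents;
    # "vs_..." stands in for the real vector store ID.
    assistant = client.beta.assistants.create(
        name="Company Docs Assistant",   # placeholder name
        model="gpt-4o",                  # placeholder model
        instructions=(
            "Answer only from the attached documents. "
            "For every section you cite, name the source document "
            "and the specific company it belongs to."
        ),
        tools=[{"type": "file_search"}],
        tool_resources={"file_search": {"vector_store_ids": ["vs_..."]}},
    )
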
In spite of specific instructions in the Assistant (in three places):

    • Responses must only include use cases, examples, and insights that are directly derived from the provided documents. Any external or fictional scenarios are strictly prohibited.
    • Avoid making inferences or assumptions about other companies or outcomes without documented evidence.
      Under “Don’t do”:
  1. Only include information that is explicitly supported by the provided content or documented use cases. Do not create or make up information, i.e. fictitious data that cannot be supported.

I keep running into this issue at an alarmingly high frequency: the output includes fictitious company names (e.g. ABC Corp).

When interacting with the assistant (to troubleshoot), I get a response in which it apologizes for not following the instructions and acknowledges that they state no fictitious company names. It then fails again after two more tries.

Does anyone have any idea how to deal with this?

Is there anything on the roadmap where OpenAI plans to address the memory issue in Assistants (retaining and actually learning from the conversation in the prompt)?

:thinking:

Are you writing:

Don’t do:

  1. Only include information that is explicitly supported by the provided content or documented use cases. Do not create or make up information, i.e. fictitious data that cannot be supported.

That is basically an instruction to do the exact opposite; it’s a double negative.

I’m not saying that’s necessarily your problem, but it could be a start.
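
For example, you could flip that item into a positive rule, something along these lines (just a sketch of the idea, the exact wording is up to you):

    Cite only companies and documents that were actually retrieved from the vector store. If no retrieved document supports a point, say so rather than inventing an example or a company name.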