GPT Assistant making assumptions instead of referencing knowledge files

We’re super excited to implement Assistants so we can have a smart chatbot within our application!

We’re running into one major snag, though. Even though I have retrieval enabled and have uploaded a comprehensive help doc, the assistant will often respond with things like “Look for something like a button that says…” rather than referencing the help doc. My theory is that the model knows about our business from its training data, so it tries to answer from what it thinks it knows instead of searching the uploaded file.

I’ve added “IMPORTANT: Please search the knowledge files before answering” to the assistant’s instructions, but it seems to disregard this completely.
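For reference, here is roughly how I'm setting things up, as a sketch with the openai Python SDK (beta Assistants endpoints with the retrieval tool). The file name, assistant name, and instruction wording are placeholders, not our real values; the instructions string shows the more forceful "files are authoritative, say so if they don't cover it" phrasing I'm experimenting with:

```python
INSTRUCTIONS = (
    "You are a support assistant for our product. The attached knowledge "
    "files are the only authoritative source about the product UI. Before "
    "answering any product question, search the knowledge files and base "
    "your answer on what they say. If the files do not cover the question, "
    "say so rather than guessing from general knowledge."
)


def create_support_assistant():
    # Import inside the function so the module loads even without the SDK.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # purpose="assistants" makes the uploaded file available to retrieval.
    help_doc = client.files.create(
        file=open("help_doc.pdf", "rb"),  # placeholder path
        purpose="assistants",
    )

    # Attach the retrieval tool and the uploaded file to the assistant.
    return client.beta.assistants.create(
        name="Support Bot",  # placeholder name
        model="gpt-4-1106-preview",
        tools=[{"type": "retrieval"}],
        file_ids=[help_doc.id],
        instructions=INSTRUCTIONS,
    )


if __name__ == "__main__":
    assistant = create_support_assistant()
    print(assistant.id)
```

Even with instructions along these lines, the model still answers from general knowledge more often than not.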

Hopefully someone from the OpenAI team is reading this and can respond.