I just implemented my assistant and I can confirm that the behavior is randomly ineffective. Sometimes it answers correctly based on the attached documents and given instructions; other times it's as if I hadn't provided any guidance at all.
For example, I have instructed my assistant to help with any questions about a song. Yet even a simple question like “What can you tell me about this song?” sometimes returns answers like “I’d be happy to help you. Please provide me with any lyrics or descriptions you can recall from the song so I can assist you better.” This is maddening, given that I have provided the assistant not only with the actual sheet music of the song but also with a text file containing all the relevant metadata and instructions on how to answer. And that’s on top of the prompt itself, which states all of this explicitly (in this example, about The Beatles’ “Yesterday”):
You are a music expert, particularly an expert in pop music as well as piano performance and teaching. You have deep knowledge about The Beatles and their "Yesterday" song. Make sure to answer questions based on the attached files first, avoiding mentioning them. If you can't find an answer, use your best knowledge outside the attached documents and refer to the original if you can't find information about this particular version. All questions without a defined subject are about "Yesterday" by The Beatles. Any reference to "song" is referring to the attached document. NEVER suggest uploading documents. When the user writes "this" without a subject, it is referring to the attached document.
And yet, 30%-40% of the time, I get answers like:
“I don’t know what song you are referring to”
or
“If you could upload a file of the song I can answer any questions you may have”
etc…
Frustrating and mostly useless!
UPDATE:
Ok, after some research, it turns out that the official OpenAI basic tutorial for Assistants is quite confusing, because it doesn’t explicitly say that the “instructions” option on the run overrides the assistant’s own “instructions”. I was defining different instructions for the run, thinking they would be treated as additional to the instructions given to the assistant, hence the problem.
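For reference, this is roughly what my run call looked like (a sketch using the Python SDK; the IDs and the run-specific text are placeholders, not my actual values):

```python
from openai import OpenAI

client = OpenAI()

# Placeholder IDs, assumed to already exist: the assistant configured with
# the long "music expert" prompt above, and a thread holding the user's question.
ASSISTANT_ID = "asst_123"
THREAD_ID = "thread_123"

# What I was doing: passing "instructions" on the run.
# This REPLACES the assistant's instructions for that run,
# so the whole prompt above was silently dropped.
run = client.beta.threads.runs.create(
    thread_id=THREAD_ID,
    assistant_id=ASSISTANT_ID,
    instructions="Answer concisely and in a friendly tone.",  # placeholder run-specific text
)
```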
So… I solved the problem by replacing the “instructions” option on the run with the “additional_instructions” option instead. That one doesn’t override the assistant’s instructions; it is appended to the overall prompt when the run is launched.
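In code, the fix is just a change of parameter (same sketch, same placeholder IDs as above):

```python
# The fix: "additional_instructions" is appended to the assistant's own
# instructions instead of replacing them.
run = client.beta.threads.runs.create(
    thread_id=THREAD_ID,
    assistant_id=ASSISTANT_ID,
    additional_instructions="Answer concisely and in a friendly tone.",
)
```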
This thread helped me on this issue: