Assistants API Not Utilizing User-Uploaded Files

When using the Assistants API, messages that include a file nearly always run into issues.

Context: I’m using the Python client to access the Assistants API and integrate it into a Flask backend.

In my experience, I'll upload a file and send it along with a test message such as “what’s in this file” or something to that effect, and 9 times out of 10 I get the same canned response:

“Sorry, but I’m unable to view the contents of this file”.

However, sometimes it'll work with no problems. The issue doesn't seem to be correlated with the user message or the contents of the file, as I've tested many combinations and the results appear completely random.

Does anyone know how to get the Assistants API to work better in this regard?

Thanks!

Do you attach the file ID in the message like this?

const thread = await openai.beta.threads.create({
  messages: [
    {
      "role": "user",
      "content": "Create 3 data visualizations based on the trends in this file.",
      "file_ids": [file.id]
    }
  ]
});

Reference page

msg = client.beta.threads.messages.create(
    thread_id=thread_id,
    role="user",
    content=message_content,
    file_ids=[file_id] if file_id else []
)

Here’s the code I’m using to attach the file to the user message.
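
For completeness, the rest of the flow around that call looks roughly like this. It's a simplified sketch of my Flask handler, with the assistant ID and file path as placeholders, and it assumes the v1-style API where messages accept file_ids (same as the snippet above):

import time
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

ASSISTANT_ID = "asst_..."  # placeholder: an existing assistant with the retrieval tool enabled

# Upload the file for use with the Assistants API
uploaded = client.files.create(file=open("example.pdf", "rb"), purpose="assistants")

# Create a thread and attach the file to the user message
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="What's in this file?",
    file_ids=[uploaded.id],
)

# Run the assistant and wait for it to finish
run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=ASSISTANT_ID)
while run.status in ("queued", "in_progress"):
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)

# The newest message in the thread is the assistant's reply
messages = client.beta.threads.messages.list(thread_id=thread.id)
print(messages.data[0].content[0].text.value)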

I have also noticed this same behavior. Basically, sometimes a file that is included on the message thread can't be accessed by the LLM. It isn't 100% reproducible, which makes me wonder if there might be some kind of race condition or something similar going on. The patterns I've observed are:

  • Generally this is not an issue when the model is GPT-4.
  • It very often happens when the model is GPT-3.5 Turbo 0125 (the only 3.5 model that allows the retrieval tool).
  • When using 3.5, if I first upload the file and then create two messages that both reference the file, the first will usually fail, and the second, which runs after the first has completed, will often work.
  • Using the OpenAI Playground to test the assistant usually works, even when the model is 3.5, but when I call the API from my own application I often get the failures (no explicit error, the LLM just says it can't access the document). I also validated that the API payloads and responses from the Playground match what I send from my app, so there is no difference in the usage, just in the outcome.
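
If it is a race condition, one thing that might be worth trying is waiting for the uploaded file to report as processed before attaching it to a message. This is just a sketch of the idea (upload_and_wait is a made-up helper, and I haven't confirmed it actually avoids the failures); it polls the file object's status field after uploading:

import time
from openai import OpenAI

client = OpenAI()

def upload_and_wait(path, timeout=30):
    """Upload a file for the assistants tools and poll until it is processed (or we give up)."""
    uploaded = client.files.create(file=open(path, "rb"), purpose="assistants")
    deadline = time.time() + timeout
    while time.time() < deadline:
        f = client.files.retrieve(uploaded.id)
        if f.status == "processed":
            return f
        if f.status == "error":
            raise RuntimeError(f"File {f.id} failed processing")
        time.sleep(1)
    return uploaded  # timed out; return the file anyway

file_obj = upload_and_wait("example.pdf")
# ...then pass file_obj.id in file_ids when creating the message, as in the snippets above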

Hoping this gives a bit more insight into what might be causing the problem.