Hello!
We are facing an issue when integrating GPT-4.1 into our AI systems via the API.
Even though we carefully followed the documentation for uploading and referencing files, the model fails to access or retrieve information from the attached documents during interactions. Instead, it hallucinates answers as if it couldn't read the documents at all.
An important point: when we test directly in the OpenAI Playground (using the Assistants area), the file access works perfectly. However, in our system’s API integration, the problem persists.
We have thoroughly reviewed our integration and the official documentation, and everything is implemented correctly. Moreover, when using other models (GPT-4 or GPT-4o) with the same logic and setup, there are no issues: the models access and interpret the documents as expected.
This issue only happens with GPT-4.1.
We also noticed that when using functions alongside file retrieval, the failure seems even more frequent with 4.1.
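For context, the shape of our integration looks roughly like this. It's a minimal sketch, not our production code: `build_assistant_payload` is a helper name we made up, the vector store ID is a placeholder, and the `log_answer_source` function tool is purely illustrative. The returned dict is what we pass to `client.beta.assistants.create(**payload)` in the OpenAI Python SDK.

```python
# Sketch of an assistant definition combining file_search with one function
# tool -- the combination where we see GPT-4.1 fail most often. The helper
# and the function tool name are ours, for illustration only.

def build_assistant_payload(model: str, vector_store_id: str) -> dict:
    """Build the kwargs for client.beta.assistants.create(**payload)."""
    return {
        "model": model,
        "instructions": "Answer strictly from the attached documents.",
        "tools": [
            {"type": "file_search"},
            {
                "type": "function",
                "function": {
                    "name": "log_answer_source",  # hypothetical function
                    "description": "Record which document an answer came from.",
                    "parameters": {
                        "type": "object",
                        "properties": {"filename": {"type": "string"}},
                        "required": ["filename"],
                    },
                },
            },
        ],
        "tool_resources": {
            "file_search": {"vector_store_ids": [vector_store_id]}
        },
    }


payload = build_assistant_payload("gpt-4.1", "vs_example123")
print(payload["model"])                       # gpt-4.1
print([t["type"] for t in payload["tools"]])  # ['file_search', 'function']
```

Swapping only the `model` field to an older model, with everything else identical, is enough to make retrieval work again in our tests.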
I would like to know if anyone else is encountering this issue and if there are any official recommendations or workarounds to fix or mitigate this problem.
Thanks in advance!
We're facing a similar issue ourselves. Any response from OpenAI?
No. I sent a message in the help chat explaining the situation, but their reply (quoted below) didn't help much: we had already followed every step they list a while ago, and the same issue still occurs.
"Here are some steps and considerations to help address the issue:
1. Verify File Upload and Referencing Logic:
- Ensure that the files are uploaded correctly to the API and that the file IDs are being referenced properly in your requests. Double-check that the file IDs are valid and match the ones returned by the API after the upload.
2. Check for Model-Specific Changes:
- GPT-4.1 may have subtle differences in how it handles file retrieval compared to earlier models. Review the API documentation for any updates or changes specific to GPT-4.1, especially regarding file handling and function calling.
3. Test with Simplified Requests:
- Try sending minimal requests to isolate the issue. For example, test file retrieval without using functions to see if the problem persists.
- If the issue occurs more frequently when using functions, consider testing with simpler function definitions or disabling functions temporarily to identify the root cause.
4. Review Rate Limits and Token Usage:
- Ensure that your requests are within the rate limits and token limits for GPT-4.1. Exceeding these limits can sometimes lead to unexpected behavior. Monitor the token usage for both the input and output to ensure that the context window isn’t being exceeded.
5. Compare Playground and API Settings:
- Check if there are any differences in the settings or parameters used in the Playground versus your API integration. For example, ensure that the same model version, temperature, and other parameters are being used.
6. Enable Logging and Debugging:
- Enable detailed logging in your system to capture the exact requests and responses. This can help identify any discrepancies or errors in the API calls. Compare the API responses for GPT-4.1 with those of GPT-4.0 or GPT-4o to pinpoint where the behavior diverges.
GPT-4.1 is a newer model, and there may be some edge cases or bugs that are still being addressed. OpenAI continuously works to improve model performance and resolve issues, so keeping an eye on release notes for updates is a good idea.
For the meantime, consider using GPT-4.0 or GPT-4o for file retrieval tasks until the problem with GPT-4.1 is resolved. Since these models work as expected in your setup, they can serve as a reliable fallback.
If the issue persists after completing these steps, please let us know so we can assist you further.
Best,
Chenny
OpenAI Support"
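For what it's worth, step 1 was the first thing we ruled out. A quick way to sanity-check it is to pull the ingestion status of every file in the vector store and flag anything that never reached "completed", since those files are invisible to file_search. A rough sketch, assuming the status strings reported by the vector-store files endpoint ("in_progress", "completed", "failed"); `unprocessed_files` is our own helper, fed with pairs taken from something like `client.beta.vector_stores.files.list(...)`:

```python
# Given (file_id, status) pairs from the vector-store files listing,
# return the IDs whose ingestion did not finish. Anything returned here
# cannot be retrieved by file_search, regardless of model.

def unprocessed_files(file_statuses: list[tuple[str, str]]) -> list[str]:
    """Return IDs of files whose ingestion status is not 'completed'."""
    return [fid for fid, status in file_statuses if status != "completed"]


statuses = [
    ("file-abc", "completed"),
    ("file-def", "failed"),       # e.g. unsupported format or extraction error
    ("file-ghi", "in_progress"),  # still indexing; queries won't see it yet
]
print(unprocessed_files(statuses))  # ['file-def', 'file-ghi']
```

In our case this list comes back empty, so every file is fully indexed, and GPT-4.1 still answers as if the documents weren't there.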
I believe there was/is a file system issue that is currently being addressed.
I’m experiencing the exact same issue. File retrieval works fine in the Playground, but not through the API. Has anyone received an official response from OpenAI or found a workaround for this? Any insights would be really helpful.
Yes, I sent a message explaining exactly what I described in this thread. They gave me some suggestions to help narrow down the problem, but they were all things I had already tried before asking for help: rereading the API documentation, reviewing what I put in the prompt, checking the permissions for OpenAI and for my own system, and so on. It still fails. The funny thing is that it works perfectly with the other models, but with this one, which is more accurate in countless other respects, it doesn't.
So, there’s hope that they’ll really fix it, but it’s taking a long time to address this bug. So many others were fixed much faster. All we can do is wait.
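Until it's fixed, a simple model fallback along the lines support suggested has kept us running. A sketch of the routing logic only, with the actual API call injected as a callable (`call_model` would wrap something like `client.beta.threads.runs.create_and_poll` in real code; the names here are ours):

```python
# Try GPT-4.1 first and fall back to gpt-4o if the call fails.
# call_model is any function you supply that actually hits the API;
# this sketch deliberately keeps the SDK out of the routing logic.

def ask_with_fallback(call_model, question: str,
                      models=("gpt-4.1", "gpt-4o")) -> tuple[str, str]:
    """Try each model in order; return (model_used, answer)."""
    last_error = None
    for model in models:
        try:
            return model, call_model(model, question)
        except Exception as err:  # in real code, catch the SDK's APIError
            last_error = err
    raise RuntimeError(f"all models failed: {last_error}")


# Demo with a fake backend that simulates 4.1 failing on retrieval:
def fake_call(model, question):
    if model == "gpt-4.1":
        raise RuntimeError("no file content retrieved")
    return f"[{model}] grounded answer"

print(ask_with_fallback(fake_call, "What does section 2 say?"))
# ('gpt-4o', '[gpt-4o] grounded answer')
```

It's a band-aid, not a fix: you lose 4.1's accuracy on the fallback path, but at least the documents get read.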