Assistant API not recognizing file type based on the file name

I came across what might be a bug in how the API processes a file, compared with what the GPT-4 web page does.
When a file is uploaded via the GPT web page, the file is accessed via the path file_path = '/mnt/data/actual_file_name.csv' (CSV is just an example; the point is that the file is accessed by its actual name, with its extension).

But when a file is uploaded on a message in a Thread, the same file is accessed by its file ID, which has no extension:
file_path = '/mnt/data/random_file_id'

Because of this, the Assistants API version is unable to recognize the file type and attempts to determine it in various ways, which takes more time.
In the web page version, since it knows the file is a CSV from its name, it immediately opens the code interpreter and imports pandas, which is not the case with Assistants.
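With no extension on the path, the model has to sniff the file's contents instead. A minimal sketch of that kind of type detection, using only the standard library (the path and helper name here are illustrative, not anything the API exposes):

```python
import csv

def sniff_delimited_text(path, sample_bytes=4096):
    """Guess whether an extension-less file is delimited text (e.g. CSV)
    and, if so, which delimiter it uses. Returns the delimiter or None."""
    with open(path, "rb") as f:
        sample = f.read(sample_bytes)
    try:
        text = sample.decode("utf-8")
    except UnicodeDecodeError:
        return None  # binary content, not a text-based format like CSV
    try:
        dialect = csv.Sniffer().sniff(text)
        return dialect.delimiter
    except csv.Error:
        return None  # text, but not recognizably delimited

# Example: a file saved under an opaque ID, with no extension
with open("/tmp/file-abc123", "w") as f:
    f.write("name,age\nalice,30\nbob,25\n")

print(sniff_delimited_text("/tmp/file-abc123"))  # ','
```

This is the extra work the web version skips, because the .csv extension already tells it what to do.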


It’s not a bug. Files aren’t read by the file names you provide; only the code interpreter uses the file name you uploaded with.

Yes. But my point is that code_interpreter reads the file under different names in the Assistants API and the GPT web page. Because of this, there is an additional step to identify the file type before opening it with the right Python package (for example, pandas). Sometimes the Assistants API is unable to determine the file type at all, whereas the same file is read properly on the first go in the GPT web page.
Again, in both cases I am referring to the code_interpreter input only.

You can try it yourself. Create an assistant with code_interpreter enabled. Upload a CSV file in a new Thread message and ask what is in the file. Look at the code interpreter step.
Do the same with the normal web page GPT and see what the code_interpreter input is.

This may not be a bug. But this difference may be why the API executes multiple steps to identify the file type.


I agree, I also noticed a difference. The ChatGPT UI immediately recognizes the file type, while I always need to specify it in my prompt for the Assistants API to understand it right away.

Better UX would be for the engine to know the file type immediately, imo.

Ah I see what you mean

If you look at GPTs, they turn files into file-ID objects, then tell ChatGPT via prompt engineering: you have file IDs uploaded; fileID1 has the file name yourfilename.ext and is accessible at /mnt/sandbox/yourfilename.ext

In addition, you have access to a Python code interpreter tool. This tool allows you to run Python code and scripts. The 'pdfcreate.py' and 'modified_pdfcreate.py' scripts are located at '/mnt/data/pdfcreate.py' and '/mnt/data/modified_pdfcreate.py' respectively. You can execute these scripts within your Python environment to fulfill specific user requests related to creating or modifying PDF files as per the instructions given in the scripts.

It’s hard to get that line out of the system prompt, as well as the line specifying the time, because OpenAI fine-tuned the model not to divulge that information and also made it so that it doesn’t divulge the file names. It’s not a perfect fine-tuned experience, as people can still prompt it out. It was ugly trying to get it out for you; it took about an hour.
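Since the Assistants API does not inject this mapping for you, you can mimic the same trick: record each file’s original name at upload time and append a manifest to your own instructions. A rough sketch (the helper function and the example ID are made up; only the /mnt/data/<file_id> path convention comes from this thread):

```python
def file_manifest_instructions(files):
    """Build instruction text mapping file IDs to their original names.

    files: list of (file_id, original_filename) pairs that you tracked
    yourself when you uploaded the files. Returns text to append to your
    assistant or run instructions.
    """
    lines = ["You have the following files uploaded:"]
    for file_id, name in files:
        lines.append(
            f"- file id {file_id} is '{name}' and is accessible at /mnt/data/{file_id}"
        )
    return "\n".join(lines)

manifest = file_manifest_instructions([("file-abc123", "sales.csv")])
print(manifest)
```

With the extension in the instructions, the model can pick the right loader (e.g. pandas for a .csv) on the first code interpreter step instead of sniffing.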


Thanks a ton for sharing this info. That explains the difference I am seeing. I wonder how you were able to extract this system prompt from GPT. I think when using the API we should follow the same prompt to make it work the way GPT works. Thanks again for this info.

They should just allow us to use our own self-hosted sandbox/environment, where we can add whatever scripts or install whatever packages we want.

OpenAI kind of does, or at least doesn’t block it. “Code interpreter” is just a function call made by the API AI model. You can look at “system prompt reveals” of ChatGPT to see the special instructions that inform the AI of a Python tool function, put that language in your own system message, add another unrelated function to “turn on” function calling, and voilà: you get functions with code written for performing complex user tasks.

Probably true, but they still won’t allow you to pip install other packages. I tried asking the LLM to install pyodbc because I wanted to extract data from a cloud SQL database for use with CI, but no dice.

Your own self-hosted sandbox, which is what you are requesting, eliminates the question of what “they” allow.

Also, an industrious person can install wheels by unzipping them and a bit of Python programming.
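For pure-Python packages, that trick can be as simple as treating the .whl file as the zip archive it is. A sketch, with a tiny hand-made wheel standing in for a real uploaded one (real wheels also carry .dist-info metadata, which plain importing doesn’t need; compiled wheels like pyodbc would additionally need a matching binary build):

```python
import sys
import zipfile

def install_wheel(wheel_path, target):
    """Extract a pure-Python wheel (it is just a zip) and make it importable."""
    with zipfile.ZipFile(wheel_path) as whl:
        whl.extractall(target)
    if target not in sys.path:
        sys.path.insert(0, target)

# Demo: fabricate a minimal wheel containing one module, then install it.
with zipfile.ZipFile("/tmp/greet-1.0-py3-none-any.whl", "w") as whl:
    whl.writestr("greet.py", "def hello():\n    return 'hi'\n")

install_wheel("/tmp/greet-1.0-py3-none-any.whl", "/tmp/site-packages")

import greet
print(greet.hello())  # hi
```

In the code interpreter you would upload the .whl as a file first, then point install_wheel at its /mnt/data path.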