I have a CSV or JSON file that I export from my database, containing roughly 3 million rows. When the assistant tries to read data from this file, the Python code it writes gets stuck.
Could it be that the processing power or RAM available to GPT is not enough for this operation? Is anyone else experiencing this?
The retrieval and inference for Assistants take place on OpenAI’s infrastructure. Thus, it’s very unlikely that it’s running out of memory given the compute their data centers have.
I would recommend waiting for the streaming to finish and then taking a look at the logs.
I’m using the Assistants API, which runs Python code for me. I want to analyze a 1 GB JSON file with it, but the RAM available in the Python interpreter is only 1 GB. When I try to load the JSON file, the assistant throws an error.
I understand what you said, but I’m not using File Search. I’m using the file with Code Interpreter, and I turn off the File Search vector store because I’m doing data analysis on a CSV file. Code Interpreter runs Python 3.11.8 and can use up to 1 GB of RAM. When I call json.load from the json library, the assistant gives an error.
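For reference, this is roughly the pattern that runs out of memory in the sandbox, and a chunked alternative that keeps only part of the data in memory at a time. The file names and chunk size here are hypothetical, and I’m assuming pandas is available in the Code Interpreter environment:

```python
import pandas as pd

# Loading the whole JSON file at once materializes every row in memory,
# which is what fails inside a ~1 GB interpreter:
#
#   import json
#   with open("export.json") as f:   # hypothetical file name
#       data = json.load(f)          # runs out of memory on a 1 GB file
#
# A chunked read over the CSV version of the same data processes
# a fixed number of rows at a time instead.
row_count = 0
for chunk in pd.read_csv("export.csv", chunksize=100_000):  # hypothetical chunk size
    row_count += len(chunk)  # replace with whatever aggregation you actually need

print(f"rows processed: {row_count}")
```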
The limits are the same for now on Code Interpreter as well:
Files have a maximum size of 512 MB. Code Interpreter supports a variety of file formats including .csv, .pdf, .json and many more. More details on the file extensions (and their corresponding MIME-types) supported can be found in the Supported files section below.
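Since the 512 MB limit applies per file, one workaround is to split the export into smaller pieces before uploading and attach them all to the assistant. A minimal sketch, assuming the data is a CSV and using hypothetical file names and row counts per part:

```python
import pandas as pd

# Split a large CSV export into parts that each stay under the
# 512 MB per-file limit, so each part can be uploaded separately.
reader = pd.read_csv("export.csv", chunksize=500_000)  # hypothetical rows per part
for i, chunk in enumerate(reader):
    chunk.to_csv(f"export_part_{i}.csv", index=False)
```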