How do you work around the fact that I can't return a CSV from an Action endpoint?

Hello there,
I would like to ask how you deal with the fact that an Action can't return a CSV directly. What I would like to do is return a CSV from my endpoint and then use Code Interpreter to do data analysis on the returned data. My first idea was to save the CSV on my server, but the GPT can't access remote CSV files either… So is the only solution for the user to download the CSV and upload it to ChatGPT manually?

Hello!

What about converting it to a JSON file and back to CSV?
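
Something along these lines on the server side, just as an untested sketch (the file names are placeholders):

```python
import csv
import json

# CSV -> JSON: send the rows as a list of objects in the Action response
with open("data.csv", newline="") as f:
    rows = list(csv.DictReader(f))
payload = json.dumps(rows)

# JSON -> CSV: rebuild a CSV file from the same rows when you need one again
rows = json.loads(payload)
with open("rebuilt.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
```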

What I’m confused about here is the purpose of the CSV file. That affects how you could work around this problem. Could you provide more details so we can give more tailored advice?

But if the context is too big, won’t the GPT make mistakes? I might also have an issue with the size of the JSON I would be returning, since an Action can only return a 10 MB response… And when analyzing data, the file could be 10K rows, for example. That’s why uploading might be the way to go. As for the JSON→CSV conversion: when the GPT gets a response from the Action, it doesn’t automatically save it into a variable; it just receives the JSON as raw input, so I think it would make a ton of mistakes converting a longer JSON.

Well, again, I don’t know what you’re trying to do with the CSV, but in most cases I’d recommend handling the contents in chunks, perhaps line by line. Parsing it programmatically is the most logical and efficient way to go. However, we need more details in order to give you more helpful advice :slightly_smiling_face:
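
For instance, if the dataset is too big for a single response, a paginated endpoint could return the rows one page at a time so each response stays under the size limit. A rough sketch only; the endpoint name, file name, and page size are all made up:

```python
import csv
from fastapi import FastAPI

app = FastAPI()
PAGE_SIZE = 500  # rows per response; tune this to stay under the 10 MB limit

@app.get("/rows")
def get_rows(page: int = 0):
    # Read the source CSV and slice out one page of rows
    with open("data.csv", newline="") as f:
        rows = list(csv.DictReader(f))
    start = page * PAGE_SIZE
    chunk = rows[start:start + PAGE_SIZE]
    return {
        "page": page,
        "total_rows": len(rows),
        "rows": chunk,  # a list of JSON objects the GPT can read directly
        "has_more": start + PAGE_SIZE < len(rows),
    }
```

The GPT can then call the Action repeatedly with increasing `page` values until `has_more` is false.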

Hello there…

It seems we’re on the same page regarding the challenge of handling CSV data directly from an API response within the GPT environment. Just like you, I’ve been looking for efficient ways to perform data analysis on such data without the need to manually download and upload CSV files.

From my experience, the current limitation is that GPT interprets API responses, even those in CSV format, as plain text strings. This significantly restricts our ability to leverage libraries like pandas directly within GPT to, for instance, sum values in column X or group by column Y. Instead, GPT attempts to parse and manipulate this data by sequentially assigning it to variables in Python. This approach has two major drawbacks:

  1. It can’t process very large datasets effectively since it’s not actually downloading a file but analyzing a string.
  2. Even with shorter responses, it takes an excessively long time to analyze and process any request made to it.
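
To be concrete about what I mean by leveraging pandas: in principle, the Code Interpreter step could simply wrap the raw response string and hand it to pandas, roughly like this (the column names are made up, just to illustrate the kind of analysis I’m after):

```python
import io
import pandas as pd

# Stand-in for the raw CSV-formatted string that comes back from the Action
csv_text = "region,amount\nnorth,10\nsouth,25\nnorth,7\n"

df = pd.read_csv(io.StringIO(csv_text))
total = df["amount"].sum()                         # e.g. sum values in column X
by_region = df.groupby("region")["amount"].sum()   # e.g. group by column Y
```

In practice, though, the GPT rarely writes this kind of code on its own for long responses, which is exactly why it ends up being so slow and error-prone.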

I’ve tried working around this limitation using JSON, CSV, and XLS formats, but unfortunately, none of these approaches have proven successful.

Does anyone have any ideas or suggestions on how to overcome this challenge? It would be great to find a more direct method to analyze API response data within GPT without the cumbersome process of manual file handling.


Yes, exactly… the GPT can’t directly work with the returned data as variables in Code Interpreter, which makes it very limited; for longer responses it’s basically useless.