Question regarding GPT plugins with Code Interpreter and ZIP files

I know I can upload a zip file to save time when uploading my files. But why does the GPT unzip the files every time? Wouldn't it be more efficient in terms of time and resources to extract and save the contents of the zip file once, instead of unzipping it on each use?


Welcome to the community @Karek

IIRC GPTs don’t/can’t modify the files that are uploaded to their KB.

Why not upload the required files without zipping them?


There are two reasons:

  1. It does not allow me to upload all the files at once. I need to select 9 files each time.
  2. The final upload gives an error stating that the limit is 20 files.

That’s correct. There’s a limit of a maximum of 20 files that can be uploaded to a GPT’s KB.

What’s the average length of files?

Are the files unzipped once per conversation or several times per conversation? I expect the first case, but if not, you might try adjusting the prompt to unzip only once per conversation.
Otherwise you could create an action to pull just the files you need from another server.

Edit: and you could look into concatenating the files into fewer, larger files.


This data is a Telegram group’s exported messages, consisting of approximately 62,689 messages (text only; all media files excluded). The average file size is 1 MB, with a total of 40 files. The combined zipped size is 6.49 MB.

Due to the file limit, I am considering creating an action that makes a Telegram API call to search the group in real time. This would have the benefit of being faster, as the search would use Telegram’s own search system and ensure the data is up to date. Is my solution correct? Or would it be better to wait for OpenAI to increase the file limit?

PS: the current OpenAI process takes ~3-5 minutes unzipping and processing the files on each reply.

I was thinking of establishing a direct connection between my Telegram bot and OpenAI actions. This would make the process faster. What do you think? Has anyone ever tried making API calls from OpenAI actions before? Is it safe to include my API key with OpenAI actions, or will it be visible to customers on the front end when they click to view the analysis process?
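For what it’s worth, the server side of such an action can be quite small. Here is a hedged sketch of the core of it: a search function over exported Telegram messages, which a small HTTP endpoint (the endpoint path and field names below are illustrative assumptions, not anything from Telegram’s or OpenAI’s docs) would expose to the GPT Action. The key point is that the Telegram credentials stay on this server; the GPT only talks to your endpoint.

```python
from dataclasses import dataclass


@dataclass
class Message:
    """One exported Telegram message (fields are an assumption
    based on the typical export format, not a fixed schema)."""
    id: int
    date: str   # ISO date string, so lexicographic sort == chronological
    text: str


def search_messages(messages: list[Message], query: str, limit: int = 20) -> list[Message]:
    """Case-insensitive substring search, newest first.

    In a real backend this function would instead call Telegram's
    own search (e.g. via an MTProto client library) so the results
    stay current rather than frozen at export time.
    """
    q = query.lower()
    hits = [m for m in messages if q in m.text.lower()]
    hits.sort(key=lambda m: m.date, reverse=True)  # newest matches first
    return hits[:limit]
```

A GPT Action would then call something like `GET /search?q=...` on your server, which runs this and returns JSON. Your Telegram API key lives only in that server’s environment, and the Action’s own authentication (configured in the GPT editor) protects the endpoint itself.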


That’s a viable idea.
You can check the docs to get started as this is a common use case.

Yep, this would be the way to go! GPT Actions are designed for exactly this use case: connecting your GPT to an existing API. And it’s safe to include your API key as long as you add it through the authentication settings inside GPT Actions, rather than pasting it into the schema or instructions, since keys stored there aren’t shown to users.

I ran into a similar situation when I wanted to upload a set of around 80 auto-generated C# class definitions. My intent was for GPT to be able to reference them when helping me with the server and clients that depend on this structure.

If I tossed them all in a zip file, it was unable/unwilling to browse the contents, and I had to ask it to look up specific files. I ended up concatenating all the files into a single file (there are plenty of ways to do this via the CLI) and making some adjustments to indicate the start and end of each original file.
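For reference, the concatenation step can be sketched in a few lines of Python. The marker format here is an illustrative choice, not necessarily what the poster used; any unambiguous delimiter works as long as it clearly brackets each original file:

```python
from pathlib import Path


def concatenate(src_dir: str, pattern: str, out_path: str) -> int:
    """Merge all files matching `pattern` into one file, wrapping
    each with BEGIN/END markers so the model can tell where each
    original file starts and stops. Returns the file count."""
    files = sorted(Path(src_dir).glob(pattern))
    with open(out_path, "w", encoding="utf-8") as out:
        for f in files:
            out.write(f"// ===== BEGIN FILE: {f.name} =====\n")
            out.write(f.read_text(encoding="utf-8"))
            out.write(f"\n// ===== END FILE: {f.name} =====\n")
    return len(files)
```

For example, `concatenate("Generated", "*.cs", "all_classes.txt")` would collapse a directory of C# class files into a single upload.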

It now works as desired: it’s able to tell me about things like inheritance and transitive dependencies between the classes, which it had no hope of doing when they were individual files. It can still get confused about complex relationships, and I occasionally have to kick it when it starts omitting stuff, but it comfortably achieves what I’m after.


Thanks for the info.

You can try concatenating the messages into multiple files, as there’s a 2M token limit per file for text files.

You’ll find this FAQ regarding file uploads for GPTs useful.

Adding actions for knowledge fetch would:

  1. Make things slower, albeit with up-to-date info.
  2. Not be able to utilise semantic search over the whole KB. This means your knowledge retrieval will be limited to the search results returned by the Telegram API.