Should I rely on Code Interpreter or Actions for custom code?

I have a piece of code that analyzes data in a very specific way, and I’m trying to decide between running it in a Custom GPT’s Code Interpreter versus hosting it on an external server and having Actions call its API.

I could host it externally: the OpenAPI schema plus the externally hosted code guarantees that the exact same code runs every time (rough sketch below), but then I need to make sure user data is handled carefully on my side.
Code Interpreter keeps all the data within ChatGPT, which is better for privacy, but when I give it the Python code as a file, it randomly tweaks the code and hallucinates new logic into it, which is not what I want.
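
For context, the external option would look something like this on my side. This is only a minimal sketch: the /analyze route, the AnalysisRequest model, and the run_analysis function are placeholders for my actual code. A nice side effect is that FastAPI generates the OpenAPI schema for the Action automatically.

```python
# Minimal sketch of an externally hosted endpoint that an Action would call.
# Placeholder names: /analyze, AnalysisRequest, run_analysis.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="Custom analysis API")

class AnalysisRequest(BaseModel):
    # The data the GPT sends, e.g. rows of a CSV serialized as a list of dicts
    records: list[dict]

class AnalysisResponse(BaseModel):
    summary: dict

def run_analysis(records: list[dict]) -> dict:
    # Placeholder for the actual, fixed analysis logic
    return {"row_count": len(records)}

@app.post("/analyze", response_model=AnalysisResponse)
def analyze(req: AnalysisRequest) -> AnalysisResponse:
    # The same code runs on every call; nothing gets rewritten by the model
    return AnalysisResponse(summary=run_analysis(req.records))

# FastAPI serves the generated OpenAPI schema at /openapi.json,
# which is what the GPT Action configuration consumes.
```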

Am I using Code Interpreter wrong? If so, how can I get it to execute exactly the code I gave it as an attachment, every single time?

I decided to go with an external API endpoint.
Keep in mind this has some cost to you, the developer, for hosting.
Maybe look at Replit; they have a decent offer at around $10/month for a year.
Code Interpreter is tempting but slower and less reliable.


Thanks for sharing your experience! Good to know that I’m not crazy for choosing to keep things on an external service :wink:


In fact, both solutions are feasible. However, keep in mind that if your external API has to process the data, i.e. run the analysis and return the results, sending the entire dataset from a custom GPT to that external API can pose challenges. It may be difficult, or at times impossible, to transmit the whole dataset in a single request.

A potential workaround is to break the data into smaller, manageable chunks and reconstruct it on the external API side, though this approach may not handle large files efficiently.
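
A minimal sketch of what that could look like on the API side. The /upload_chunk and /analyze/{upload_id} routes, the ChunkIn model, and the in-memory store are illustrative assumptions, not anything the Actions framework provides.

```python
# Sketch of a chunked-upload pattern for a GPT Action to call.
# Route names, the ChunkIn model, and the in-memory store are illustrative only.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

# upload_id -> {chunk_index: chunk_text}; use real storage (DB, S3, ...) in practice
UPLOADS: dict[str, dict[int, str]] = {}

class ChunkIn(BaseModel):
    upload_id: str      # chosen by the GPT, groups the chunks of one dataset
    chunk_index: int    # 0-based position of this chunk
    total_chunks: int   # so the server knows when the upload is complete
    data: str           # a slice of the serialized dataset (e.g. CSV text)

@app.post("/upload_chunk")
def upload_chunk(chunk: ChunkIn) -> dict:
    parts = UPLOADS.setdefault(chunk.upload_id, {})
    parts[chunk.chunk_index] = chunk.data
    return {"received": len(parts), "expected": chunk.total_chunks}

@app.post("/analyze/{upload_id}")
def analyze(upload_id: str) -> dict:
    parts = UPLOADS.pop(upload_id, {})
    # Reassemble the chunks in order, then run the fixed analysis code on the result
    full_data = "".join(parts[i] for i in sorted(parts))
    return {"characters_received": len(full_data)}  # placeholder for real analysis output
```

The GPT would call /upload_chunk repeatedly and then /analyze/{upload_id} once all chunks are in; as noted above, this still won’t be pleasant for really large files.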

Thanks @vah.sah for the input. That is indeed a ceiling I’m starting to bump against as I test bigger and bigger datasets. Fortunately, for my current use cases it’s still manageable.

Hi @gmkung! I randomly stumbled upon your question and this thread while I was searching for code interpreter related topics on Google.

I don’t mean to sound like a shill, but we might be building exactly what you’re looking for. It’s called E2B - GitHub - e2b-dev/E2B: Cloud Runtime for AI Agents. We basically give your LLM an open-source cloud computer (a sandbox).
We don’t make any LLM calls ourselves; instead, we focus on the code execution layer for LLMs.

One of the most frequent use cases for our users is building custom code interpreters, and we have dedicated support for that: a new version that behaves pretty much like ChatGPT’s Code Interpreter (provided you prompt the model the right way) - Experimental feature - Stateful code interpreter by jakubno · Pull Request #326 · e2b-dev/E2B · GitHub
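
For a flavor of how it’s used, here’s a rough sketch based on the e2b_code_interpreter Python SDK; exact method names may differ from the experimental version linked above, so treat it as illustrative and check the docs.

```python
# Rough sketch using the e2b_code_interpreter Python SDK (illustrative only;
# check the E2B docs for the exact API of the version you install).
from e2b_code_interpreter import Sandbox

sbx = Sandbox()                         # spins up a fresh cloud sandbox
execution = sbx.run_code(               # runs Python statefully inside the sandbox
    "import statistics\n"
    "statistics.mean([1, 2, 3, 4])"
)
print(execution.text)                   # text of the last expression's result
sbx.kill()                              # shut the sandbox down when done
```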

We’re completely open source. We also have a cloud offering where we charge per usage (how many seconds a sandbox is running), and we give credits to every new user - E2B Docs - Runtime Sandbox for LLM Apps

Let me know if you have any questions, happy to help. Or shoot me an email at “vasek at e2b.dev”.