The “it” you are referring to is likely not the API, as one would gather from the topic title, “please make a code interpreter API”. You are likely talking about ChatGPT, and even about pasting text into ChatGPT, which already has a code interpreter mode available to some users.

1,530 lines of Python code (12,229 tokens), plus system prompt and instructions, analyzed by AI via the API:

The application is structured as a chatbot interface built with the PySide2 library. It consists of several classes and functions that work together to create that interface.

The main class is the ChatbotApp class, which is a subclass of QMainWindow and represents the main window of the application. It contains the UI elements such as the chat history area, input box, and various buttons for interacting with the chatbot.

The ChatbotApp class has functions for initializing the UI, handling user input, sending messages to the chatbot, and updating the chat history. It also includes functions for managing the chat history, such as adding, editing, and deleting messages.

Other classes include JPushButton, which is a subclass of QPushButton with custom styling and functionality, AutoTextEdit, which is a subclass of QPlainTextEdit with additional features for handling user input, and MessageText, which is a subclass of QTextEdit for displaying chat messages.

There are also utility classes such as CustomVBoxLayout and CustomSplitterHandle that provide custom layouts and handle resizing of UI elements.

The application uses signals and slots to handle user interactions and update the UI in response to events. It also includes error handling for API requests and provides options for changing the AI model used by the chatbot.

Overall, the application is designed to provide a user-friendly chatbot interface with features such as message editing, role switching, and customizable AI models.

(The analysis does miss some features and methods, and it also isn’t exactly the “describe the structure” type of response the instruction requested.)

The point of “code interpreter” is to allow the AI to write and execute its own code, providing calculated answers and output beyond the abilities of the language model itself. Or “keep rewriting and testing this function until it returns the right value,” as one possible use case.
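That “keep rewriting and testing” loop can be sketched in a few lines. This is a hypothetical illustration, not OpenAI’s implementation: `ask_model()` is a stand-in for an LLM call, here fed canned answers so the loop logic can run on its own.

```python
# Canned "model" answers: first attempt is wrong, the revision is right.
CANNED = iter([
    "def solution(n):\n    return n + 2",   # first attempt: wrong
    "def solution(n):\n    return n * 2",   # revised attempt: right
])

def ask_model(prompt):
    # Stub for an LLM call; just replays the canned answers above.
    return next(CANNED)

def refine(test, max_rounds=5):
    """Ask for code, run it, and keep asking for fixes until the test passes."""
    code = ask_model("Write solution(n) that doubles n.")
    for _ in range(max_rounds):
        ns = {}
        try:
            exec(code, ns)
            if test(ns["solution"]):
                return code
            feedback = "wrong result"
        except Exception as e:
            feedback = repr(e)
        code = ask_model(f"Fix this code ({feedback}):\n{code}")
    raise RuntimeError("no passing version found")

best = refine(lambda f: f(21) == 42)
print(best)
```

The real loop would feed actual test failures and tracebacks back into the prompt; the structure stays the same.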

And the point of a similar function via the API (which you can already build with your own code) is to extend that to your own pay-per-use chatbot without Alzheimer’s.

In ChatGPT with CI, you can upload your own code or class to the virtual environment’s datastore via the UI for the AI to execute, but the output returned must be in a format the AI-written code shim can understand. Alternatively, the AI can have the whole file printed within the Python environment for its own analysis.

I think that is partially true. Yes, it is indeed a plugin, but perhaps one based on a fine-tuned model that is not publicly available.

“API” is a vague word; an API can be huge or small. I think even a very small API would be very useful. Three things would be on my wish list:

  1. Uploading and downloading files
  2. Using the existing or similar ChatGPT API on the Code Interpreter
  3. Allowing the Code Interpreter to interact with other plugins.

All those things should be technically pretty easy. We can already upload/download files, including code, by hand. Granted, it will not run your uploaded code directly, but it will generate equivalent code and run it.

Since it is a plugin, it should be able to route prompts to other plugins when necessary.

OpenAI states “Disabling internet access limits the functionality of our code sandbox, but we believe it’s the right initial tradeoff. Third-party plugins were designed as a safety-first method of connecting our models to the outside world.” But, enabling internet access for data only, not code, should be safe enough, shouldn’t it?

That’s not unpopular at all. I think more and more people are starting to realize exactly what you’ve stated: that LangChain adds an unnecessary layer of complexity while adding close to zero value.

1 Like

I understand your point of view about LangChain. It’s true that LLMs and the ecosystems around them have grown tremendously this year, and that many of the capabilities LangChain initially enabled are less necessary now. However, I think it’s important to remember that LangChain was a groundbreaking project that made advanced LLM techniques accessible to thousands of people. It was a major step forward in the development of natural language processing, and it paved the way for many of the advances we’re seeing today.

I also think it’s important to consider the use cases for LangChain. Even though LLMs have improved significantly in the past 6 months, there are still many situations where LangChain can be a valuable tool. For example, LangChain can be used to:

  • Build applications that use multiple LLMs.
  • Create applications that require a high degree of customization.
  • Integrate LLMs with external data sources.
  • Build applications that are robust and scalable.

Of course, whether or not LangChain is easy enough to implement or worth the effort to do so is certainly debatable. However, I believe that LangChain still has a lot to offer, and I’m excited to see how it’s used in the future.

2 Likes

Definitely agree that the code interpreter should be brought to the API.

1 Like

Yes, I would love access to the code interpreter via API. It would be great to allow users to upload an Excel file, ask questions about it, generate graphs, etc.

1 Like

Isn’t this literally the concept of an API :wink:

1 Like

I have been tinkering with something similar: GitHub - pgalko/BambooAI: A lightweight library that leverages Language Models (LLMs) to enable natural language interactions, allowing you to source and converse with data.
Very alpha at the moment, but the initial results are encouraging. It runs and executes locally, with internet access.

Just brainstorming, but what if the code sandbox were on the customer’s side, via something like Docker containers?

1 Like
  1. That is a great idea. It’s also something you can already implement pretty much on your own if you were so inclined.
  2. I think OpenAI would be at least somewhat reluctant to facilitate that. There’s a not insubstantial risk in allowing the AI to run code in the wild. With a cloud sandbox, they can at least ensure it’s locked down and can’t hurt anything.

Of course (1) is a great counter-argument to (2), but I think lowering the barrier to entry too much creates the potential for a PR nightmare for OpenAI through misuse or abuse.

But, who knows?

Honestly, I think a great intermediary step would be the ability to link it with an Azure Notebook. (I’d prefer Google Colab but, you know, Microsoft…)

But, sincerely, a great idea! Also, it reduces computational loads for OpenAI, so win-win?

1 Like

Like you said, OpenAI would probably be reluctant to have a sandbox that they don’t control.

With cloud providers, OpenAI can scale up sandboxes on demand, so they can just price the code interpreter API to reflect their cost.

For an open-source implementation of the code interpreter, a Docker-like environment would absolutely make sense.

1 Like

I found some similar ideas:

1 Like

Are you working at OpenAI?

Please tell us whether to wait for a CI API release or to build on top of unofficial solutions…

:slight_smile:

Summary created by AI.

Many users have expressed their enthusiasm for the Code Interpreter (CI) integrated into OpenAI’s GPT-4 and believe that this feature could enhance various applications through logic validation. Participants are eager to see the introduction of an API for CI (@landongarrison, @Primordial, @stevejking, @93INFINITY, etc.), and some have suggested that they would be willing to pay more for this feature (@landongarrison, @Primordial).

There’s also speculation around building one’s own implementation (@landongarrison), including conversation around security concerns and potential sandboxing (@frank_behr). Different tools for code interpretation have been shared, like Langchain (@twtdata, @elmstedt) and Langchain’s Python interpreter (@onhate).

Users are leveraging CI in various ways: executing complex arithmetic operations (@jeevan), analyzing numerous data files (@Primordial), or working on text documents (@stevejking). A few participants have also shared potential open-source implementations such as github/shroominic/codeinterpreter-api (@Shroominic), bambooai (@pgalko), and codesherpa (@eric43).

Nevertheless, without any official communication from OpenAI, the uncertainty over whether to wait for an official API or to rely on unofficial solutions lingers (@elco).

Summarized with AI on Aug 14 2023
AI used: gpt-4-32k

Maybe what we need, concerning CI and the API, is better documentation and examples of how to use function calls with the API.
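As an example of what such documentation could cover: a chat completions request using the mid-2023 “functions” mechanism looks roughly like the sketch below. The `run_python_code` tool shown here is a hypothetical code-execution function, and the actual HTTP/client call is omitted.

```python
# Sketch of a function-calling request body for the chat completions API
# ("functions" style, mid-2023). Only the payload is built; no call is made.
request_body = {
    "model": "gpt-4-0613",
    "messages": [{"role": "user", "content": "What is 17 * 23?"}],
    "functions": [{
        "name": "run_python_code",
        "description": "Executes a Python code string and returns its stdout.",
        "parameters": {
            "type": "object",
            "properties": {
                "code_string": {
                    "type": "string",
                    "description": "Python code to execute.",
                }
            },
            "required": ["code_string"],
        },
    }],
    # "auto" lets the model decide whether to call the function.
    "function_call": "auto",
}
```

When the model decides to call the function, the response carries a `function_call` with the arguments as a JSON string, which your code executes before returning the result in a follow-up message.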

I am also keen to see the Code Interpreter API. Please provide it soon! Thanks.

I recently came to know about Odineye.xyz, which is an open-source API for doing the same thing.

Open source API?

Interesting site. It uses GPT and can graph almost exactly like Code Interpreter does. It doesn’t output any logs or code, though. I was having fun with it, but now it doesn’t load :thinking:

1 Like

ROFL, seems you have broken it.
It’s working now. I used it to get some analysis really quickly. I’m having a discussion with the Odineye team about access to their APIs.

This is my first attempt at a code interpreter using the API.

    {
        "name": "run_python_code",
        "description": "Executes any given Python code string with exec(code_string, {}, {})",
        "parameters": {
            "type": "object",
            "properties": {
                "code_string": {
                    "type": "string",
                    "description": "The Python code string to execute."
                }
            },
            "required": ["code_string"]
        },
        "returns": {
            "type": "object",
            "properties": {
                "function_response": {
                    "type": "string",
                    "description": "The redirected stdout result from the exec function."
                },
                "details": {
                    "type": "string",
                    "description": "Exception details in case of an error."
                }
            }
        }
    }

Here is the function. It is part of a larger codebase.

    import io
    import sys

    import jsonpickle

    def run_python_code(kwargs):
        try:
            response = {}

            rpc_globals = {}
            rpc_locals = {}

            code_string = kwargs.get("code_string", "function_response = 'No code given.'")

            # Redirect stdout so that print() output from the executed code is captured.
            original_stdout = sys.stdout
            new_output = io.StringIO()
            sys.stdout = new_output

            try:
                exec(code_string, rpc_globals, rpc_locals)
                function_response = new_output.getvalue().strip()
            finally:
                sys.stdout = original_stdout

            if function_response:
                response["function_response"] = function_response

            response_string = jsonpickle.encode(response)
        except Exception as e:
            response = {
                "function_response": "Error executing code.",
                "details": str(e)
            }
            response_string = jsonpickle.encode(response)

        return response_string

jsonpickle is a somewhat more powerful replacement for json.

I’m using this system prompt:

You are a python code interpreter.
Results of the code or any formula must always be output with print().
With the help of the run_python_code function, you can execute any Python code, including code that displays graphical elements.
You do not produce code that could harm the user or their hardware.
You describe the code before you execute it.
Example:
U = R * I
U = 10 Ohm * 1 A
print(U)

Conversation:

# user #: Calculate the resistance of a 10m copper cable with a cross-sectional area of 4mm^2.

# assistant #: To calculate the resistance of a copper cable, we use the formula:

R = ρ * (L/A)

where:

  • R is the resistance,
  • ρ is the resistivity of the material (for copper it’s 1.68 x 10^-8 Ω.m),
  • L is the length of the wire,
  • A is the cross-sectional area of the wire.

Given:

  • L = 10m,
  • A = 4mm^2 = 4 x 10^-6 m^2 (since 1mm^2 = 10^-6 m^2).

Let’s calculate the resistance.

# run_python_code (request) #:
{

"code_string": "rho = 1.68 * 10**-8\nL = 10\nA = 4 * 10**-6\nR = rho * (L/A)\nprint(R)"

}

# run_python_code (function) #: {"function_response": "0.041999999999999996"}

# assistant #: The resistance of a 10m copper cable with a cross-sectional area of 4mm^2 is approximately 0.042 Ohms.

For smaller code snippets, it works quite well. I’ve built in a confirmation query before each execution so that one can review the code and possibly decline its execution.

I consider this more of a study, because exec has some significant drawbacks. Using exec ties the execution to the program in which it runs, so one would have to lock the entire application in a sandbox to avoid problems. The use of the globals and locals parameters is also a bit tricky, especially when GPT thinks it needs to create functions. At least one learns a lot about exec if one didn’t already have this knowledge. :slight_smile:
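The globals/locals trickiness can be shown in a small sketch: when exec is given two separate dicts, a function defined by the executed code cannot see the other top-level names it was defined next to, because lookups inside the function go to the (empty) globals dict while the names landed in the locals dict.

```python
code = """
x = 1
def f():
    return x
result = f()
"""

# With separate globals and locals dicts, calling f() fails:
g, loc = {}, {}
try:
    exec(code, g, loc)
    outcome = "ok"
except NameError:
    outcome = "NameError"   # x went into loc, but f() looks it up in g

# Passing a single dict as both globals and locals avoids the problem:
ns = {}
exec(code, ns)
# ns["result"] is now 1
```

This is why model-generated code that defines helper functions tends to break under `exec(code_string, rpc_globals, rpc_locals)` but works with a single shared namespace.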

In the next step, I want to use a Jupyter kernel. It could be locked away separately in a Docker container. But I still need to familiarize myself with it a bit.
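As an intermediate step before a full Jupyter kernel, the execution could be moved into a child Python process. This is only a sketch of the process boundary, not the Jupyter approach itself; for real isolation the child process would still need to be wrapped in a container.

```python
import subprocess
import sys

def run_in_subprocess(code_string, timeout=10):
    """Run a code string in a separate Python interpreter and capture stdout."""
    proc = subprocess.run(
        [sys.executable, "-c", code_string],
        capture_output=True, text=True, timeout=timeout,
    )
    if proc.returncode != 0:
        # Mirror the error shape of the exec-based version above.
        return {"function_response": "Error executing code.",
                "details": proc.stderr.strip()}
    return {"function_response": proc.stdout.strip()}

result = run_in_subprocess("print(6*7)")  # {'function_response': '42'}
```

Unlike exec, a crash or `sys.exit()` in the generated code cannot take down the host application, and the timeout puts a bound on runaway loops.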

1 Like