Why can't the Code Interpreter and Plugins be used together?

Has anybody seen a blog or something that describes why the Code Interpreter and Plugins cannot be used at the same time?

The use cases for using them together are too many to count. I don't think OpenAI would have missed the idea. So I just want to know their product roadmap and the rationale for releasing the Code Interpreter as an independent product for now.

Because they are two entirely different models.

Each has been trained to be able to do what it can do.

OpenAI would need to train yet another model which is able to invoke the Code Interpreter and read plugin manifests to be able to reach those API endpoints.

I'm not saying it cannot be done, but it's not likely to be a high-priority item, as both are independent beta products that OpenAI is using for testing.


I think the chance is low that they are actually different models; they are probably just routed differently.

Demonstration

model: gpt-3.5-turbo
System prompt: a duplicate of the exposed CI text, modified to accommodate the backend's function insertion
Function: a disabled function description; some parseable function-call value is still required so the model selector routes you to a function-capable AI and its internal function-rewriting endpoint. Leave the function out of the API request and you get back "I'll write some code" with no code (still recognized and filtered, though).
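
Here is a minimal sketch of that setup, assuming the 2023-era openai Python library (v0.x); the system prompt wording and the "disabled" placeholder function are my own stand-ins, not OpenAI's actual internal values:

# Minimal sketch of the demonstration setup (2023-era openai v0.x).
# The system prompt text and the "disabled" placeholder function are
# assumptions, not OpenAI's actual internal values.
import openai

openai.api_key = "sk-..."

# Stand-in for the exposed Code Interpreter system prompt:
system_prompt = (
    "You are ChatGPT with a python tool. When computation is needed, "
    "send code to the python function; it runs in a stateful Jupyter "
    "environment and returns the result of the last expression."
)

# A parseable but disabled function definition. Its only job is to get
# the request routed to a function-capable model endpoint.
disabled_function = {
    "name": "disabled",
    "description": "Do not call this function.",
    "parameters": {"type": "object", "properties": {}},
}

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "I'd like to get a calculation of "
         "Fibonacci sequence: the 100-110th elements in the sequence, "
         "and their sum."},
    ],
    functions=[disabled_function],
)

print(response["choices"][0]["message"])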


user: “I’d like to get a calculation of Fibonacci sequence: the 100-110th elements in the sequence, and their sum.”

return:

{
  "id": "chatcmpl-999999999999",
  "object": "chat.completion",
  "created": 1234567890,
  "model": "gpt-3.5-turbo-0613",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": null,
        "function_call": {
          "name": "python",
          "arguments": "def fibonacci_sequence(start, end):\n    sequence = [0, 1]\n    for i in range(2, end+1):\n        sequence.append(sequence[i-1] + sequence[i-2])\n    return sequence[start:end+1]\n\nstart = 100\nend = 110\nfibonacci_numbers = fibonacci_sequence(start, end)\nfibonacci_sum = sum(fibonacci_numbers)\n\nfibonacci_numbers, fibonacci_sum"
        }
      },
      "finish_reason": "function_call"
    }
  ]
}


user: create a 25% pie chart graphic

  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": null,
        "function_call": {
          "name": "python",
          "arguments": "import matplotlib.pyplot as plt\n\n# Data for the pie chart\nsizes = [25, 75]\ncolors = ['green', 'white']\n\n# Create the pie chart\nfig, ax = plt.subplots(figsize=(2, 2), dpi=100)\nax.pie(sizes, colors=colors, startangle=90, wedgeprops={'edgecolor': 'white'})\n\n# Remove labels and ticks\nax.set_xticks([])\nax.set_yticks([])\nax.axis('off')\n\n# Save the chart as an image\nchart_path = '/mnt/data/pie_chart.png'\nplt.savefig(chart_path, bbox_inches='tight', pad_inches=0)\n\nchart_path"
        }
      },
      "finish_reason": "function_call"

So gpt-3.5-turbo is now my code interpreter. Actually, I need to run its code now.
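
For completeness, a rough sketch of what "running its code" could look like, reusing the response from the sketch above. This exec's untrusted model output raw; in practice it belongs in a sandbox:

# Execute the returned function_call arguments and, like Code
# Interpreter, report the value of the trailing expression. This runs
# untrusted model output directly; a real deployment would sandbox it
# (container, restricted interpreter, resource limits).
import ast

def run_python_tool(code: str):
    tree = ast.parse(code)
    namespace = {}
    if tree.body and isinstance(tree.body[-1], ast.Expr):
        # Run everything except the trailing expression...
        body = ast.Module(body=tree.body[:-1], type_ignores=[])
        exec(compile(body, "<tool>", "exec"), namespace)
        # ...then evaluate the trailing expression as the result.
        last = ast.Expression(body=tree.body[-1].value)
        return eval(compile(last, "<tool>", "eval"), namespace)
    exec(compile(tree, "<tool>", "exec"), namespace)
    return None

# The demonstration's "arguments" field is raw Python, not JSON:
args = response["choices"][0]["message"]["function_call"]["arguments"]
print(run_python_tool(args))  # the Fibonacci slice and its sum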

The method by which confirmation files like .png are displayed to the user when a file location is generated would need to be discovered by a code interpreter user, and that's where your API guessing might fail you.

By risking your ChatGPT Plus account a bit, one could possibly also stimulate the revelation of plugin inputs and outputs in the language the AI actually sees.

Why can't they be "used together"? Because ChatGPT is told not to. Because only one function is called at a time, and there are few prompts that could call both, even iteratively; instead you'd get unpredictable plugin calling or answering by Python coding (and yes, only Python). I can pass a valid function along with python and simply confuse the AI about which one is appropriate, as in the sketch below.
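
A hypothetical illustration of that ambiguity, continuing the sketch above: offer the model one real function next to the CI-style prompt and give it a request that plausibly needs both, and it still names at most one function per turn (get_weather here is made up):

# Hypothetical: one real function alongside the CI-style system
# prompt. The prompt below plausibly needs both a data fetch and a
# plot, yet the model emits at most one function_call per turn.
weather_function = {
    "name": "get_weather",
    "description": "Get the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Chart today's weather in Paris."},
    ],
    functions=[weather_function],
)

message = response["choices"][0]["message"]
# Either "get_weather" or ad-hoc "python", unpredictably; never both:
if message.get("function_call"):
    print(message["function_call"]["name"])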


The other reason, and this one seems obvious to me, is cost. The code interpreter generates a venv directory to work within for each chat, and that virtual drive/compute space is sandboxed to prevent it from reaching outside its bottled environment. They want to prevent the web UI from being used by millions of copy-pasters running plugin-enhanced AutoGPT loops with the power of the code interpreter on "free cloud compute space," as that would be pricey.

Hi
Where did you get the idea that they are using different models on different datasets?
I don't think that's the case, so I would love to learn more about that.

@AIdeveloper
You can replicate and ultimately best what CI can do with the plugins model, or with the API and function calling.
All it takes is a good system prompt and some pretty simple, straightforward Python code, like the round trip sketched below.
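
For instance, here is a sketch of that round trip, reusing system_prompt, disabled_function, and run_python_tool() from the earlier sketches in this thread (all stand-ins, not OpenAI internals): run the model's python call locally, feed the result back as a function message, and let the model narrate it.

# Code-Interpreter-style round trip with the API, reusing the
# stand-in system_prompt, disabled_function, and run_python_tool()
# defined in the sketches above.
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "Sum of the first 20 prime numbers?"},
]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=messages,
    functions=[disabled_function],
)
message = response["choices"][0]["message"]

if message.get("function_call"):
    # Run the model's code locally, then hand the result back as a
    # "function" message so the model can answer in natural language.
    result = run_python_tool(message["function_call"]["arguments"])
    messages.append(message)
    messages.append({
        "role": "function",
        "name": message["function_call"]["name"],
        "content": repr(result),
    })
    final = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-0613",
        messages=messages,
        functions=[disabled_function],
    )
    print(final["choices"][0]["message"]["content"])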

I don't quite know what you mean about different datasets, but they specifically refer to Browsing and Code Interpreter as "experimental model[s]" in the blog post announcing them.