Which model does code interpreter use?

Which model does OpenAI’s newly released code interpreter use? Is it GPT-4 or a variant? Is it available via API?


Welcome to the forum!

It uses GPT-4 and has a number of support functions around it; you can build agents that perform similar tasks via the API.
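If it helps, here is a rough sketch of what an agent-style call could look like with the 2023-era function-calling API in the openai Python package (0.x style): the model asks a `run_python` tool to execute a snippet, loosely mimicking what code interpreter does. The model name, the `run_python` helper, and the exec-based sandbox are illustrative assumptions, not OpenAI's actual implementation.

```python
# Rough sketch only: one round of function calling with the openai 0.x client.
import json
import openai

openai.api_key = "sk-..."  # your API key

def run_python(code: str) -> str:
    """Toy 'sandbox': runs the snippet and returns its `result` variable.
    Do not exec() untrusted model output like this outside of a demo."""
    scope = {}
    exec(code, scope)
    return str(scope.get("result", "done"))

functions = [{
    "name": "run_python",
    "description": "Execute a short Python snippet; put the answer in a variable named `result`.",
    "parameters": {
        "type": "object",
        "properties": {"code": {"type": "string"}},
        "required": ["code"],
    },
}]

messages = [{"role": "user", "content": "Use Python to compute the mean of [1, 4, 7]."}]
resp = openai.ChatCompletion.create(
    model="gpt-4-0613", messages=messages,
    functions=functions, function_call="auto",
)
msg = resp["choices"][0]["message"]

if msg.get("function_call"):
    # The model asked to run code: execute it and send the result back.
    args = json.loads(msg["function_call"]["arguments"])
    tool_output = run_python(args["code"])
    messages += [msg, {"role": "function", "name": "run_python", "content": tool_output}]
    final = openai.ChatCompletion.create(model="gpt-4-0613", messages=messages)
    print(final["choices"][0]["message"]["content"])
else:
    print(msg["content"])
```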


I wonder if someone can help me figure out why I am seeing GPT-3 as the model when using code interpreter. I noticed this when the model refused to do sentiment analysis, saying it couldn’t.

I start a new chat on chat.openai.com, pick GPT-4 and then code interpreter, and ask what model I am chatting with. It says the following:

“I am based on the GPT-3 model, trained by OpenAI. The exact version or iteration of the model isn’t specified beyond that. Please note that my training data includes information up until September 2021, and I don’t have the ability to access or retrieve real-time data or updates post that date.”

I see a plus button at the bottom of the screen for uploading files, and I can analyze CSV files using the code interpreter feature. I would really like to use GPT-4 functionality, so I would appreciate it if someone could point out what I am doing wrong.

The training data has a cut-off date of September 2021, and GPT-4 was built after that point, so there is no reference to it in the data. Asking the model what version it is is unreliable.


If you really want to know, banana test it.

“Today I have four bananas. Yesterday, I ate two. How many bananas do I have now?”

GPT-4 can get this right.


An interesting sentiment analysis service can be found here.

Google Cloud Natural Language offers it too.

I’ve found it’s worth using a mix of APIs in a single system: NER from one, keywords from another, a couple of summaries, etc. Keeps them on their toes :wink:
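For the Google Cloud piece, something like this rough sketch works (assuming the google-cloud-language package is installed and credentials are already configured via GOOGLE_APPLICATION_CREDENTIALS; the sample text is just a placeholder):

```python
# Rough sketch: score a single piece of text with Google Cloud Natural Language.
from google.cloud import language_v1

def sentiment_score(text: str) -> float:
    client = language_v1.LanguageServiceClient()
    doc = language_v1.Document(content=text, type_=language_v1.Document.Type.PLAIN_TEXT)
    # document_sentiment.score runs roughly from -1.0 (negative) to 1.0 (positive)
    return client.analyze_sentiment(request={"document": doc}).document_sentiment.score

print(sentiment_score("The delivery was late, but support sorted it out quickly."))
```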

I’m sorry, I wasn’t as clear as I should have been. My issue isn’t that it reports as 3 vs. 4; that just shows what I was facing. The real issue is that it refuses to do sentiment analysis because it says it is GPT-3. I only touched on that briefly in my previous post, so sorry about that.

Here’s an example from my interaction in a code interpreter session:

I’m sorry for any confusion, but as of my last update in September 2021, I am an instance of ChatGPT-3 and don’t have the ability to perform complex natural language processing tasks like sentiment analysis in this environment.

So, given that it refuses to do sentiment analysis, what options do I have? Do other people see this issue, or is it something specific to my environment/account?

Thanks for all the replies.

I love this test. I opened a new code interpreter chat and typed exactly what you said. Here is the response:

You have four bananas today and you ate two yesterday. The bananas you ate yesterday would not affect the number of bananas you have today.

So, you still have four bananas today.

So I think it is clear that this is 3, not 4. Very easy to repro.

What was the prompting that led up to that reply, and what were the intermediate replies? Do you have a chat link you would be willing to share?

I just started a new chat, loaded my CSV file, asked the following, and got the response below:

Please do sentiment analysis of this data


Sentiment analysis is typically done using a pre-trained model. One common approach is to use models such as Vader from NLTK (Natural Language Toolkit) or TextBlob. However, these models may not be available in this environment.
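For reference, VADER and TextBlob, the libraries that reply names, can be run locally outside of ChatGPT. A rough sketch of the VADER route follows, assuming the CSV has a column literally named text; both the file name and the column name are placeholders:

```python
# Rough local workaround: score each row of a CSV with NLTK's VADER.
import pandas as pd
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon")            # one-time lexicon download
df = pd.read_csv("my_data.csv")           # placeholder file name
sia = SentimentIntensityAnalyzer()

# VADER's compound score is in [-1, 1]; > 0 leans positive, < 0 leans negative
df["sentiment"] = df["text"].astype(str).map(lambda t: sia.polarity_scores(t)["compound"])
print(df[["text", "sentiment"]].head())
```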

So, combined with the banana test, I think this shows the model is not 4?

That’s the correct answer, which points to GPT-4.

GPT-3.5 will give an incorrect answer: it subtracts the bananas eaten yesterday from the number you have today.

You got outsmarted by a computer.


That’s great!! Lol. Not the first time and definitely not the last time.

OK, so I am using GPT-4, but it refuses to do sentiment analysis of my data. Any ideas how I can do that? My CSV is too large to paste it all into the context, and I don’t have GPT-4 API access yet. Any ideas would be greatly appreciated.

You should start a new topic under Prompting, because this topic has now been solved. Include as much of your prompt and data as you can.

The system prompt of code interpreter doesn’t state the model either. Perhaps they gave up on trying to convince the AI it is something it can find no evidence of in its training data.

Great advice. Thank you to all who helped with this and got me on track.
