API - Code Interpreter - Doubts about data security and Privacy

Hi,

I’m trying to figure out the best approach to using GPT-4 for my work, and I’m currently experimenting with both the web version and the API. I successfully replicated the behavior of my custom GPT with the API: an assistant specialized in analyzing feasibility studies, with a knowledge base built from an Excel file, that uses the code interpreter.
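For reference, this is roughly the kind of setup I mean (a minimal sketch assuming the Assistants API v2 Python SDK; the file name, model, and instructions are placeholders):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload the knowledge-base spreadsheet. This is the step that sends
# the file to OpenAI's storage.
kb_file = client.files.create(
    file=open("feasibility_data.xlsx", "rb"),
    purpose="assistants",
)

# Create an assistant that can run the code interpreter against the file.
assistant = client.beta.assistants.create(
    name="Feasibility Study Analyst",
    instructions="Analyze the feasibility data in the attached spreadsheet.",
    model="gpt-4-turbo",
    tools=[{"type": "code_interpreter"}],
    tool_resources={"code_interpreter": {"file_ids": [kb_file.id]}},
)
```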

What concerns me, from a security standpoint, is the requirement to upload the .xlsx file to OpenAI’s environment. I’d prefer not to expose company data in storage I can’t control.

Is there a different approach to keep the data in-house?

Can anyone suggest an alternative method?

Thank you very much for your kind interest.

Not really, no.

You cannot simultaneously give the model access to the data and not give the model access to the data.

The only way to have an LLM access your local data and have that data remain local is to run the LLM locally, which is not something you can do with GPT-4.
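To illustrate what the local route looks like in practice (a hedged sketch, not GPT-4: it assumes an open-weight model served behind an OpenAI-compatible endpoint, e.g. a local Ollama install, and the URL, model name, and file name are placeholders):

```python
from openai import OpenAI
import pandas as pd

# Point the same SDK at a local, OpenAI-compatible server instead of
# api.openai.com, so the spreadsheet contents never leave your machine.
# URL and model name are examples for a local Ollama install.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

# There is no hosted code interpreter locally, so you read the spreadsheet
# yourself and pass the relevant rows to the model as plain text.
df = pd.read_excel("feasibility_data.xlsx")

response = client.chat.completions.create(
    model="llama3",
    messages=[{
        "role": "user",
        "content": "Summarize the key figures in this feasibility data:\n"
                   + df.head(20).to_csv(index=False),
    }],
)
print(response.choices[0].message.content)
```

That keeps the data in-house, at the cost of giving up GPT-4 and the hosted code interpreter.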

Thanks for the feedback. Unfortunately, OpenAI doesn’t display any clear notice about how uploaded files are used (or at least I can’t find one). In my opinion, that’s a serious limitation.

It’s all in the policy documents.

Thanks Jake, I will analyze them.

What’s your interpretation of this part described in the ‘Europe Privacy Policy’?

User Content: When you use our Services, we collect Personal Data that is included in the input, file uploads, or feedback that you provide to our Services (“Content”).

In my use case, I’m uploading Excel files to perform data analysis. Does OpenAI collect them?

Of course they do.

They could not provide you the service of having the model reference the contents of the file if they did not collect the data first.

You should read sections 8, 9, and 11. They will give you more information.
