File Object without uploading to OpenAI

Is it possible to convert a dataset loaded in-memory to an OpenAI file object, and allow an assistant to access the file in messages without uploading this file to OpenAI? The idea is to keep data on an internal database and never upload to OpenAI for increased security.

No, I don’t believe so.

Why not use a function callback from the LLM and implement your search in local code?


Just to make sure I understand, you are suggesting to use Function Calling to access the data I need when prompted, and pass that output to the assistant run?

Will code interpreter be able to use this output and generate / execute code against it? The goal is to use the assistant to analyze data (e.g. basic operations in Pandas).

That’s correct.

You can experiment with different formats of the results set and send this to the assistant together with a prompt for how you’d like the assistant to process the data.
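For example, one common format is a compact JSON string built from the local query result. A minimal sketch (the function name, field names, and row cap here are hypothetical, not from any SDK):

```python
import json

def format_tool_output(rows, max_rows=200):
    """Serialize a local query result (a list of dicts) into a compact
    JSON string suitable for returning to the assistant as a
    function-call output. Caps the row count so the message stays
    within reasonable size limits."""
    payload = {
        "row_count": len(rows),
        "truncated": len(rows) > max_rows,
        "rows": rows[:max_rows],
    }
    return json.dumps(payload, separators=(",", ":"))

# Hypothetical rows fetched from an internal database
rows = [{"id": 1, "amount": 9.5}, {"id": 2, "amount": 12.0}]
print(format_tool_output(rows))
```

Keeping the serialization in one helper makes it easy to experiment with other shapes (CSV text, markdown tables) without touching the rest of the pipeline.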


That is great to know. If I needed to load a particular dataset depending on the situation, is there a way to guarantee the function loads the correct one? Or at least raise an error if it does not load the correct one?

To make this concrete, let’s say I have User A and User B, and I want to enable User A to ask questions of dataset A, and User B to ask questions of dataset B. In my function I’ll have to have some logic that says, “This is User A, so let’s load in dataset A”.

I want to be able to ensure the correct loading of the dataset, given I know the relationship between User and dataset in advance.


In your function you might want to take a parameter that tells you which user it is being called for.
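Since you know the user-to-dataset relationship in advance, you can keep that mapping server-side and fail loudly on any mismatch. A minimal sketch, with hypothetical names throughout:

```python
# Hypothetical mapping, known in advance and kept server-side,
# never exposed to the model.
USER_DATASETS = {
    "user_a": "dataset_a",
    "user_b": "dataset_b",
}

def load_dataset_for(user_id):
    """Resolve which dataset a user may query. Raises instead of
    falling back, so the wrong dataset can never be loaded silently."""
    try:
        return USER_DATASETS[user_id]
    except KeyError:
        raise PermissionError(f"No dataset registered for user {user_id!r}")
```

Because the user ID comes from your own authenticated session rather than from the model, the guarantee does not depend on the assistant passing the right argument.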

You can then upload your specific results dataset from within your code to a Thread using this endpoint:

Get the ID and then refer to it in your next message.


You could respond with a large message that includes the data result in, e.g., JSON format and see how well the language model processes it.


Ah yeah, so I am trying to avoid uploading a file to OpenAI due to privacy concerns. I will try returning JSON from my function and see how well the assistant can process it.

Will the assistant automatically be able to use code interpreter after being given the data?
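For reference, the JSON result is handed back to the run as a tool output. A hedged sketch of shaping that payload, assuming the Assistants API's submit-tool-outputs format (a list of objects with `tool_call_id` and a string `output`); the helper name and sample data are hypothetical:

```python
import json

def build_tool_outputs(tool_call_id, result):
    """Shape a local result into the tool_outputs payload accepted by
    the Assistants API's submit-tool-outputs call. The output field
    must be a string, so the result is JSON-encoded."""
    return [{"tool_call_id": tool_call_id, "output": json.dumps(result)}]

# This list would then be passed as the tool_outputs argument of
# client.beta.threads.runs.submit_tool_outputs(...)
outputs = build_tool_outputs("call_abc", {"rows": [{"id": 1}]})
```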

Note that you are still sharing data with OpenAI either way; passing it in messages just makes it less persistent than an uploaded file.

Yes, I believe you could prompt it to do that, see:

Thanks for all your help! I was hoping the data being less persistent would be better security-wise, although I suppose OpenAI will be storing it for some time either way.


This might be worth a read: