How can I handle authorization in the Assistants API?

I developed a chat application using the Assistants API. I have a large data set that I converted to JSON and pass to GPT, which writes Python code using pandas to produce results for the scenarios I want.

My problem is that I cannot split the main data, and I do not want some users to be able to access sensitive information. How can I do that?

I guess you would just not include the private/sensitive data? Or can you not split it? I don’t believe a model would reliably keep part of its input secret…
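To illustrate that idea: enforce access control *before* the data ever reaches the model, by filtering the DataFrame per user role and only serializing the permitted view to JSON. This is a minimal sketch with made-up column names and a hypothetical `ROLE_COLUMNS` mapping; adapt it to your own schema and roles.

```python
import pandas as pd

# Hypothetical role-to-columns mapping -- replace with your own schema/roles.
ROLE_COLUMNS = {
    "admin":   ["name", "email", "salary"],
    "analyst": ["name", "salary"],
    "viewer":  ["name"],
}

def filter_for_role(df: pd.DataFrame, role: str) -> pd.DataFrame:
    """Return only the columns the given role is allowed to see.

    Unknown roles get an empty view, so the default is deny.
    """
    allowed = ROLE_COLUMNS.get(role, [])
    return df[[c for c in df.columns if c in allowed]]

# Example data standing in for the "large data set" in the question.
df = pd.DataFrame({
    "name":   ["Ada", "Linus"],
    "email":  ["ada@example.com", "linus@example.com"],
    "salary": [120000, 110000],
})

# Serialize only the permitted view to JSON before handing it to the assistant;
# sensitive columns never appear in the model's input at all.
viewer_json = filter_for_role(df, "viewer").to_json(orient="records")
```

The key design point is that the model never sees data the user is not entitled to, so there is nothing for it to leak, no matter how it is prompted.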