Need an architecture for safely integrating ChatGPT into the enterprise architecture of a financial services firm. The firm wants to offer a single app interface that users can query via an LLM, where the data used to produce an answer is sourced both from OpenAI's foundation model and from the firm's internal, proprietary data. The distinction between these two data sources should be transparent to the user, and a single, unified LLM answer should be returned.
How can ChatGPT's model learn from and be trained on the internal company data without that data being exposed to the rest of the world?
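One common pattern for this is retrieval-augmented generation (RAG): instead of training the hosted model on proprietary data, the internal documents stay in-house, and only a small retrieved snippet relevant to each query is included in the prompt. Below is a minimal, hedged sketch of that idea; the document store, the keyword-overlap retrieval, and the document IDs are all toy placeholders (a production system would use a vector index and access controls), and the final API call is shown only as a comment:

```python
# Toy in-house document store; in practice this would be a secured
# vector database, and documents would never leave the firm's boundary.
INTERNAL_DOCS = [
    {"id": "policy-042", "text": "Wire transfers above $50,000 require dual approval."},
    {"id": "faq-007", "text": "Client onboarding takes 3 business days on average."},
]

def retrieve(query: str, docs=INTERNAL_DOCS, k: int = 1):
    """Toy keyword-overlap ranking; a real system would use embeddings."""
    q_terms = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_terms & set(d["text"].lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str) -> str:
    """Combine retrieved internal context with the user's question
    so the LLM answers from both its general training and firm data."""
    context = "\n".join(f"[{d['id']}] {d['text']}" for d in retrieve(query))
    return (
        "Answer using both your general knowledge and the internal context below.\n"
        f"Internal context:\n{context}\n\n"
        f"Question: {query}"
    )

prompt = build_prompt("What approvals do wire transfers need?")
# Only this assembled prompt (question + retrieved snippet) would be sent
# to the hosted model, e.g. via the OpenAI chat completions endpoint:
# client.chat.completions.create(model="...", messages=[{"role": "user", "content": prompt}])
print(prompt)
```

The key privacy property is that the base model is never fine-tuned on the proprietary corpus; only the minimal context needed for each answer crosses the API boundary, and that exposure can be further reduced with an enterprise agreement covering data retention.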
Welcome @timminer
Hi, I am looking to solve the same problem. Were you able to find a solution?