Role-based access in LLMs

Hi,

I’m using the GPT-4 API to develop a chatbot. The chatbot operates with two LLMs: one is trained on specific documents, and the other is the default GPT-4 LLM. When a user asks a question that the first LLM can’t address, the chatbot falls back to the standard GPT-4 LLM for a broader response.

Now, I want to introduce new client-specific data. I want these LLMs to provide answers based on this data, but only after authenticating the user as belonging to a specific organization. If the user is verified as being from Company 1, responses should be sourced from Company 1’s data. Conversely, if the user is from Company 2, responses should come from Company 2’s data. Has anyone worked with something similar? Any ideas or help would be appreciated!

Assuming you are using RAG and LangChain, one way is to initialize two different retrieval chains, one per company.

A second way is to use the same chain, with a different retriever set at query time.

Long story short, you will have to point to the right retriever for the user asking the question.
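
Here is a minimal sketch of that second approach, assuming a recent LangChain setup with the OpenAI integrations and FAISS as the vector store. The company IDs, placeholder documents, and the `ask()` helper are illustrative only, not the original poster's setup.

```python
# Sketch only: assumes langchain, langchain-openai, langchain-community, faiss-cpu.
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_community.vectorstores import FAISS
from langchain.chains import RetrievalQA

embeddings = OpenAIEmbeddings()
llm = ChatOpenAI(model="gpt-4")

# One vector store (and therefore one retriever) per company.
# The document lists here are placeholders for each company's indexed data.
retrievers = {
    "company_1": FAISS.from_texts(["Company 1 docs..."], embeddings).as_retriever(),
    "company_2": FAISS.from_texts(["Company 2 docs..."], embeddings).as_retriever(),
}

def ask(question: str, company_id: str) -> str:
    # Pick the retriever for the authenticated user's company at query time,
    # then build the chain around it.
    retriever = retrievers[company_id]
    chain = RetrievalQA.from_chain_type(llm=llm, retriever=retriever)
    return chain.invoke({"query": question})["result"]
```

The same idea works with two pre-built chains instead (the first approach): build one `RetrievalQA` per company up front and look up the chain, rather than the retriever, by company ID.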

Yes. I have an SMS responder that responds based on the sender’s phone number, and an email responder that uses email confirmation to verify identity. Once that’s done and the user is identified, access to information is restricted to what is allowed in the user’s profile.
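
For the identity-to-data mapping described above, something like the lookup below can gate which company's retriever is used. The `user_profiles` table and `resolve_company()` helper are made-up placeholders, and `ask()` refers to the sketch in the earlier reply; how you actually verify the phone number or email is outside this snippet.

```python
# Sketch: map a verified identity (phone number or confirmed email) to a company,
# then restrict retrieval to that company's data.
user_profiles = {
    "+15551234567": {"company_id": "company_1"},        # SMS sender number
    "alice@example.com": {"company_id": "company_2"},   # confirmed email address
}

def resolve_company(identity: str) -> str | None:
    # Returns the company the identity belongs to, or None if unknown.
    profile = user_profiles.get(identity)
    return profile["company_id"] if profile else None

company_id = resolve_company("+15551234567")
if company_id is not None:
    # Only answer from the data the user's profile allows.
    answer = ask("What is the leave policy?", company_id)
else:
    answer = "Sorry, I couldn't verify your organization."
```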