Any way to integrate GPT locally to avoid security issues?

Hi, is there a way I can integrate GPT into my application? I'm dealing with health care data, so I'm facing security issues. Can anyone help me find a solution for this? Is there a way I can set up GPT locally?


Welcome to the forum!

Note: I have no experience with the following suggestion; I mention it only because it is not widely known and is something I would look into given such a need.

We have a script running which encodes the data before sending it to GPT and then decodes the information when it comes back. This might be something you can look into.

The input is a JSON file where we encode all the values, and the text response we get back is then decoded.
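Roughly, the idea looks like this (a minimal Python sketch; the field names, placeholder scheme, and the stand-in for the actual GPT call are illustrative, not our production script):

```python
import json
import uuid

SENSITIVE_KEYS = {"patient_name", "ssn"}  # hypothetical sensitive fields

def encode_values(record: dict) -> tuple[dict, dict]:
    """Swap sensitive values for opaque placeholders; keep a reverse map."""
    mapping, encoded = {}, {}
    for key, value in record.items():
        if key in SENSITIVE_KEYS:
            token = f"TOK_{uuid.uuid4().hex[:8]}"
            mapping[token] = value
            encoded[key] = token
        else:
            encoded[key] = value
    return encoded, mapping

def decode_text(text: str, mapping: dict) -> str:
    """Restore the original values in the text that comes back from the model."""
    for token, value in mapping.items():
        text = text.replace(token, str(value))
    return text

record = {"patient_name": "Jane Doe", "ssn": "123-45-6789",
          "note": "follow-up in 2 weeks"}
encoded, mapping = encode_values(record)
prompt = "Summarise this record:\n" + json.dumps(encoded)
# send `prompt` to GPT here; the model only ever sees the placeholders
fake_response = f"Patient {encoded['patient_name']} needs a follow-up in 2 weeks."
print(decode_text(fake_response, mapping))  # -> "Patient Jane Doe needs ..."
```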

If the volume of traffic is there, you can opt for a private instance hosted on Azure infrastructure, but it will not be "local" as in your own datacentre. Details here


My primary goal is to get SQL queries from natural language, but the query produced should correspond to the database we have. If we use the API, this will violate the security agreement we have. So is there any way we can get this working locally?

Based on how I read this, the answer is no.
OpenAI does not have any models, API or ChatGPT, that can be used locally.

If all you need is a local AI to convert human queries to SQL, you might be better off looking for a research paper or similar, specifically one that notes doing it on local machines. I can't say one exists, but there are thousands of research papers on transformers, so the odds are favorable.

You may get lucky and find a small model that does this and saves you the effort of training one. You may also need to fine-tune the model for the schema of your database.

It is possible, but not as easy as a query to ChatGPT or a call to the API.
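For example, whichever model you end up with, you can keep the generated SQL tied to your own database by putting the schema in the prompt. A minimal sketch, with made-up table and column names:

```python
# Hypothetical schema; replace with your real DDL.
SCHEMA = """
CREATE TABLE patients (id INTEGER PRIMARY KEY, name TEXT, admitted DATE);
CREATE TABLE visits (id INTEGER PRIMARY KEY, patient_id INTEGER, diagnosis TEXT);
"""

def build_prompt(question: str) -> str:
    """Prepend the schema so the model can only reference real tables/columns."""
    return (
        "You are a text-to-SQL assistant. Use ONLY the tables below.\n"
        f"Schema:\n{SCHEMA}\n"
        f"Question: {question}\n"
        "SQL:"
    )

print(build_prompt("How many patients were admitted this month?"))
# Feed the result to whatever local model you end up with.
```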

If you need an on-premises AI, you'll need to look at non-proprietary, open-source AI solutions, and you will also need hardware capable of running the model: typically a server with 64GB+ RAM and a GPU with 12GB+ of VRAM.

Consider a typical datacenter server that OpenAI might use, with models distributed over several of them: eight Nvidia A100 80GB GPU-based AI accelerators per server, at over $10,000 per GPU, with server RAM more easily measured in terabytes.

You can see that local models must be scaled back, for example a 7-billion-parameter community tuning of Facebook/Meta's Llama instead of the 175-billion-parameter GPT-3. That may impact the quality of the application "making SQL queries".
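For concreteness, here is roughly what running such a scaled-back model locally might look like, assuming the Hugging Face transformers library. The model ID is just illustrative; substitute any ~7B instruction-tuned model you're licensed to run, and expect to need that 12GB+ GPU (or a CPU and a lot of patience):

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="codellama/CodeLlama-7b-Instruct-hf",  # assumption: swap in your own model
    device_map="auto",  # spread weights across available GPU/CPU memory
)

prompt = "Translate to SQL: how many patients were admitted this month?"
out = generator(prompt, max_new_tokens=128, do_sample=False)
print(out[0]["generated_text"])
```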

Here's a starting point: LocalLLaMA on Reddit

I would like to know whether the Azure OpenAI Service provides data security. Any idea on this?

Azure has a strict enterprise privacy system in place; you can find details here

One doubt: would they sign an agreement with us if we decide to use it?

I don't think you can deploy it locally; they spent too much money on the model's weights to release them. Even if you could, you would need multiple A100s for inference.

Is there any update on this? Can we host it locally, given our sensitive-data issue?