Abuse of a patient's right to privacy related to medical records. How many of you have had a similar experience?
I am not sure of the context of your question.
But OpenAI is not HIPAA compliant; they cannot give you a Business Associate Agreement.
Therefore you cannot submit Protected Health Information to any software using OpenAI technology.
If you want to use OpenAI to experiment with medical applications, you need to first remove the patient name and any other identifying information before you submit it to OpenAI.
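The de-identification step above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration using simple regular expressions for a handful of identifier formats (the patterns, placeholder tags, and `redact` function are my own assumptions, not anything OpenAI provides) - real de-identification of PHI requires a purpose-built tool and expert review, not a regex pass.

```python
import re

# Hypothetical patterns for a few common identifier formats.
# Real PHI de-identification needs far more than this sketch covers.
PATTERNS = {
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "MRN": re.compile(r"\bMRN[:#]?\s*\d+\b", re.IGNORECASE),
}

def redact(text: str, names: list[str]) -> str:
    """Replace known patient names and matching identifiers with placeholder tags."""
    for name in names:
        text = re.sub(re.escape(name), "[NAME]", text, flags=re.IGNORECASE)
    for tag, pattern in PATTERNS.items():
        text = pattern.sub(f"[{tag}]", text)
    return text

note = "Jane Doe, DOB 04/12/1987, MRN# 445566, called 555-123-4567."
print(redact(note, names=["Jane Doe"]))
# → [NAME], DOB [DATE], [MRN], called [PHONE].
```

Note that this only catches identifiers you anticipate; free-text clinical notes mention relatives, addresses, and rare conditions that can re-identify a patient even after names and numbers are removed.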
I recognize that OpenAI is not HIPAA compliant. However, are there instances where patients recognize that they are treading a fine line between expecting patient confidentiality and having their information provided to other providers?
I am not sure of your point.
There is no problem sharing PHI with other medical providers; in fact, HIPAA specifically permits and encourages that.
But OpenAI is not a medical provider.
If you want to use OpenAI for some reason in a particular case - perhaps to explore and expand the differential diagnosis in a challenging case, for example, but not to replace a licensed provider’s oversight and responsibility - then you need to remove the patient name and all other PHI before submitting the case history to OpenAI.
Hi @hazel
You should not be using a generative AI for any confidential medical records when, as with the OpenAI terms of service, it offers no privacy guarantees (or features).
HTH
following on this -
if my information is completely anonymous, is it OK to run it through an OpenAI model for research purposes?
You would need to sign a BAA with OpenAI. This does not apply to ChatGPT. You should never share any private information - especially information protected by law - with ChatGPT.
i tried to sign a BAA, but because i don’t have a company it is not possible.
i’m a PhD candidate at a university, and part of my research involves testing LLMs with PHI. i don’t know how i should continue, and i’m sure that other researchers are using OpenAI products with similar data.