I am just curious whether we are allowed to use the OpenAI API to build an app that receives information about a person's health/lab results and answers their questions.
Is this allowed if we make them sign an agreement acknowledging that this is an AI and not a real doctor?
Is there a way around this, or is this not allowed at this point?
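To make the question concrete, here is a minimal sketch of what such an app's call might look like, assuming the openai v1.x Python SDK. The disclaimer text, function name, and prompts are all illustrative, and none of this answers whether the usage policy actually permits it.

```python
# Hypothetical sketch: a system prompt states the assistant is an AI, not a
# physician, and the same disclaimer would be shown to the user in the app.
# All names and prompt wording here are assumptions, not a recommended design.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

DISCLAIMER = (
    "This assistant is an AI, not a licensed physician. "
    "It does not provide medical diagnoses or treatment; "
    "consult a healthcare professional for medical decisions."
)

def answer_health_question(lab_summary: str, question: str) -> str:
    """Answer a question about the user's lab results, restating the disclaimer."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": DISCLAIMER + " Restate this limitation when relevant."},
            {"role": "user", "content": f"Lab results:\n{lab_summary}\n\nQuestion: {question}"},
        ],
        temperature=0.2,  # keep answers conservative
    )
    return response.choices[0].message.content
```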
I think it is better to develop techniques or specific AI for medical professionals to use, not for the general public. Professional use of AI can help providers not miss something and can help them sift through the enormous amount of information out there when a provider, nurse, or other healthcare personnel is looking for something specific.
I think this technology has an extensive range of possible uses in medicine, but I am mostly talking about use by medical professionals. Patient-facing use could be another big area, with specially designed AI. So, in my opinion, different, purpose-built AI apps can be made for education, medicine, engineering, etc. The possibilities are endless. In this era, the available information is too vast for one person to learn and manage. For example, medical professionals often look up information before making important decisions about their patients, and they continue their education to improve themselves. AI can help them not miss something important while they are making those decisions.
I agree. A specific AI could be developed for medical professionals to use. For example, there are many programs available, but Epic is built for and used in hospitals and clinics. The hospital where I work uses Epic at a sophisticated level, and it is great in my opinion.
For example, an AI could help the emergency room triage nurse direct patients correctly and avoid missing something important. The nurse would load the patient information, HPI, medical history, allergies, current medications, etc., and the AI would suggest that this patient be hospitalized for these specific reasons, or be kept in observation, and so on. Or the nurse would make the decision herself and check it against the AI, and the AI would suggest an alternative or ask for one more thing to be completed before the decision. It could help ED and other providers with diagnostics too, for example which labs and imaging to order for this patient and why. But in the end, the provider decides.
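A rough sketch of how that triage-assist step might be wired up, assuming the openai v1.x Python SDK. The patient fields, prompts, and the suggest_disposition helper are all hypothetical, and the output is only a suggestion for the clinician to review.

```python
# Hypothetical triage-assist sketch: the nurse's structured intake becomes a
# prompt and the model returns a *suggested* disposition plus reasons; the
# nurse/provider makes the final call. Field names and prompts are illustrative.
from openai import OpenAI

client = OpenAI()

def suggest_disposition(patient: dict, nurse_decision: str = "") -> str:
    intake = (
        f"HPI: {patient['hpi']}\n"
        f"Medical history: {patient['history']}\n"
        f"Allergies: {patient['allergies']}\n"
        f"Current medications: {patient['medications']}\n"
        f"Vitals: {patient['vitals']}\n"
    )
    task = (
        "Suggest a disposition (admit, observation, or discharge with follow-up), "
        "list the reasons, and name anything still missing from the intake."
    )
    if nurse_decision:
        task += f"\nThe triage nurse is leaning toward: {nurse_decision}. Flag any concerns."
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are a decision-support aid for ED triage. "
                                          "You only suggest; a clinician makes the final decision."},
            {"role": "user", "content": intake + "\n" + task},
        ],
        temperature=0.1,
    )
    return response.choices[0].message.content
```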
I saw that Martin Shkreli made an AI medical chatbot. It uses GPT and costs $20 a month. Shkreli is a controversial character, and this venture seems highly risky and irresponsible. I hope OpenAI pulls the project's API access before it's too late.
However, the ideas in this thread sound far more reasonable. Still, given how much even large tech companies are struggling to put guardrails on their LLMs, we are probably not there yet in terms of giving medical advice, even to professionals. Another problem is hallucinations. Despite the impressive improvement with GPT-4, hallucinating and giving someone the wrong treatment or medication could be disastrous. But based on current progress, I can easily see us getting there soon.
Yes, but it’s still relative. For example, in this study, hallucination rates on neurosurgery board exam questions were 57.1% for Bard, 27.3% for GPT-3.5, and 2.3% for GPT-4. Hallucinations may well be part of LLMs forever, but this suggests that there are still significant improvements to be made.
GPT-4, GPT-3.5, and Bard are all quite generalist, yet their hallucination rates vary significantly (57.1% vs. 27.3% vs. 2.3%). Doesn’t this suggest that hallucinations can be decreased significantly without moving very far along the generalist-to-specialist scale?
I don’t really agree with the idea that we need 100% accurate systems.
There are many cases of misdiagnosis today, and even with a 10% hallucination rate that number could drop significantly.
Patients should be given the option to choose whether they want to be treated with the help of AI, but without any recourse claims, like an opt-in: “Do you want statistically better treatment, but at your own risk?”
Not every misdiagnosis leads to death, and not every hallucination has to lead to a misdiagnosis. I am pretty sure you can drop that to 0.1% with techniques layered on top of simply calling GPT-4.
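One such layer people experiment with is a second verification pass over the draft answer. A minimal sketch, assuming the openai v1.x Python SDK, with purely illustrative prompts; this reduces hallucinations rather than eliminating them.

```python
# Hypothetical mitigation layer on top of a plain GPT-4 call: a "verifier"
# pass checks the draft answer against the source notes and flags anything
# unsupported, so a human can review flagged outputs. Prompts are illustrative.
from openai import OpenAI

client = OpenAI()

def draft_and_verify(clinical_notes: str, question: str) -> dict:
    # First pass: answer strictly from the provided notes.
    draft = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "Answer strictly from the notes provided."},
            {"role": "user", "content": f"Notes:\n{clinical_notes}\n\nQuestion: {question}"},
        ],
        temperature=0,
    ).choices[0].message.content

    # Second pass: verify the draft against the same notes.
    verdict = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are a verifier. List every claim in the answer "
                                          "that is not directly supported by the notes. "
                                          "Reply 'SUPPORTED' if there are none."},
            {"role": "user", "content": f"Notes:\n{clinical_notes}\n\nAnswer:\n{draft}"},
        ],
        temperature=0,
    ).choices[0].message.content

    return {
        "answer": draft,
        "verification": verdict,
        "needs_review": verdict.strip() != "SUPPORTED",
    }
```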
Validated meta-analyses definitely sound like the way to go. I’d prefer a centralized solution, but there is privacy, which is valued even more highly than life itself.
That hallucination rate is higher than the current GPT-4’s, which makes this dangerous.
Training professionals to collaborate with it, rather than permitting direct patient interaction, is the better approach in the near future. This would capitalize on the strengths and mitigate the weaknesses of both parties while minimizing potential risks.