Artificial Intelligence in Medicine: My Personal Experience

I want to share my experience using artificial intelligence in a medical case. A few weeks ago, my 11-year-old daughter began experiencing intense abdominal pain and difficulty urinating. When we took her to the hospital, the doctors detected a large tumor and immediately referred her to the oncology department. The oncologists suggested it might be malignant based on its characteristics, but a gynecology specialist who is a friend of ours reviewed the entire file and reached a different diagnosis.

With so many doubts and uncertainties, I decided to turn to artificial intelligence using all the information collected from the clinical studies. I have a system that I developed to help people with autism socialize. I made some small adjustments and input all the clinical analysis information. The advantage of my system is that it can connect to the APIs of OpenAI, Anthropic, Mistral, and Gemini, which allowed me to obtain results from several artificial intelligences. The results indicated that the gynecologist’s diagnosis was the most accurate, which gave me some peace of mind by ruling out a malignant tumor.

The day of the surgery arrived, and it was finally discovered that the "tumor" was in fact a hematometra, not a malignant growth. Thus, both the gynecologist and the artificial intelligence were close to the final diagnosis. I am aware of the restrictions on using artificial intelligence in medical matters, but why not start training artificial intelligences specifically for medicine, as in the case of Med-Gemini? This could help so many children with cancer or other diseases.

It is truly sad to see children receive the news that all available treatments have failed and that they only have palliative care left, to die with as little pain as possible. It is heartbreaking to witness human suffering, especially that of children.

Imagine that an artificial intelligence finds the cure for cancer or another disease. That would be a monumental advance for humanity. AI would no longer be seen only as a threat but as the savior of humanity.

For my part, I will talk to some doctors to investigate already diagnosed cases and compare the results with those of the artificial intelligence. This way, I can evaluate the accuracy rate of the AI and how AI can contribute to improving the work of doctors.


This software was modified for this purpose. Originally, the software was created to assist people within the autism spectrum, hence the animated avatar that speaks and listens. My wife is a psychologist, and I am a software engineer. We are researching how AI can help improve the quality of life.

Note: This software is for personal, non-commercial use only, strictly for research and development purposes, never as a substitute for a healthcare professional.


Can you please elaborate on how you used AI to reach this diagnosis?

I love the story. But it’s just a story so far. A story I’ve heard too many times.

Your flowchart is very misleading and seems purposely ambiguous. Can you please repost it with legible text?


These are the studies conducted:

Blood Analysis:

  • Immunology: 3 results.
  • Hematology: 20 results.
  • Hormones: 1 result.

Imaging Study:

  • Ultrasound: image results.
  • Contrast-enhanced computed tomography: image results.

All the results from these studies were input into the artificial intelligence.

The diagram is for building custom bots; it is not a flow chart describing the medical case.

Thank you!

I am actually working very closely with someone who is doing something very similar. If I understand correctly, you are looking to build a survey that addresses all the areas and gathers some insight into a person's condition.

In her case, she's doing it for people with rare diseases. It doesn't diagnose, but it helps doctors & researchers gather quick insights.

She has a questionnaire of 255 questions. Using an AI-infused form service, it usually distills those questions down to about 20, using a medically trained embedding model to assemble the survey's branches on the fly based on the previous answers.

It's mostly database logic, and it only uses AI for immediate tasks (like embedding). In our tests we found that even GPT-4 begins to struggle when handed the questions and their branching logic, not to mention the higher latency and cost when the survey is mostly the same each time.
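Roughly, the branching step is just similarity ranking. Here is a minimal, illustrative sketch (not the actual form service): the embed function below is a toy stand-in for the medically trained embedding model, and the sample questions are invented; the point is only how the next most relevant question gets picked instead of following a fixed flow chart.

import hashlib
import numpy as np

# Illustrative only: in the real system a medically trained embedding model
# produces these vectors. This toy hashed bag-of-words keeps the example
# self-contained and runnable.
def embed(text: str, dim: int = 256) -> np.ndarray:
    vec = np.zeros(dim)
    for token in text.lower().split():
        vec[int(hashlib.md5(token.encode()).hexdigest(), 16) % dim] += 1.0
    return vec

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def next_question(answers_so_far: str, remaining_questions: list[str]) -> str:
    """Pick the unasked question most relevant to what was already answered."""
    answer_vec = embed(answers_so_far)
    return max(remaining_questions, key=lambda q: cosine(answer_vec, embed(q)))

questions = [
    "How often do you experience joint pain?",   # invented example questions
    "Have you noticed any unusual bruising or bleeding?",
    "Do your symptoms worsen after physical activity?",
]
print(next_question("My knees and elbows ache most mornings", questions))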

Also, the answers are open-ended, which is huge for capturing all angles. There's nothing worse or more disconnecting than a multiple-choice survey that doesn't have your exact answer.

The coolest part about it is that you can use a model like GPT to iterate through the survey. We gather case studies of the rare diseases she is looking to capture, feed them to GPT, and ask it to fill out the survey using the available information. Then we compare the case-study results to the data we collected to see how well the survey captured the important details.
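For what it's worth, that evaluation loop can be sketched in a few lines. This is only an outline, not our actual harness: it uses the public OpenAI chat completions endpoint with GPT-4, and the survey questions and case-study text are invented placeholders.

import os
import requests

# Outline of the "run GPT through the survey" check: ask the model to answer
# each survey question using only the case study, then compare with what the
# case study actually says. Endpoint and payload follow the public OpenAI
# chat completions API; questions and case text are placeholders.
API_URL = "https://api.openai.com/v1/chat/completions"
HEADERS = {"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"}

def answer_from_case_study(case_study: str, question: str) -> str:
    payload = {
        "model": "gpt-4",
        "messages": [
            {"role": "system",
             "content": "Answer the survey question using only the case study. "
                        "If the information is not present, reply 'not stated'."},
            {"role": "user",
             "content": f"Case study:\n{case_study}\n\nQuestion: {question}"},
        ],
    }
    r = requests.post(API_URL, headers=HEADERS, json=payload, timeout=60)
    r.raise_for_status()
    return r.json()["choices"][0]["message"]["content"].strip()

survey = ["Age at first symptoms?", "Any family history of the condition?"]
case = "A 9-year-old patient, first symptoms at age 6, no relevant family history."
for q in survey:
    print(q, "->", answer_from_case_study(case, q))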

If you're interested, send me a PM! I am looking for more testers and feedback. I would love to help you!

Regardless, it would be interesting to connect considering how similar these proposals are.


I am using an approach that involves inputting all clinical analyses and the patient’s medical records (safeguarding their personal data) into AI systems. This generates a response from the AI, offering an additional perspective on the problem or highlighting details that could assist the physician.

In the psychological area, there are various tests that can be conducted with open-ended or multiple-choice questions. The system allows combining both options and also supports adding images, video, and audio. The software is designed for you to customize your bot, choosing from six different avatars or no avatar at all. Additionally, you can use the language model that best suits your project (OpenAI, Anthropic, Mistral, Google Gemini).

Attached are my contact details for any further inquiries.

Flow:

Yes, we are doing the same.

We realized a flow chart wouldn't work, as we rely on embeddings to:

  1. Accept open-ended answers (which voids the usefulness of connecting blocks together)
  2. Send the user to the next most relevant category and even eliminate categories

It allows for a fully dynamic survey that targets the areas that concern the user. The idea is that we provide a (necessarily) massive number of questions and then let branching logic create the chart on the fly.

Lastly, we also don’t want to focus on a visual editor as (mentioned before) we run GPT-4 through the survey using case studies to see how the survey can be improved and if it captures all the details efficiently.

I have written a non-opinionated framework to facilitate this all using Supabase.

I'd be lying if I said we plan to be able to diagnose with this. Maybe in a longggg while. There are some serious legal problems with claiming the ability to diagnose patients, so we specialize only in distilling the survey results into a schema that doctors can read and rapidly extract data from. We hope to find some interesting information in cluster analysis as well.

But if so, you would need questions that capture every single angle. I see you're using ReactFlow, and it won't even accommodate the number of questions you'll need to have.

I have created a fun demo to try it out with, which shows off all the features. I like the color in yours and the friendly face. I am just focusing on the fundamentals first and letting the user (eventually) control the actual layout.


Using embeddings to organize the next question in the category


Creating (from a predefined list) and sorting new categories


And something more important that is lost in a typical chat: The ability to modify older questions by simply clicking on them and altering the text.

All of this is facilitated by a simple schema instead of a flow chart. I have a custom GPT for now that my clients can use to make changes without being fluent in JSON.

{
  "ALL": [
    {
      "fn": "embed_sort",
      "args": [
        {
          "keys": [
            "dog",
            "dolphin",
            "cow",
            "hippo",
            "chicken",
            "penguin"
          ],
          "level": "questions",
          "entries": [
            "guess_animal_start"
          ]
        }
      ]
    }
  ]
}

Or just simple logic-based flow. If you say you eat cardboard for breakfast:

{
  "cardboard": [
    {
      "fn": "show",
      "args": [
        {
          "keys": [
            "cardboard_gross"
          ],
          "level": "questions"
        }
      ]
    }
  ]
}

Or, as in the example above: saying "Yes" to the guessed animal finishes the guessing game & shows more categories.

{
  "yes": [
    {
      "fn": "finish",
      "args": [
        {
          "keys": [
            "guess animal"
          ],
          "level": "categories"
        }
      ]
    },
    {
      "fn": "show",
      "args": [
        {
          "keys": [
            "branching_logic",
            "random",
            "feedback"
          ],
          "level": "categories"
        }
      ]
    },
    {
      "fn": "sort",
      "args": [
        {
          "keys": [
            "win_feedback"
          ],
          "level": "questions"
        }
      ]
    }
  ]
}

I developed a scripting language based on JSON (primitive AI) that includes all the options present in the diagrams. In later versions, I made efforts to make it visual, allowing anyone to create flows easily and intuitively. The system can handle over a thousand nodes per topic and combine hundreds of topics. Additionally, it enables the creation of flows with AI models, sending prompts to and receiving responses from LLMs, and allows all elements to be modified. Values can be stored in variables, and the entire conversation context can be accessed at any point. It also includes condition nodes, API calls, multimedia, and many other features.
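To make the idea concrete, here is a purely hypothetical sketch of what a tiny node executor of this general shape could look like. It is not the actual JSON scripting language described above; the node types, field names, and the stubbed LLM call are all invented for illustration.

# Hypothetical illustration, not the author's scripting language: a minimal
# node executor with variables, a (stubbed) LLM call, shared conversation
# context, and a condition node.
def call_llm(prompt: str, context: list[dict]) -> str:
    # Placeholder: a real flow would call OpenAI/Anthropic/Mistral/Gemini here.
    return f"(model reply to: {prompt})"

flow = [
    {"type": "set", "var": "patient_age", "value": "11"},
    {"type": "llm",
     "prompt": "Interpret these lab results for a {patient_age}-year-old patient...",
     "store": "diagnosis"},
    {"type": "condition", "var": "diagnosis", "contains": "malignant",
     "then_say": "Escalate for specialist review.",
     "else_say": "Share the summary with the treating physician."},
]

def run(flow: list[dict]) -> None:
    variables, context = {}, []          # context = full conversation history
    for node in flow:
        if node["type"] == "set":
            variables[node["var"]] = node["value"]
        elif node["type"] == "llm":
            prompt = node["prompt"].format(**variables)
            reply = call_llm(prompt, context)
            context += [{"role": "user", "content": prompt},
                        {"role": "assistant", "content": reply}]
            variables[node["store"]] = reply
        elif node["type"] == "condition":
            hit = node["contains"] in variables.get(node["var"], "")
            print(node["then_say"] if hit else node["else_say"])

run(flow)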

The technology used is as follows:

  • Web Framework: Blazor (.NET Core 8)
  • Language: C# 95% / JavaScript 5%
  • Diagrams: Blazor Diagrams Custom
  • Web Server: IIS
  • Operating System: Windows Server


I don’t really think you’ve addressed any of my points or really demonstrated how this managed to diagnose your daughter correctly.

You have shown some small, irrelevant examples but nothing of what you're claiming.

I’ve never heard of blazor diagrams. Looks neat though and very similar to ReactFlow.

A big part of our study (or, their study) is that some of the answers need to be open-ended. Not multiple choice. Which eliminates the ability to branch out the next option.

I hope you can spend some more time to read this and maybe answer some of these thoughts, because it seems like it's fully fleshed out and I wouldn't mind moving towards it.

I also think you misinterpreted my example.

I am not specifying any exact branching. I am asking the user to describe an animal, not to pick an animal name from a multiple-choice list. Then I am using an embedding model to branch to the next question (which is what it thinks the animal you're describing is). This was supposed to demonstrate that the branching is dynamically created on the fly based on the user's input.

Using your flow chart, as far as I can understand it, this is impossible as it’s static.

To me, infusing AI into surveys is a great idea. To capitalize on it is to be able to capture unstructured semantics and perform consistent, somewhat deterministic functions on the results.

I don’t really think you’ve addressed any of my points or really demonstrated how this managed to diagnose your daughter correctly.

You have shown some small, irrelevant examples but nothing of what you're claiming.

There may be some confusion because my native language is Spanish. I will try to explain as clearly as possible:

  1. When you create a bot (assistant), you assign it a role. In my case:
    Role: You are a medical specialist in all medical fields.

  2. A diagram is created with an infinite loop of questions and answers to the LLM (see flow2.png).

  3. I have all the clinical results in text format and I write the following prompt:
    “Interpret and diagnose an 11-year-old female patient who presents with the following symptoms… and the clinical test results are as follows:” Here I include all the clinical tests and the medical record.

  4. The LLM responds and gives me five diagnoses in order of probability based on the clinical tests. The entire conversation is saved in the context node for any additional questions.

  5. I switch between different LLMs and models to compare diagnoses. Since all the information is already in the context of the conversation, I can share it with the other LLMs.

Note: It is very simple, like creating an assistant in OpenAI, with the difference that I can switch to different models from other companies, allowing me to see different results.
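As a rough illustration of steps 1-5 (not the actual Blazor/C# software), the whole flow boils down to sending the same system role, prompt, and conversation context to more than one provider and comparing the answers. The sketch below assumes the public OpenAI and Mistral chat-completion endpoints, which share the same request shape; Anthropic and Gemini need their own request formats, and the clinical details are placeholders.

import os
import requests

# Same system role, same prompt, same shared context, sent to more than one
# provider so the resulting diagnoses can be compared.
PROVIDERS = {
    "openai":  ("https://api.openai.com/v1/chat/completions", "gpt-4", "OPENAI_API_KEY"),
    "mistral": ("https://api.mistral.ai/v1/chat/completions", "mistral-large-latest", "MISTRAL_API_KEY"),
}

context = [  # the "context node": every provider sees the same history
    {"role": "system", "content": "You are a medical specialist in all medical fields."},
    {"role": "user", "content":
        "Interpret and diagnose an 11-year-old female patient who presents with the "
        "following symptoms: <symptoms here>. The clinical test results are as follows: "
        "<lab results and medical record here>. "
        "List the five most probable diagnoses in order of probability."},
]

for name, (url, model, key_env) in PROVIDERS.items():
    r = requests.post(url,
                      headers={"Authorization": f"Bearer {os.environ[key_env]}"},
                      json={"model": model, "messages": context},
                      timeout=120)
    r.raise_for_status()
    print(f"--- {name} ---")
    print(r.json()["choices"][0]["message"]["content"])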

I’ve never heard of blazor diagrams. Looks neat though and very similar to ReactFlow.

Here is the link to the diagram libraries:
blazor-diagrams.zhaytam.com

Note: You have to customize your nodes and functions to have a unique style.

A big part of our study (or, their study) is that some of the answers need to be open-ended. Not multiple choice. Which eliminates the ability to branch out the next option.

I hope you can spend some more time to read this and maybe answer some of these thoughts, because it seems like it's fully fleshed out and I wouldn't mind moving towards it.

I also think you misinterpreted my example.

I am not specifying any exact branching. I am asking the user to describe an animal, not to pick an animal name from a multiple-choice list. Then I am using an embedding model to branch to the next question (which is what it thinks the animal you're describing is). This was supposed to demonstrate that the branching is dynamically created on the fly based on the user's input.

Using your flow chart, as far as I can understand it, this is impossible as it’s static.

To me, infusing AI into surveys is a great idea. To capitalize on it is to be able to capture unstructured semantics and perform consistent, somewhat deterministic functions on the results.

The tool is quite versatile; you can create an infinite number of combinations. You can do much of what the OpenAI API does, but the difference is that this structure works with all LLMs, allowing me to use the same logic for every provider. Just explaining everything it can do would take hours.

If I use the nodes created to interact with the LLMs, I can do much of what the Assistants API does. Imagine it as a small visual programming language (inputs, outputs, variables, loops, conditions, etc.). The possibilities are almost infinite.

I can have both open-ended and multiple-choice questions.

Conclusion: The system is quite versatile. The modification made was to create a context node that stores all the history and shares it with the different LLMs. This system allowed me to analyze different diagnoses in various LLMs, two of which matched the final diagnosis.

First, I'm glad your daughter is not as sick as first thought; I hope she is doing well. Second, I believe IBM is very active in this space… medicine and medical research. Mr. Watson.


Thank you for your good wishes; I am very relieved that my daughter is getting better. I appreciate your concern. Yes, you are right about IBM. Their Watson system has had a significant impact in the field of medicine and medical research. Additionally, I have also heard that Med-Gemini is making remarkable advances in this field. It's impressive to see how technology can help in these areas.


We worked in a different field (legal analysis) but with a similar approach and requirements:

  • limited dependency on AI
  • limited decisions made by AI
  • total control over the whole branching and traceability of conclusions

We came up with "analysis workflows," which are basically question/answer branches presented as a configurable tree of "checks," with unlimited branching from a node and the possibility of including several nodes in a checkpoint's firing conditions.
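For illustration only, here is a minimal sketch of what such a tree of checks could look like: each check branches on its answer, and a checkpoint can fire on a condition that reads the answers of several nodes. The names, questions, and firing rule are invented, not our actual product.

from dataclasses import dataclass, field

# Hypothetical sketch of a tree of "checks": each check branches on its answer,
# and a checkpoint can fire on a condition spanning several nodes' answers.
@dataclass
class Check:
    id: str
    question: str
    children: dict[str, list["Check"]] = field(default_factory=dict)  # answer -> next checks

answers: dict[str, str] = {}

def run_check(check: Check) -> None:
    answer = input(f"{check.question} ").strip().lower()
    answers[check.id] = answer
    for child in check.children.get(answer, []):
        run_check(child)

def escalation_fires() -> bool:
    # Checkpoint condition that reads the answers of two different nodes.
    return answers.get("signed") == "no" and answers.get("deadline_passed") == "yes"

tree = Check("signed", "Was the contract signed? (yes/no)", {
    "no": [Check("deadline_passed", "Has the signing deadline passed? (yes/no)")],
})
run_check(tree)
print("Escalate to counsel." if escalation_fires() else "No escalation needed.")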

Sure, legal work is far easier than medical, but it's worth sharing some experience and exploring collaboration possibilities if you're interested.
