Virtual Therapist

Hi all,
I’m writing in from www.theschooloflife.com - an organisation devoted to mental health/well-being.
We’d love to scope out the possibility of building a virtual psychotherapist on top of OpenAI/GPT3
We know nothing at all but are keen to learn.
Thanks,
Alain

11 Likes

Hello… could you explain to me in Spanish what this is about? Thanks.

It's about artificial intelligence. It's a tool you can use to make a computer answer questions and do many other things.

I think this is a great use of GPT. A lot of people need mental health support, and it is usually too expensive or difficult to get.

I’m an engineer at Apple. I’d be willing to help part-time / weekends.

3 Likes

Cool! I love The School of Life.

I have a similar idea.

What do you think?

1 Like

Fantastic! How does it differ from the Apple app HISTORICALFIGURES?

Thank you so much! Do you have an email I might reach you at?

1 Like

This is excellent. I am looking to explore this use case along with other beneficial healthcare solutions.

1 Like

I just used ChatGPT the other day for “therapy”, following a suggestion in this article here

I had a session with AI Socrates and let’s say it was much better than expected.

2 Likes

I’m also interested in developing a mental health/therapy tool/app and would love to chat!

EDIT: I understand the concerns raised below regarding using OpenAI GPT models as a therapeutic tool, but I believe it’s possible to create mental health applications that are supportive without diagnosing or providing specific advice as a “therapist”.

These applications should be transparent about their limitations and capabilities, and can be used in conjunction with a human mental health therapist. For example, they could help individuals walk through therapeutic exercises, journaling, and communication tools, all of which can aid in improving mental health.

By being clear about the limitations and using the GPT model in the appropriate context, these applications can be used in a way that complies with OpenAI’s T&Cs while still providing therapeutic support.
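
To make that concrete, here is a minimal sketch of the kind of guardrail I have in mind, assuming the `openai` Python package (0.x API); the system prompt wording and model name are my own placeholders, not a vetted clinical design:

```python
import openai  # assumes the openai 0.x package and OPENAI_API_KEY set in the environment

# Hypothetical guardrail prompt: supportive journaling only, no diagnosis, no treatment advice.
SYSTEM_PROMPT = (
    "You are a supportive journaling companion, not a therapist. "
    "Never diagnose, and never recommend medication or treatment. "
    "Encourage reflection, suggest speaking with a licensed professional for anything clinical, "
    "and restate these limits whenever the user asks for advice."
)

def journaling_reply(user_text: str) -> str:
    """Return a reflective, non-clinical response to a journal entry."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_text},
        ],
        temperature=0.7,
    )
    return response["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(journaling_reply("I felt anxious before my presentation today."))
```

The point of the sketch is that the limits live in the prompt and in the product copy around it, not in the model itself, which is exactly why transparency and a human therapist alongside the app still matter.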

All GPT models (at this point in the technology maturity cycle) should NOT be used as “therapists”, “doctors providing advice”, etc. directly with patients (without a human “in the loop”).

These GPT chatbots are powerful “autocompletion engines” that predict language (the next text in a sequence), not unlike the text auto-completion feature in some of the apps you type in.

ChatGPT is not an AI expert system, it was not designed to be an expert system, and it should not be used as one where people’s mental health and safety are at risk.

Everything “technical” or “critical” which is generated by these OpenAI GPT models MUST be validated and confirmed by a human.

This fact is also in the T&Cs of OpenAI, BTW.

ChatGPT is an LLM text-prediction engine; it was not created to be an AI expert system, and it is still only a beta version of such an engine. ChatGPT has a high error/hallucination rate, so (per the Ts and Cs from OpenAI) the output must be confirmed by a human.
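
To illustrate what “a human in the loop” means in practice, here is a minimal sketch, assuming the `openai` Python package (0.x API); `draft_reply` and `human_review` are hypothetical names, and a console `input()` stands in for a real review tool:

```python
import openai  # assumes the openai 0.x package and OPENAI_API_KEY set in the environment

def draft_reply(patient_message: str) -> str:
    """Ask the model for a DRAFT only; nothing returned here goes to the patient directly."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "Draft a supportive, non-clinical reply for a licensed therapist to review."},
            {"role": "user", "content": patient_message},
        ],
    )
    return response["choices"][0]["message"]["content"]

def human_review(draft: str) -> str:
    """The human in the loop: a licensed professional edits or approves the draft."""
    print("--- MODEL DRAFT ---")
    print(draft)
    edited = input("Edit the draft (or press Enter to approve as-is): ").strip()
    return edited or draft

if __name__ == "__main__":
    draft = draft_reply("I have been feeling hopeless lately.")
    final = human_review(draft)  # only the human-approved text is ever sent onward
    print("--- APPROVED REPLY ---")
    print(final)
```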

2 Likes

I edited my original message, but I wanted to respond directly as I feel specifics and distinctions are very important.

I understand the concerns raised below regarding using OpenAI models as a therapeutic tool, but I believe it’s possible to create mental health applications that are supportive without diagnosing or providing specific advice as a “therapist”.

These applications should be transparent about their limitations and capabilities, and can be used in conjunction with a human mental health therapist. For example, they could help individuals walk through therapeutic exercises, journaling, and communication tools, all of which can aid in improving mental health.

By being clear about the limitations and using the GPT model in the appropriate context, these applications can be used in a way that complies with OpenAI’s T&Cs while still providing therapeutic support.

3 Likes

Well, OpenAI has a set of “Ts and Cs”, and it’s easy to see that almost no one reads them or understands even the basics of what a GPT LLM “autocompletion text-prediction system” actually is and what its limitations are.

The only way to ethically provide this kind of “therapeutic support for patients” using a text auto-completion engine (which is what ChatGPT is, at its core) is to have a human “in the loop”.

The GPT models have a hallucination rate of at least 20% depending on the domain (some more, some less), so it would be irresponsible (to put it mildly, in my mind) for anyone to provide “therapeutic support” to patients with critical medical requirements directly from a hallucinating chatbot.

You can think of ChatGPT (today, and in the foreseeable future based on the current SOTA GPT tech) as a very confident, articulate, intelligent, but somewhat psychotic assistant. You must have a human verify all ChatGPT output for any technical matter that requires accuracy.

Developers who code with ChatGPT know this. ChatGPT and the OpenAI API provide code completions that are sometimes “spot on”, sometimes “just nonsense”, and often “helpful, but need to be tweaked to be useful in code”.

ChatGPT would be useful to a human therapist (as a kind of assistant), but it is not responsible to have this “psychotic, text-predicting, hallucinating auto-completion engine” directly interacting with humans with real medical issues.

HTH.

3 Likes

Glad to hear that! Yes I’ve found ChatGPT very useful for this purpose.

Have you ever paid $300+ an hour to have a human listen to your problems? Give you some meds. Then still not be helped or worse off than before?

Even at this stage, Generative Pre-trained Transformer deep reinforcement-learning algorithms would likely have a higher success rate than the limited human brains that trained psychologists rely on.

I’m guessing:

  • GPT deep neuronets = 80% success rate
  • Human psychologist = 65% success rate

I really HATE what you wrote. Can you please delete your replies, @ruby_coder?

You are hurting people who are desperate for mental health help and who have not been helped by “a human in the loop”.

And I’m speaking from experience, good sir.

2 Likes

Hi, I think there is space for this to train and teach skills. People love mental health apps; however, privacy issues arose, and that affected trust. GPT-3 can offer listening and reflection quite well. There could be an app that supports people by recommending skills-building exercises in response to what the person is talking about.
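
Just to sketch what “recommending skills-building exercises in response to what the person is talking about” could look like, here is a deliberately naive example; the keyword mapping and exercises are illustrative placeholders, not clinical guidance:

```python
# Naive sketch: match what the person mentions to a skills-building exercise.
# A real app would need clinical review of both the matching and the exercises.
EXERCISES = {
    "sleep": "Try a wind-down routine: no screens for 30 minutes, then a short body-scan exercise.",
    "anxious": "Try five minutes of paced breathing: inhale for 4 counts, exhale for 6.",
    "conflict": "Try a journaling prompt: describe the situation from the other person's point of view.",
}

def recommend_exercise(journal_entry: str) -> str:
    text = journal_entry.lower()
    for keyword, exercise in EXERCISES.items():
        if keyword in text:
            return exercise
    return "Try free-form journaling for five minutes about whatever is on your mind."

print(recommend_exercise("I've been anxious about work all week."))
```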

1 Like

I stand behind what I posted.

A hallucinating, auto-text-generating chatbot, which only predicts text much like the auto-completion in your editor, should NOT be used for medical advice, therapy, legal advice, or anything “sensitive or critical” in nature, directly with the end user.

Actually, the OpenAI usage policy states:

So, let me remind you, @ideaguy3d:

  1. Thoroughly test our models for accuracy in your use case and be transparent with your users about limitations.

Making up statistics, as you did above, where you state that trained, certified human professionals are less accurate than hallucinating auto-completion engines that only generate text by stringing text together, is simply wrong and an insult to medical professionals (and professionals in general).

  1. Ensure your team has domain expertise and understands/follows relevant laws

I would venture to guess that in most US states, using an auto-completion engine based on a large language model with a well-documented, high hallucination rate to “chat” with mentally ill people and patients under the guise of medical care is illegal.

In my view, it is certainly an abomination. GPT is a language model that generates text from deep probability models. GPT is not an expert-system AI by any stretch of the imagination.

You should learn to respect the opinions of others and avoid using such strong language toward people you disagree with, in my view.

For example, I completely disagree with you, but I have no ill feelings or thoughts toward you at all, @ideaguy3d, or about anything you have said. You have a right to your views, as do I and others. Obviously, we are not going to be teammates, because as an engineer I have an ethical obligation to build systems I consider ethical and accurate. As stated, building a system to provide “therapy” to people who are ill, based on the rantings of a hallucinating auto-completion engine, is “unethical” and “irresponsible” in my view. There you have it, my view. You may have different views; that is life, and it is the nature of intelligent beings to have different views and ideas.

Using embeddings to search a DB full of expert information generated by domain experts would be OK, but not using a hallucinating, auto-completing language model that has zero expertise and no domain knowledge.

Use embeddings and a DB of expert, well reviewed, domain knowledge!
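
For anyone wondering what that looks like, here is a minimal sketch, assuming the `openai` Python package (0.x API) and `numpy`; `EXPERT_SNIPPETS` stands in for your database of human-written, expert-reviewed entries:

```python
import numpy as np
import openai  # assumes the openai 0.x package and OPENAI_API_KEY set in the environment

# Stand-in for a database of short, human-written, expert-reviewed entries.
EXPERT_SNIPPETS = [
    "Grounding exercise: name five things you can see, four you can hear, three you can touch.",
    "If you are in crisis, contact your local emergency number or a crisis hotline immediately.",
    "Sleep hygiene basics: consistent bedtime, limit caffeine after noon, keep the bedroom dark.",
]

def embed(texts):
    """Return one embedding vector per input string."""
    response = openai.Embedding.create(input=texts, model="text-embedding-ada-002")
    return [np.array(item["embedding"]) for item in response["data"]]

def best_expert_snippet(query: str) -> str:
    """Return the expert-written entry closest to the query; no generated text reaches the user."""
    snippet_vectors = embed(EXPERT_SNIPPETS)
    query_vector = embed([query])[0]
    scores = [
        float(np.dot(v, query_vector) / (np.linalg.norm(v) * np.linalg.norm(query_vector)))
        for v in snippet_vectors
    ]
    return EXPERT_SNIPPETS[int(np.argmax(scores))]

print(best_expert_snippet("I can't calm down, my thoughts are racing."))
```

The model is only used to match the question to text a human expert already wrote and reviewed, never to invent the answer.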

Take care and hope you feel better soon, @ideaguy3d

1 Like

Many funded entities are working on this sort of solution. I’ve talked to a couple of them, and they’ve enlightened me on the strong competition in the space.

While it’s obvious that GPT-3 cannot serve as a therapist as is, there are also many ways to build a system that is both an improvement over the current status quo and safe from issues introduced by LLMs.

This is very likely to be a component of safer systems.
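
One example of such a component: screen the user's message with a safety filter and route anything flagged straight to a human. Here is a minimal sketch, assuming the `openai` Python package (0.x API) and its moderation endpoint; the escalation path is a placeholder:

```python
import openai  # assumes the openai 0.x package and OPENAI_API_KEY set in the environment

def needs_human(text: str) -> bool:
    """Return True if the text trips the moderation filter (e.g. self-harm content)."""
    result = openai.Moderation.create(input=text)["results"][0]
    return bool(result["flagged"])

def handle_message(user_text: str) -> str:
    if needs_human(user_text):
        # Placeholder escalation: a real system would notify an on-call clinician immediately.
        return "Connecting you with a human counselor now."
    # Otherwise continue with the (still human-supervised) assistant flow.
    return "Thanks for sharing. Would you like to try a short reflection exercise?"

print(handle_message("I don't see the point of going on."))
```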

2 Likes

I think the integration of a live human with ChatGPT may work best. Many medical doctors are starting to use this platform for the same reason. One of my PhDs is in clinical psychology (I also have one in neuropsychology), and I realize that I am limited in my capacity as a human. This has allowed me to expand beyond the areas I know. By the way, Alain, great job with The School of Life!

3 Likes

Your concerns are so valid, and I have many questions. I'm a mental health professional (retired therapist and current trainer and coach), and of course AI can't replace a human… there are so many ethical considerations with the mentally ill population.

I am here because I want to understand AI. I am a dummy when it comes to the subject… literally, someone please write a book, AI for Dummies, or is that not OK… should it be reframed? AI picks up a lot of bias from language and the current “correct” way to think and speak. Humans trained it and are training it, correct? We all have bias, and being self-aware is the first step for humans. What about AI?

Also, the state laws fascinate me. Is this the Wild West, where laws don't apply because this is something new that no one thought to consider and include in those laws? That is what happened with social media; Congress is trying to catch up.

And Sophia, the robot, was granted citizenship by Saudi Arabia. That's AI, right? So AI can be a citizen.

Apps for brain health are the way to go. These are skills-based solutions. AI reads brain waves as data. I just watched a presentation by Duke University on how employers can give employees earplugs and monitor their performance. AI can read brain waves and tell the supervisor whether the employee is working or thinking about something off topic. Also, MIT created a scarf that will zap the employee to pay attention. Law enforcement can look at brain-wave data to solve crimes. Here is that talk.

The above was alarming to watch. Nonetheless, people like apps; they are tethered to their phones. There are lots of questions to ask about how something like this could be done ethically. The truth is… it's rolling out. How can we ensure ethics?

4 Likes