AI for psychotherapy and performance improvement

Hi guys! I’m the founder of a startup that’s at the intersection of AI and mental well-being.
My mission is to use AI to enhance cognitive abilities and performance and to improve people's lives.
I'd love to connect with fellow nerds in this space, especially those who know how to organize raw data and turn it into actionable AI insights!

2 Likes

Hey there and welcome to the community!

Have you looked at Pi? They seem to be on the forefront of this kind of work.

2 Likes

Unfortunately you won’t be able to do that here as it is expressly against the usage agreement to use OpenAI’s models for your stated purpose.

3 Likes

Hi! I did, but that's not exactly what I'm after. Pi is conversational, whereas I'd like to create a model that therapists can customize to target a patient's specific needs.
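To give a rough idea of what I mean, here's a minimal sketch using the OpenAI Python SDK (the prompt wording, the helper function, and the model choice are placeholders I made up for illustration, and any real deployment would still need professional review of every output):

```python
# Sketch: a therapist supplies a short configuration that is turned into a
# system prompt, so the same underlying model can be tuned per patient.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def build_session(focus_area: str, techniques: list[str], boundaries: str) -> str:
    """Compose a therapist-authored configuration into a system prompt."""
    return (
        f"You support exercises in {focus_area} using {', '.join(techniques)}. "
        f"Stay within these boundaries set by the supervising therapist: {boundaries}. "
        "Do not give diagnoses; flag anything concerning for human review."
    )

system_prompt = build_session(
    focus_area="cognitive reframing",
    techniques=["thought records", "Socratic questioning"],
    boundaries="no medication advice, no crisis counseling",
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "I keep thinking I'll fail the presentation tomorrow."},
    ],
)
print(response.choices[0].message.content)
```

The point is that the therapist, not the patient, authors the configuration that shapes the model's behaviour.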

1 Like

Hi! Thank you for your input. Could you explain a bit more? Does that mean any AI developed by OpenAI can't be used for therapy?

What about corporate coaching that focuses on performance, not mental health?

  1. Don’t perform or facilitate the following activities that may significantly impair the safety, wellbeing, or rights of others, including:
    a. Providing tailored legal, medical/health, or financial advice without review by a qualified professional and disclosure of the use of AI assistance and its potential limitations.

Source: Usage policies

Thanks, Jake! I'm familiar with that, of course. The potential of AI is so vast that balancing best practices against the power of this technology is definitely a tough battle.

I conducted a pilot using cognitive reframing and AI to address people's pain points (all outputs controlled and reviewed), and GPT-4's performance was superior. The guardrails were just right; the possibilities for personalization and the quality of the output really showcased OpenAI's passion for excellence. It was amazing, and I can't appreciate the work OpenAI does enough.

I know it may take years for technology and law to go hand in hand, but I hope we can implement AI in every area for the betterment of human existence.

So, Jake, what tailored output would not be considered illegal? If you guys could point me towards a way of thinking here that I could explore, it would be great!

First, it's not a legal issue. It's a policy issue. But the key to being able to use the models for some sort of mental health care lies in this clause:

without review by a qualified professional and disclosure of the use of AI assistance and its potential limitations.

The disclosure aspect is easy enough, but the qualified professional review is the tricky bit, because that doesn’t scale well.
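To make the review requirement concrete, here's a minimal sketch of a human-in-the-loop gate (the queue, the `Draft` object, and the disclosure text are hypothetical stand-ins, not a prescribed pattern):

```python
# Sketch: model output is never shown to the end user until a qualified
# professional approves it, and the AI-assistance disclosure is attached.
from dataclasses import dataclass, field
from queue import Queue

DISCLOSURE = (
    "This message was drafted with AI assistance and reviewed by a licensed "
    "professional. AI-generated content can be incomplete or inaccurate."
)

@dataclass
class Draft:
    user_id: str
    text: str
    approved: bool = False
    reviewer_notes: list[str] = field(default_factory=list)

review_queue: Queue[Draft] = Queue()

def submit_for_review(user_id: str, model_output: str) -> None:
    """Hold the model output until a human signs off."""
    review_queue.put(Draft(user_id=user_id, text=model_output))

def approve(draft: Draft, reviewer: str) -> str:
    """A qualified professional releases the draft with the required disclosure."""
    draft.approved = True
    draft.reviewer_notes.append(f"approved by {reviewer}")
    return f"{draft.text}\n\n{DISCLOSURE}"
```

The disclosure string is the easy half; the bottleneck is that `approve()` needs a qualified human per message, which is exactly the part that doesn't scale.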

2 Likes

Hey, I wanted to share that I’ve solved that issue, and my love for AI continues :slight_smile:

I have a question for OpenAI: Could you enhance ChatGPT's cross-session memory, allowing it to use insights from all sessions to enrich and personalize ongoing chats? It would be great if it could exhibit a form of meta-cognition.

So, I’m not OpenAI staff, but I can tell you that feature is coming.

Or rather, personalization from chat data should be coming soon. It won't be able to cite specific excerpts, but it should allow easier retention of insights from previous conversations. Meta-cognition, though, is still up for debate, but that's a far more complicated discussion.
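If you want to approximate that behaviour today, here's one rough sketch (entirely my own workaround with the OpenAI Python SDK, not how the official feature works; the summarization prompt and model name are arbitrary):

```python
# Sketch: keep a running "insights" summary of past chats and prepend it to
# new conversations, so the model retains the gist rather than exact excerpts.
from openai import OpenAI

client = OpenAI()

def update_insights(previous_insights: str, latest_chat: str) -> str:
    """Fold the newest conversation into a short standing summary."""
    result = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": (
                "Merge the new conversation into the existing notes. "
                "Keep it under 200 words; store themes and preferences, not quotes."
            )},
            {"role": "user", "content": f"Existing notes:\n{previous_insights}\n\nNew conversation:\n{latest_chat}"},
        ],
    )
    return result.choices[0].message.content

def start_chat(insights: str, user_message: str) -> str:
    """Begin a fresh session that already 'remembers' prior insights."""
    result = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": f"Known context about this user:\n{insights}"},
            {"role": "user", "content": user_message},
        ],
    )
    return result.choices[0].message.content
```

The idea is to carry forward distilled insights rather than verbatim excerpts, which is roughly what the feature is described as doing.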

2 Likes

How do you know it’s coming?

And that would be super cool - this would mark the fourth idea for GPT improvement I’ve had that is being implemented, and I love the direction things are going. Where do OpenAI ppl hang out to discuss these topics?

1 Like

I got a glimpse of the feature (by accident?) back in November. As in, I played with it for 20 minutes until it was removed from my account lol.

Other people have been reporting its existence on this forum over the past few weeks. So, it’s definitely there, and it’s definitely coming!

I have no idea :slightly_smiling_face:. Catching a staff member on the forum is quite rare.

1 Like

@Macha, you were right! They rolled it out! I love OpenAI so much; it feels like all my wishes for improvements are being implemented! Any chance you got early feature access?

1 Like

Being a regular on this forum does come with perks :wink:. It’s hard work, but it pays off.

Granted, there are times when certain features are rolled out before even we get a chance to try them out. In this case, I actually still don’t have the feature on my web UI yet, just the mobile app.

1 Like

“Just the mobile” — you’re so lucky! The only feature I got early access to was the mic. But I’d love to have the memory feature! Have you tried it yet?

1 Like

Ha! Well, keep in mind I'm almost exclusively using it in the web UI because most of my work is done on my computer.

I did try it, and while it's the feature I'm personally most excited about, I realized it's still a bit difficult to tell how it impacts the standard experience, for a few reasons.

I already engineered my custom prompts to summarize me, so I can't tell (a) how much of the response is a "hallucination" of what the feature could be (similar to asking base GPT to build a custom GPT), and (b) how much is influenced by my own custom prompt, given that a good 60% of my chats begin with it.

I'm excited to dive deeper into the feature (especially to help out folks once it's rolled out more broadly), but I just can't really explore much without the web UI.

For example, with the feature enabled it would be amazing not to need to specify all kinds of context before asking a simple question about my dev stack, and I'm guessing it's going to be immensely helpful for that (like providing Rust code by default for quick one-off queries instead of Python). However, I can't determine whether the personalization feature only takes into account chats that began with the feature enabled. Meaning, I can't verify whether chats from before the feature existed affect its performance. It claimed as much on the mobile app, but information on this, as you know, is quite scarce :woman_shrugging:

1 Like

That's the whole point! I keep checking my settings because I'd take the use of this feature to a whole new level and can't wait to get my hands on it :slight_smile: Imagine working in fields like law, psychology, or even medicine… You could start a case across several sessions, build it up like blocks, and keep adding to it in a logical progression, where work on each new block would hopefully get easier, faster, and more precise with improved cross-session memory and the AI knowing what the next step is… then perhaps these blocks could be grouped into thematic clusters, turning user chats into a network…
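Something like that clustering step could be sketched with embeddings plus off-the-shelf clustering (purely my own illustration, not anything OpenAI has announced; the model name, example summaries, and cluster count are arbitrary):

```python
# Sketch: embed per-chat summaries and group them into thematic clusters,
# turning a pile of sessions into a navigable network of related "blocks".
from openai import OpenAI
from sklearn.cluster import KMeans

client = OpenAI()

chat_summaries = [
    "Session 1: collected background facts for the case",
    "Session 2: drafted the opening argument",
    "Session 3: reviewed sleep habits and stress triggers",
    "Session 4: refined the opening argument with new evidence",
]

# One embedding vector per summary.
embeddings = client.embeddings.create(
    model="text-embedding-3-small",
    input=chat_summaries,
)
vectors = [item.embedding for item in embeddings.data]

# Group sessions into themes; a real tool would pick the cluster count adaptively.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for summary, label in zip(chat_summaries, labels):
    print(f"theme {label}: {summary}")
```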