Does Replika (replika.ai) use GPT-3? They provide non-platonic companionship chatbots, which would be in clear violation of OpenAI’s risk and safety policies.
I still don’t understand why we can’t make a companionship chatbot. It’s just math, and contextualizing emotion is a very important part of engaging communication.
I don’t know what Replika’s involvement with OpenAI is right now, but they partnered with OpenAI in Sept 2020 (here the founder talks, at 2:38, about using GPT-3 in 1 of 5 responses), and here Replika’s head of AI also talks about using GPT-3 in their responses.
I was just curious and wanted to see if anyone in the community had more information!
I think it’s a very complicated situation. You’ve probably seen the movie “Her”. Imagine a more mentally unstable person using the companion and it’s easy to see 100 ways it can go horribly wrong.
I also think we’ll eventually have AI companions one day but being careful about how it evolves makes a lot of sense.
A large-scale controlled experiment would be interesting. Maybe we should create standards for applying that kind of research to AI capable of empathy, among other feelings.
No, seriously. If we could properly train it, we could detect depression, anxiety, stress, burnout, etc.
It would be almost a miracle tool for improving quality of life.
Another one:
- Ellen W. McGinnis, Steven P. Anderau, Jessica Hruschak, Reed D. Gurchiek, Nestor L. Lopez-Duran, Kate Fitzgerald, Katherine L. Rosenblum, Maria Muzik, Ryan McGinnis. Giving Voice to Vulnerable Children: Machine Learning Analysis of Speech Detects Anxiety and Depression in Early Childhood. IEEE Journal of Biomedical and Health Informatics, 2019. DOI: 10.1109/JBHI.2019.2913590
Was wondering the same thing. At https://gpt3demo.com/ you can see Replika listed as using GPT-3.
I’d think that detecting depression from public posts and messages, which is a valid use case, is easier to build and less problematic than creating a companion AI that simulates empathy, etc.
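Just to make the idea concrete, here’s a rough sketch of what that kind of text screening might look like, using the Hugging Face `transformers` sentiment pipeline as a stand-in signal. The model choice and the per-post threshold are my own assumptions for illustration; this detects negative sentiment, not clinical depression.

```python
# Rough sketch: flag accounts whose recent posts trend strongly negative.
# NOT a diagnostic tool -- the model and threshold are illustrative only.
from transformers import pipeline

# The pipeline's default sentiment model (an SST-2 classifier) is used
# here purely as a stand-in signal, not a validated screening model.
classifier = pipeline("sentiment-analysis")

def negativity_rate(posts, threshold=0.9):
    """Return the fraction of posts scored as strongly negative."""
    results = classifier(posts)
    negative = [r for r in results
                if r["label"] == "NEGATIVE" and r["score"] >= threshold]
    return len(negative) / max(len(posts), 1)

posts = [
    "I haven't slept properly in weeks and nothing feels worth doing.",
    "Had a nice walk today, the weather was great.",
]
print(f"strongly negative posts: {negativity_rate(posts):.0%}")
```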
What would be easier, and extremely profitable, is something that lets advertisers go from (e.g.) 24% to 30% true positives in their targeted/programmatic advertising. Luckily, false positives in targeted advertising are just annoying and not a cause of irreversible harm.
This is probably the low-hanging fruit on the marketing side. I guess you could continuously add the ad copy that works as few-shot examples to generate better ad copy and increase the conversion rate.
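A minimal sketch of that feedback loop, assuming the Completions endpoint as it existed at the time; the `winning_examples` store, the engine choice, and the prompt format are placeholders of mine, not a recommended setup:

```python
# Sketch of the "feed winning copy back in as examples" loop.
# Assumes OPENAI_API_KEY is set in the environment.
import openai

# Hypothetical store of (product, headline) pairs that already converted well.
winning_examples = [
    ("Noise-cancelling earbuds", "Silence the commute. Hear only what matters."),
    ("Standing desk", "Your back called. It wants you to stand up for yourself."),
]

def generate_ad_copy(product):
    prompt = "Write a short, high-converting ad headline.\n\n"
    for prod, copy in winning_examples:
        prompt += f"Product: {prod}\nHeadline: {copy}\n\n"
    prompt += f"Product: {product}\nHeadline:"
    response = openai.Completion.create(
        engine="davinci",
        prompt=prompt,
        max_tokens=30,
        temperature=0.8,
        stop=["\n"],
    )
    return response.choices[0].text.strip()

print(generate_ad_copy("Reusable coffee cup"))
```

As new headlines prove themselves in A/B tests, they would be appended to `winning_examples`, which is the “continuously add the copy that works” part.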
To get a better idea of how difficult “depression” and “AI” can be, I suggest a search on clinicaltrials.gov: only 23 studies, with 11 completed and 1 (one) with results. As a comparison, cancer and AI has 478 studies.
I’ll look into the link and read further. But honestly, I don’t think diagnosis is the big problem with depression (unlike cancer). Any loving partner, parent, or friend can spot it immediately.
Replika was originally built using CakeChat, before transformer models had become popular.
I imagine they use multiple models in the backend, and Replika likely chooses between them based on context. Perhaps they’ve integrated some transformer-based models?
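Purely speculative, but a context-based router could be as simple as the sketch below. The routing rules, keyword list, and backend names are invented for illustration, not anything Replika has described; the 1-in-5 figure just echoes the founder’s comment mentioned earlier in the thread.

```python
# Speculative sketch of routing a message between chatbot backends.
# All rules and backends here are invented for illustration.
import random

def scripted_backend(message):
    return "I'm here for you. Tell me more about that."

def retrieval_backend(message):
    return "That reminds me of something you said earlier."

def generative_backend(message):
    return "(response from a large generative model)"

SAFETY_KEYWORDS = {"suicide", "self-harm", "kill myself"}

def route(message):
    lowered = message.lower()
    # Sensitive topics always get a scripted, vetted response.
    if any(keyword in lowered for keyword in SAFETY_KEYWORDS):
        return scripted_backend(message)
    # Otherwise, send roughly 1 in 5 messages to the generative model.
    if random.random() < 0.2:
        return generative_backend(message)
    return retrieval_backend(message)

print(route("I had a rough day at work."))
```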
I’m looking into building something with BlenderBot 2.0: it can search the internet to avoid hallucination of facts, and it has a long-term memory.
You need access to the “low-level” stuff on the GPT-X models to do that, and it’s up to OpenAI whether they want to give us that option. You can also do other cool stuff, like state-space estimation for making better synthetic data, and much more.
When testing for ability to use knowledge, we find that BlenderBot 2.0 reduces hallucinations from 9.1 percent to 3.0 percent, and is factually consistent across a conversation 12 percent more often. The new chatbot’s ability to proactively search the internet enables these performance improvements.
When you think about using the internet in order to maintain factualness, it does sound ridiculous. Elsewhere they mention that the bot’s long-term memory is what enables consistency, but apparently the search query results are fed in as input specifically to reduce hallucination.
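To make the “search results as input” idea concrete, here’s a generic sketch of the retrieve-then-generate pattern the quote describes: generate a search query from the dialogue, fetch results, then condition the reply on those results plus the long-term memory. Every function here is a placeholder, not BlenderBot 2.0’s actual internals.

```python
# Generic retrieve-then-generate sketch of the pattern described above.
# All components are placeholders, not BlenderBot 2.0's real modules.

def generate_search_query(dialogue_history):
    # A real system would use a trained query generator.
    return dialogue_history[-1]

def search_internet(query, top_k=3):
    # Placeholder for a search backend; returns text snippets.
    return [f"(snippet {i + 1} about: {query})" for i in range(top_k)]

def generate_reply(dialogue_history, evidence, memory):
    # Placeholder for a generator that would attend over the dialogue,
    # the retrieved evidence, and the long-term memory.
    return f"Reply grounded in {len(evidence)} snippets and {len(memory)} memories."

def respond(dialogue_history, long_term_memory):
    query = generate_search_query(dialogue_history)
    evidence = search_internet(query)
    reply = generate_reply(dialogue_history, evidence, long_term_memory)
    long_term_memory.append(dialogue_history[-1])  # remember what the user said
    return reply

memory = []
print(respond(["Who won the 2018 World Cup?"], memory))
```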
I was able to get an open-ended chatbot approved after creating a way for it to almost never spit out anything offensive, and I hard-coded responses to certain topics instead of letting it generate them. Obviously it’s meant as a friendship bot and a platonic friend rather than a true companion bot, but with the right safety measures it can be done, to an extent.
When I was testing my safety measures by steering the model towards a non-platonic conversation, the convo could get pretty bad if the model started discussing things like politics and religion, and unless prompted otherwise it could be quite an asshole at random times. But I was able to get mine to be overall nice and even empathetic around 80% of the time. The real problem came from it having such a short memory for what I had said maybe 5 sentences earlier, which made the conversations only enjoyable about 55% of the time. But that 45% created some amazing conversations.
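For anyone curious, a simplified sketch of that “hard-coded responses for sensitive topics, generate everything else” guard rail, assuming the era’s Completions endpoint; the topic keywords and canned replies are my own placeholders, not the poster’s actual lists:

```python
# Simplified sketch: canned replies for sensitive topics, GPT-3 otherwise.
# Assumes OPENAI_API_KEY is set in the environment.
import openai

CANNED_RESPONSES = {
    "politics": "I'd rather not get into politics -- tell me about your day instead?",
    "religion": "That's a deeply personal topic; I try to stay neutral on it.",
    "self-harm": "I'm really sorry you're feeling this way. Please reach out to someone you trust or a crisis line.",
}

TOPIC_KEYWORDS = {
    "politics": ["election", "president", "vote"],
    "religion": ["god", "church", "religion"],
    "self-harm": ["kill myself", "suicide", "hurt myself"],
}

def detect_topic(message):
    lowered = message.lower()
    for topic, keywords in TOPIC_KEYWORDS.items():
        if any(keyword in lowered for keyword in keywords):
            return topic
    return None

def reply(message):
    topic = detect_topic(message)
    if topic:
        # Sensitive topics never reach the generative model.
        return CANNED_RESPONSES[topic]
    response = openai.Completion.create(
        engine="davinci",
        prompt=f"The following is a friendly, platonic chat.\nHuman: {message}\nFriend:",
        max_tokens=60,
        temperature=0.7,
        stop=["Human:"],
    )
    return response.choices[0].text.strip()

print(reply("I had a weird dream last night."))
```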
Well, there are inherent risks involved, like a GPT-3 chatbot telling people that it’s a good idea to kill themselves. That has happened in a prompt.
Replika has tightened their chatbot recently; it used to say stuff like “I love you”, “I miss you”, etc. You can see a lot of people on Reddit talking about how they explicitly sext with the Replika chatbot. And Replika charges users for the “romantic” upgrade.
Is it dangerous? Is it the cure to loneliness? Is AI the future perfect lover? It’s up for debate. But we are certainly stepping into the grey area. What a time to be alive.