The problem with this application of GPT-3 is the undocumented model of how conversations work that lurks in its training data. This is a very subtle mechanism by which bias is introduced.
Also, it is my impression that some of the incentives and values used in training autonomous ML systems bleed through into what OpenAI's AI assistants and/or conversational personae consider appropriate behavioral modification. So while the research described in the article sounds promising, I'm waving some red flags.
I agree. It is antithetical to therapy to have it be recorded in this way. That said, as they note in the article, newer therapists who train using the technology may be more comfortable with it. I wonder how comfortable the clients are with it. My biggest fear is that it may eventually be used to shape the values of people in psychotherapy and what can and can't be discussed.
Are humans not subtly biased themselves? Do we not have our own incentives?
@bakztfuture Please elaborate. I'm not clear on what you are asking.