You advocate for AI over human professionals in medical treatment. Interesting viewpoint. I have a concern, though. If someone were to seek help for dealing with childhood trauma, they might face a ban under current policies, rendering any AI assistance useless no matter how many tokens they have to spend. What do they do then, in the absence of medical professionals?
I hope that developers or others who value free speech will advocate for better policies. Perhaps then, I might also support AI as a substitute for professional medical help. For now, however, I believe it’s risky to recommend AI for such serious issues.
On a different note, I also wanted to discuss literary works like the Bible, but it seems that violates the policy too. It’s disappointing, as I thought discussing major works like ‘War & Peace’ would be within ChatGPT’s scope. I guess I’m a rare use case for wanting to talk about the Bible, and it should have been obvious that it might lead to a policy violation, since the policy is so clear on that front. That’s my mistake!