I am in Vice talking about GPT-4chan, GPT-3, AI ethics, & guardrailing.
I think it was a good thing. It's important for people to know what AI is capable of, that it's already out here in the wild, and that there were no safeguards in place.
The model did exactly what it was trained to do - its output is a function of the data it was trained on. If anything, the article highlights how AI can be exploited by malicious actors to spread hate at scale.
I agree with Lauren Oakden-Rayner's view that this should never be called an experiment - unleashing a model at such scale without safeguards or warnings, without sandboxing, without even defining who the participants were.
But it also reflects reality: anyone with the know-how and resources can do the same.
It also brings us to the bot problem. Should bots be allowed on social platforms at all?