A new report, "Truth, Lies, and Automation," from the Center for Security and Emerging Technology, lays out the ways that cutting-edge text-generating AI models could be used to aid disinformation campaigns. …[SNIP]…[SNIP]… While OpenAI has tightly restricted access to GPT-3, Buchanan notes that it’s “likely that open source versions of GPT-3 will eventually emerge, greatly complicating any efforts to lock the technology down.” [Source]
Yeah, this is a huge fear of mine. It could be the end of Wikipedia, which would be absolutely heartbreaking for me.
Things will start to get real once text-to-speech models become indistinguishable from human speech. We already have sophisticated language models (GPT-3) and impressive deepfake videos. But once text-to-speech models become more sophisticated (Siri, Google Assistant, etc. are still monotone and obviously don’t sound human yet), it’s game over unless we come together now as a dev community and plan for it. Imagine a world of auto-generated deepfake videos, coupled with GPT-3, that sound like real humans… wild.
On top of that, a new decentralized internet is slowly growing as we speak, one that will be far more difficult to control and far more exposed to these NLP, deepfake, and text-to-speech AIs going off the rails.
On a more optimistic note, I do love the culture OpenAI is building here. It’s refreshing to see a company understand the social effects of its product and do its best to keep things under control for the greater good rather than cash out. All we can do right now is keep communicating with each other as developers and keep striving to prevent this tech from being abused by the wrong people.