OpenAI’s launch of ChatGPT was met with excitement, but that excitement has since given way to concerns over its potential for misuse. Students have been caught cheating with it, and security professionals are worried about its use for malicious purposes.
To combat this, OpenAI has put in place rules and restrictions to stop the chatbot from being exploited. However, these band-aid solutions can be easily bypassed with jailbreaking, highlighting how little control OpenAI has over ChatGPT. ChatGPT is trained on a massive collection of data sourced from different corners of the internet, and it has no way of knowing the intent behind that data.
Furthermore, OpenAI is limited in what it can do to ensure that ChatGPT is not exploited for unethical gains, since any rigid controls on the GPT-3 layer will reduce its effectiveness. OpenAI is trying to find a reliable long-term solution to this problem, but, for now, it remains to be seen whether it can bring ChatGPT under control.
I think it’s pretty clear that these types of large language models (LLMs), which are pre-trained on the body of public knowledge on the internet, face greater challenges ahead than students cheating or jailbreaking the models (per the article).
BTW, @PaulBellow , I’m not directing this at you; I’m commenting on the article, not the “Librarian,” which I think is the purpose of posting news articles here in the first place: for folks to read and comment on the news.
The more serious problem, in my view, will be the divisiveness created in society when LLM apps such as ChatGPT generate completions which do not match the world-views or belief systems of people, as we have already seen in the news.
The same problem occurred years ago in cybersecurity, when IT security professionals were focused on the “plumbing” of cybersecurity. But the real threat was not in hacking the “plumbing” of IT; it was the coming divisiveness and attacks on democratic processes, fraud and abuse of others, manipulation of markets, etc., fueled by the rise of social media.
I briefed these threats at an American Chamber of Commerce meeting in Bangkok in 2008, where you can see that most of the cybersecurity threats I briefed relate to how humans use IT, not the “leaks in the plumbing” (which were considered the top threats by the big tech corps back then, and they were wrong, as we all know now).
No one paid any attention to the warnings from a bona fide cybersecurity expert in 2008 until 2016, when “people and groups” used social media and misinformation to attack the integrity of democratic processes in the USA (and other nations). Back then, people focused on the “plumbing” of cybersecurity, not on how it affects society, and the same will hold true for LLMs and ChatGPT.
The media and society will not focus on the main issue with LLMs and generative AIs built on the large body of public data on the internet, which is human divisiveness, not jailbreaking or cheating at work and at school. Jailbreaking is a kind of “plumbing issue” which can be fixed with time and resources. So is cheating. However, the divisions in human society based on conflicting belief systems (religious, political, philosophical, national, etc.) will be greatly amplified by generative AI technology over time.
Folks dismissed this when I created my Top 10 Cybersecurity Threats list in 2008, and if I created another list like it in 2023, folks would dismiss it just like they did back then. Why?
I think the reason is that businesses, and society in general, focus on problems they can profit from solving. However, the deep divisions in human societies cannot be solved by businesses and big tech, because if they focused on this, their profit margins would fall significantly.
This is not “being negative,” as some will reply (as usual when anyone warns like this); it’s just being realistic and not viewing the world through rose-colored glasses or drinking all the Kool-Aid-for-profit out there in the world.
Thanks, but if you knew me, you would know that I am not worried at all about what others say, to be honest. I’ve been doing computer tech for over 50 years, ever since my parents locked me in the attic and fed me through a slit in the door (just joking about actually being fed, LOL).
In all seriousness, it takes time to do the research and process the data to come up with a solid, ranked top 10 list like I used to do back in my “famous cybersecurity consultant” days. Since there is so little ROI in putting in the required time to come up with a solid top 10 list (not hip-shooting or hand-waving), and I’m not funded to produce one, I must apologize for not serving up a 2023 list.
Having said that, I don’t mind offering that the greatest cybersecurity threat of 2023 remains at the “information warfare, fraud and misinformation” layer of the cybersecurity architecture; but I would really have to turn on the ole’ noggin to come up with a ranked top 10 list!