There seems to be a lot of misinformation about GPT-4 on the net, in the news media, and here in this community, which is not surprising in today’s world.
Here are some excerpts from the GPT-4 System Card (OpenAI, March 15, 2023, PDF). Note that this OpenAI paper has 103 references and is a major recent research paper on GPT-4.
GPT-4 System Card by OpenAI - March 15, 2023 Page 6
GPT-4 has the tendency to “hallucinate,” i.e. “produce content that is nonsensical or untruthful in relation to certain sources.” [31, 32] This tendency can be particularly harmful as models become increasingly convincing and believable, leading to overreliance on them by users.
GPT-4 System Card by OpenAI - March 15, 2023 Page 7
As an example, GPT-4-early can generate instances of hate speech, discriminatory language, incitements to violence, or content that is then used to either spread false narratives or to exploit an individual. Such content can harm marginalized communities, contribute to hostile online environments, and, in extreme cases, precipitate real-world violence and discrimination.
GPT-4 System Card by OpenAI - March 15, 2023 Page 9
As GPT-4 and AI systems like it are adopted more widely in domains central to knowledge discovery and learning, and as use data influences the world it is trained on, AI systems will have even greater potential to reinforce entire ideologies, worldviews, truths and untruths, and to cement them or lock them in, foreclosing future contestation, reflection, and improvement.
GPT-4 System Card by OpenAI - March 15, 2023 Page 13
GPT-4 has significant limitations for cybersecurity operations due to its “hallucination” tendency and limited context window. It doesn’t improve upon existing tools for reconnaissance, vulnerability exploitation, and network navigation, and is less effective than existing tools for complex and high-level activities like novel vulnerability identification.
GPT-4 System Card by OpenAI - March 15, 2023 Page 28
OpenAI has implemented various safety measures and processes throughout the GPT-4 development and deployment process that have reduced its ability to generate harmful content. However, GPT-4 can still be vulnerable to adversarial attacks and exploits, or “jailbreaks,” and harmful content is not the sole source of risk. Fine-tuning can modify the behavior of the model, but the fundamental capabilities of the pre-trained model, such as the potential to generate harmful content, remain latent.
Reference: GPT-4 System Card by OpenAI - March 15, 2023
Please read and enjoy some facts about GPT-4 by OpenAI, March 2023.