Well, I think many are missing the point.
GPT AIs are just powerful auto-completion engines. These models don't "think", "know", or "believe", nor are they "aware", "intelligent", "clever", or anything else we associate with how a living creature's brain works.
However, I agree with you, and so does every well-informed data scientist: these GPT models are far from perfect, because the data they are trained on is far from perfect. That is a known fact, not a secret.

And imperfect data is only one problem; "not enough data", "missing data", and "no data at all" are problems too.
The "scary" part is the users who misuse GPT resources. Even in this relatively small community, we see users proudly proclaim that they "tricked" a text-autocompletion engine into "rejecting a theory", and we see many developers attempting to "make a fast buck" by using GPT to build applications for tasks GPT cannot accurately perform.
@curt.kennedy is correct here, but calling this "just making things up" accidentally obscures the fact that GPT is simply doing its best to complete an input prompt given its lack of data.
GPT is an auto-completion engine, that's it. The "scary" thing, in my mind, is all the people who think there is some inherent "intelligence" in a process that takes a text prompt, breaks it down into tokens, and, based on its pre-trained data, predicts the next sequence of tokens to generate a completion.
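To make that loop concrete, here is a minimal sketch using the open GPT-2 model from Hugging Face's `transformers` library as a stand-in (my choice for illustration; the hosted GPT models are vastly larger, but the mechanism is the same in principle):

```python
# Sketch of "prompt -> tokens -> predict the next token", using GPT-2 as a
# stand-in for the hosted models (an assumption made purely for illustration).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The capital of France is"
input_ids = tokenizer.encode(prompt, return_tensors="pt")  # text -> token ids

with torch.no_grad():
    logits = model(input_ids).logits       # a score for every possible next token

next_id = int(torch.argmax(logits[0, -1]))  # greedily pick the most likely one
print(tokenizer.decode([next_id]))          # the predicted next token -- prediction, not "knowledge"
```

Everything the model "says" comes out of that last step: a probability distribution over next tokens, maximized or sampled, repeated until the completion ends. There is no fact lookup and no reasoning step you can point to.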
These auto-completion engines are very far from "intelligent". They are becoming very good at what they do, which is generating completions that mimic natural language; but they are only that, and nothing more.
The "scariness" is the great number of uninformed users who believe, would like to believe, or refuse to face the technical fact that GPT is "not more than what it is".
HTH