What you have described is not “understanding”. I am sorry to say I totally disagree with you, @prescod.
ChatGPT is simply predicting text, like a powerful auto-completion engine, based on its underlying LLM. Predicting text from the weights of a deep neural network is not “understanding”; it is just auto-completion, as instructed and trained.
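To make the auto-completion point concrete, here is a minimal sketch of what “predicting text from weights” looks like in code, using the open GPT-2 model via the Hugging Face transformers library (my choice purely for illustration; ChatGPT's own weights are not public, but it belongs to the same family of next-token predictors):

```python
# Minimal sketch of next-token prediction with an open LLM (GPT-2).
# Assumes the `torch` and `transformers` packages are installed.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # The model's entire output is a score (logit) for every token
    # in its vocabulary at each position -- nothing more.
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# Convert the scores at the final position into next-token probabilities.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode([idx.item()])!r:>12}  p={p:.4f}")
```

Sampling one token from that distribution, appending it to the prompt, and looping is the entire “generation” process; there is no separate step where understanding happens.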
Neuroscientists have very specific definitions of and views on what constitutes “understanding” and what constitutes “awareness”. Both of these properties, from a neuroscience perspective, are emergent properties of “consciousness”, and consciousness is itself an emergent property of millions of years of biological evolution.
On the other side of the debate are computer scientists who believe (without proof or practical results) that it will someday be computationally possible for AGI (Artificial General Intelligence) to mimic human consciousness, but that is only speculation. Most neuroscientists disagree that computational consciousness is achievable in any foreseeable future, though many remain open-minded about future advances in computing power and new scientific discoveries far beyond where we are today.
However, what every neuroscientist seems to agree on (as do the computational scientists) is that generative AI is no more than a powerful auto-completion engine: it has no AGI capability, nor are generative AIs “conscious” or “aware” of anything, so they have zero “understanding”. OpenAI has also stated this and has asked people not to exaggerate the general capabilities of generative AI.
ChatGPT is not actually “confused” in the human sense. ChatGPT is simply predicting text based on prompts, and when you add enough noise and misinformation to the prompts, ChatGPT will produce nonsense. It is not “confused” the way a living being is confused; it is simply “garbage in, garbage out”. Furthermore, ChatGPT is not “aware” of anything; it is just trying to output a completion.
Yann is a computer scientist, not a neuroscientist, so he falls into the same camp as many computer scientists this decade. Quoting computer scientists who have no formal training or practice in biological neuroscience on what constitutes “awareness”, “understanding”, or “consciousness” is not going to fly very far in a debate where the core ideas and concepts are more neuroscientific than computational in nature.
You are entitled to your view, @prescod, of course, but that view is a “minority view” held by computer scientists and not held by the vast majority of neuroscientists. This is a site for software developers, not philosophers. Have any OpenAI API code to share? That is what we are supposed to be doing here, BTW: developing code.
My formal training is in electrical engineering and applied physics, with a focus on computer science, systems engineering, and networking. My view, which I have expressed, is the neuroscientist majority view: “consciousness” is an emergent property of millions of years of evolutionary biology, and therefore “awareness” and “understanding” are emergent properties of that same evolutionary biological process.
Putting all that aside, the harsh truth of the matter is that ChatGPT is just blah-blah-blah, predicting the next sequence of text based on its massive underlying LLM. It does not have a “superficial understanding”; it simply has no understanding, any more than a typewriter understands the words typed on it. The typewriter does what it is instructed to do. ChatGPT does what it is instructed to do, which is to predict text and create completions from those predictions, little different from the auto-complete in your favorite text composition app.
HTH