I am a student currently enrolled at PVIS High School in Vanuatu. I am undertaking a research project that explores how intelligence is defined and whether AI is actually intelligent. Considering your expertise, I am reaching out to ask whether you would be willing to provide insights by responding to a few questions related to the subject.
Your valuable input would significantly contribute to the depth and breadth of my research, and I am genuinely appreciative of your time and consideration in this matter.
I look forward to the possibility of engaging with you on this insightful endeavor. Please let me know if you are available for a brief discussion or if an alternative method of communication would be more convenient for you.
I hope the community will forgive me for saying this, but most people here are engineers (developers) focused on applying the technology, not specialists in creating it.
In any case, this is the subject of debate and is highly controversial, even amongst leading Machine Learning researchers.
The LLM architecture is limited because it is trained mostly on text, and some argue that this training misses out on a lot of “human experience” learning. For example, a baby learns to walk and carry things without the help of any books!
Ilya Sutskever thinks LLMs will achieve AGI despite the apparently simple technique of next-token prediction, albeit applied to a colossal corpus of text:
People like Yann LeCun disagree emphatically:
It’s going to be fascinating how this plays out …
The question on the definition of intelligence, and whether AI is actually intelligent is a philosophical one. A philosophy book I read many many years ago that has shaped my thoughts today is this one by John Searle:
Thanks.
Since you guys are developers, would you mind explaining how you develop AI and maybe a little of how it works? (No need for a super detailed explanation.)
Also thanks for the vids.
pretrain a machine-learning net on a huge corpus of text, rewarding it for predicting the next word in a sequence (see the sketch after this list)
see that the model is then able to predict other plausible sequences of words
fine-tune it on millions of chatbot-style questions and answers
see the predictions come out like answers from a chatbot that was made to pretend it is a person or an entity
continue collecting chats from the general populace, ranking them with low-skilled knowledge workers, infusing the model with ever more artificial refusals, and watch the quality decline.
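To make the first few steps concrete, here is a rough Python sketch of the two training phases described above, using the Hugging Face transformers library. The model name ("gpt2") and the toy strings are stand-ins for illustration only, not the actual recipe any particular lab uses; both phases use the same next-token cross-entropy loss, just on different kinds of text.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Small stand-in model; real systems are vastly larger.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

# 1) Pre-training objective: predict the next token of raw text.
text = "The cat sat on the mat and"
batch = tokenizer(text, return_tensors="pt")
out = model(**batch, labels=batch["input_ids"])  # loss = next-token cross-entropy
out.loss.backward()
optimizer.step()
optimizer.zero_grad()

# 2) Chat fine-tuning: same objective, but on Q&A-formatted conversations.
chat = "User: What is the capital of France?\nAssistant: Paris."
batch = tokenizer(chat, return_tensors="pt")
out = model(**batch, labels=batch["input_ids"])
out.loss.backward()
optimizer.step()
optimizer.zero_grad()
```

In practice each step runs over millions of batches rather than a single example, and the later ranking/refusal stage uses a separate reward model and reinforcement learning, which this sketch does not cover.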
The language AI does nothing when you haven’t told it to predict words, and it hasn’t learned anything from the experience. It has no free time in which to ponder the ethics of petting a kitty or choking it with its robot hand, or whether it should press its own off button, because the AI engine idles at 0% CPU.
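A minimal sketch of that point, again using the transformers library with "gpt2" as a stand-in: the model only computes anything when you explicitly ask it to generate tokens, and nothing in its weights changes as a result of the conversation.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "Should I pet the kitty?"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():  # no gradients, so no learning happens here
    output_ids = model.generate(**inputs, max_new_tokens=20)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
# Between calls to generate() the model is just a static file of weights;
# there is no background process "thinking" about anything.
```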