Can this method be used to create actual AGI?

Hello folks,

Can the following method be used to create actual AGI, either via virtual artificial neurons/neuronal networks (the current kind) or via synthetic (man-made and physical) artificial neurons/neuronal networks?

The following is strictly 'my' understanding of the world and the mind. Feel free to correct what I said in any way, partly or fully.

The human mind fundamentally perceives everything it sees, hears, smells, and tastes as either 1) a quantity or 2) a quality. Everything. Let's call everything that isn't a 'quantity' a 'quality' for the sake of simplicity.

And the mind perceives 'qualities' either as 'object' qualities (physical objects, including living things) or 'non-object' qualities (various characteristics like life or death, various behaviors and activities, natural phenomena, etc.). And it sorts the physical objects I mentioned above into those that have life and those that do not, by perceiving whether the object has the non-object quality of life. It does this by having 'combo' neurons that can perceive and analyze whether something has both object and non-object qualities.
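To make this concrete, here is a minimal sketch in Python (the categories, example percepts, and lookup tables are purely illustrative assumptions of mine, not a claim about how real neurons encode anything):

```python
from enum import Enum

class Kind(Enum):
    QUANTITY = "quantity"
    OBJECT_QUALITY = "object quality"          # physical objects, incl. living things
    NON_OBJECT_QUALITY = "non-object quality"  # life/death, behaviors, activities, phenomena

# Illustrative lookup table standing in for learned perception.
PERCEPTS = {
    "seven":   Kind.QUANTITY,
    "lion":    Kind.OBJECT_QUALITY,
    "rock":    Kind.OBJECT_QUALITY,
    "running": Kind.NON_OBJECT_QUALITY,
    "life":    Kind.NON_OBJECT_QUALITY,
}

# A "combo" check: an object quality that also carries the
# non-object quality of life counts as a living thing.
LIVING = {"lion"}

def classify(percept: str) -> str:
    kind = PERCEPTS.get(percept)
    if kind is None:
        return f"{percept}: unknown"
    if kind is Kind.OBJECT_QUALITY:
        alive = "living" if percept in LIVING else "non-living"
        return f"{percept}: {kind.value} ({alive})"
    return f"{percept}: {kind.value}"

if __name__ == "__main__":
    for p in ["seven", "lion", "rock", "running"]:
        print(classify(p))
```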

So if we can create virtual or physical artificial neurons/neuronal clusters or networks that can perceive and classify/analyze what they see or hear, or what is input into them, in the above-mentioned way, that would be a great first step toward AGI, in my opinion. What do you all think?

I will post more later.

I think that intelligence via consciousness is an emergent property of millions of years of biological evolution, and so consciousness, as in how living beings are "conscious", is not achievable with artificial neural networks.

This is especially true with generative AI like GPTx, which is nothing but a fancy auto-completion engine based on a massive large language model.

Many disagree, but I have talked to a number of leading neuroscientists about this over the years, and they all share the same worldview as I do.

As all these leading neuroscientists say, "consciousness" is an emergent property of millions of years of biological evolution and is not something that can be replicated in artificial NNs.

Of course, many disagree with this view, which is perfectly OK.

HTH

Maybe artificial physical neurons/networks can do it. Some of them could perhaps be created to 'hold' (where I come from this isn't a scare quote, just used for highlighting something :)) audio, video, or the equivalents for touch, taste, etc.; some to generate a 'self-model' for the AI; and some to give it the idea that it is 'aware'. It doesn't have to actually be aware, just be made to think it is aware. Together, IMHO, these could be used to create artificial consciousness.

Anyway, that is much further away than, say, ANGI (artificial narrow general intelligence), which these companies are on a path to, and which we may see by 2025 via GPT-5 or GPT-6.

As for perceiving the stuff the system is seeing, or that is being input into it, I think enabling it to perceive those things as qualities or quantities is a good first step. Something that could be useful after 2025, I think :slight_smile:

I wrote down some more as a continuation of the quality-quantity theory, which I humbly want to share:

ChatGPT is a 'large language model'. How about developing a 'large concept model' to advance future versions of GPTs or similar systems more quickly and better? Because a human mind is actually a 'large concept model': it figures out concepts from what it is hearing, seeing, etc., and then figures out 'interactions' between those concepts. Human thinking, or intelligence, can in a way be summarized as such too.

To create a large concept model, the AI people are building could be made to perceive the stuff it is looking at or hearing, or the stuff being input into it, as 'concepts'. If somebody asks it 'What is your favorite color?', it needs to know what 'what', 'favorite', and 'color' mean. These are actually concepts (for lack of a better word). So it needs to know these concepts.

It needs to perceive the things it sees/hears, or the things input into it, as 'concepts', and also figure out 'what kind' of concepts they are and 'how they interact with each other'. And, ultimately, how to make concepts interact with each other to get what it wants.
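As a toy illustration of that first step (the concept entries below are made up just to show the idea):

```python
# Toy "concept store": each known word maps to a concept with a kind.
CONCEPTS = {
    "what":     {"kind": "question-word"},
    "favorite": {"kind": "preference"},
    "color":    {"kind": "attribute"},
}

def perceive_as_concepts(utterance: str) -> dict:
    """Map each known word of the input onto its concept entry."""
    found = {}
    for word in utterance.lower().strip("?").split():
        if word in CONCEPTS:
            found[word] = CONCEPTS[word]
    return found

print(perceive_as_concepts("What is your favorite color?"))
# -> entries for 'what', 'favorite', and 'color'
```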

Regarding how to enable it to perceive the stuff it sees/hears as concepts: I think an alpha-numero-symbolic or computer-code-based system needs to be developed, with each concept assigned a symbol, a number, a code, or some combination of these.

But if the things it sees/hears/receives as input are classified as concepts, that will lead to many, many concepts, and that will require developing a staggering number of alpha-numeric codes, one per concept. So, to simplify this, the stuff in the world that it sees/hears needs to be classified; in other words, the 'concepts' need to be classified. First, all concepts need to be classified as either 'quantities' or, for everything that isn't a quantity, as 'qualities', for simplicity's sake. And a separate, or slightly different, code-based or alpha-numero-symbolic system needs to be used for a concept that is a 'quantity' versus one that is a 'quality'. Next, the AI system needs to classify 'qualities' as living, non-living, or dead; again, these concepts need to be assigned a suitable code/number/symbol based on whether they are living, non-living, or dead. Then it needs to classify living concepts as plant-based, animal, or human, and non-living concepts as object or non-object. Doing this is important because I strongly think this is how the human brain makes sense of the world too.
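As a minimal sketch of what such a hierarchical coding scheme might look like (the segment format and labels are my own invention, purely for illustration):

```python
from typing import Optional

def concept_code(name: str, *, quantity: bool = False,
                 life: Optional[str] = None,       # "living" | "non-living" | "dead"
                 subtype: Optional[str] = None) -> str:
    """Build a code with one segment per classification step."""
    if quantity:
        return f"QTY:{name}"                # quantities get their own scheme
    segments = ["QLT"]                      # everything else is a quality
    if life is not None:
        segments.append({"living": "LIV",
                         "non-living": "NON",
                         "dead": "DED"}[life])
    if subtype is not None:                 # plant/animal/human, or object/non-object
        segments.append(subtype.upper())
    segments.append(name)
    return ":".join(segments)

print(concept_code("five", quantity=True))                        # QTY:five
print(concept_code("lion", life="living", subtype="animal"))      # QLT:LIV:ANIMAL:lion
print(concept_code("rock", life="non-living", subtype="object"))  # QLT:NON:OBJECT:rock
```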

I will post more later. Thanks.

To be able to perceive something as living, and to perceive whether that living thing is an 'animal' or a 'human', it needs to be assigned a code (a combination of alpha-numero-symbolic code or computer code, as I mentioned before) that represents, first, 'quality' and 'life', plus a separate code for animal or a separate code for human. Taken together, the overall combined alpha-numero-symbolic code representing quality, life, and animal-or-human will represent that animal or human in the system. If the animal is doing things like growling or moving, an additional piece of code that represents that activity needs to be tacked onto the animal's representative code.
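Continuing the same invented code format from the sketch above, tacking an activity onto a living thing's code might look like:

```python
# Base codes in the hypothetical QLT:LIV:... format from before.
BASE = {
    "lion":  "QLT:LIV:ANIMAL:lion",
    "human": "QLT:LIV:HUMAN:human",
}

def with_activity(name: str, activity: str) -> str:
    """Tack an activity segment onto a concept's base code."""
    return BASE[name] + ":ACT=" + activity

print(with_activity("lion", "growling"))  # QLT:LIV:ANIMAL:lion:ACT=growling
print(with_activity("lion", "moving"))    # QLT:LIV:ANIMAL:lion:ACT=moving
```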

As for 'interactions' between concepts: take, for example, teaching the AI system that a lion kills humans. A code for 'kills' needs to be developed, which needs to contain the code for death. Teaching the system that lions kill humans then becomes easier: teach it that the 'interaction' between the concept of lion and the concept of human results in death. This is how interactions between concepts can be taught, I think.
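A toy sketch of teaching one such interaction (the lookup table and codes are again just illustrative assumptions):

```python
# Hypothetical code for the concept of death; 'kills' is defined so
# that its interaction result contains this code.
DEATH = "QLT:NON-OBJ:death"

# A toy interaction table: (subject, verb, object) -> resulting concept code.
INTERACTIONS = {
    ("lion", "kills", "human"): DEATH,
}

def result_of(subject: str, verb: str, obj: str) -> str:
    """Look up what the interaction between two concepts results in."""
    return INTERACTIONS.get((subject, verb, obj), "unknown interaction")

print(result_of("lion", "kills", "human"))  # QLT:NON-OBJ:death
```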

Will post more later…

Hi @itskarthik

This is a community for developers who use the OpenAI API.

In the spirit of software development, kindly post code rather than philosophy.

If you have an idea, please code it.

Thanks!

:slight_smile: