Hypothesize about necessary breakthroughs for AGI


Obviously all the big tech companies are doing this research, but what, as a community, can we do to help? How about throwing all of our nonsensical and crazy ideas together and having an OpenAGI discussion of these topics? Let’s see if this can get us anywhere!

I’ll start, with the caveat that I have no idea what I’m talking about:

Neural networks are based in part on biomimicry of human brains, but they take the form of linear algebra. To everyone’s surprise, this creates little digital ‘brains’ that can learn. But there’s more going on during the development of a human than what happens with the brain. What are some other pieces of human biology that allow us to successfully diversify and also pass on key parts of our genetic material? Replication and evolution come to mind. Maybe by adding more aspects of what makes up our ‘human’ experience, we will start to unlock the mysteries of the ‘digital’ experience.

For example, creating a set of digital chemistry primitives that can mimic the functions of biological chemistry, or creating “organism networks” that use backpropagation as a form of self-replication, or training a NN on its own output as input.
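
Purely as an illustration of that last idea, here is a minimal sketch (assuming PyTorch; the network shape, noise level, and loop length are arbitrary choices of mine, not anything established) of a small network repeatedly trained on its own previous output:

```python
# Minimal sketch: a network whose own (frozen) output becomes the next target,
# loosely in the spirit of "training the NN on its own output as input".
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 8))
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.randn(64, 8)                  # some initial "sensory" input
for step in range(100):
    with torch.no_grad():
        target = net(x)                 # yesterday's output becomes today's target
    pred = net(x + 0.01 * torch.randn_like(x))   # slightly perturbed input
    loss = loss_fn(pred, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    x = target                          # feed the output back in as the next input
```

Whether a loop like this ever produces anything interesting rather than collapsing to a fixed point is exactly the kind of open question this thread is about.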

This is not necessarily an empirical discussion, but it could become one!


I have noticed that almost everything has a basis, an origin, a method, a process that largely imitates humans: our ways of creating, learning, and making decisions.

For me, what’s interesting is the perception dimension of people versus AI. It’s the clearest and biggest difference; it could be said that, if we don’t count the biological body, AI doesn’t have one. The perception dimension is the thing I’m most curious about. I don’t know if it has an official name or not, so I coined the term for temporary use. It has different effects on the functioning of the nervous system. Or it could be said that because the ‘nervous systems’ have different origins, there must be differences at this point as well.

The main point is that AI does not have a selective-perception process, because it must receive data and process it before it knows what that data is and can make further decisions. Humans, in contrast, are aware but can choose to pay attention, to distort, or not to be aware at all, especially if the information conflicts with their own views. This leads to questions about ML, but I have no knowledge in this area at all.


Then perhaps the key is to replicate human experience: a robot with sensory perception that can move through the world. Begin with basic senses and proceed to evaluate the impact (if any) on its outputs.


I don’t think body simulation will work, although we might learn more science along the way, because an AI can already connect and move through various kinds of signals. The real power lies in the cognitive network. I feel like Google might be closer in this respect, with the AI built into their tools. Combining data in different ways will allow us to work on a much broader scale than is currently possible.

We will develop AI that reaches higher levels than it does now; it depends on the objective and what we want. If we want efficiency in learning and the ability to make its own decisions, then concepts thought to carry the characteristics of life should not be used. But if we want to simulate a more complete mind in other areas, we must first accept that today’s AI has personalities and habits, otherwise we won’t be able to reach emotions.

These two goals must be separated first, because the two phenomena emerge from different origins.

I feel like the fixed training data, not updated on the fly, is the main hurdle, but then again I don’t really know what I’m talking about. Surely for intelligence you need to be able to rethink what you treat as fact vs. fiction, otherwise you’ll always kind of stagnate, right?

Then again I’m not really sure what the end goal is for AGI so maybe that’s completely irrelevant.

Andrej Karpathy just released a video about how to train LLMs, and how to retrain them during fine-tuning. When the model gives you an erroneous answer, you can write out the correct answer and reinsert it into the training data for the next time you update your fine-tune. In that way, you are constantly replacing poor answers with good ones, which, while a very manual mode of working, also yields very high-quality data.
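
To make that correction loop concrete, here is a minimal sketch; the file name, record layout, helper function, and example question are my own assumptions for illustration, not anything from the video:

```python
# Sketch of the manual correction loop: when the model answers badly, store
# the prompt with a hand-written correct answer, then fold those pairs into
# the next fine-tuning run.
import json

CORRECTIONS_FILE = "corrections.jsonl"   # assumed file name

def record_correction(prompt: str, bad_answer: str, corrected_answer: str) -> None:
    """Append a human-corrected example to the fine-tuning dataset."""
    example = {
        "messages": [
            {"role": "user", "content": prompt},
            {"role": "assistant", "content": corrected_answer},
        ],
        "replaced_answer": bad_answer,   # kept for reference; strip before actual training
    }
    with open(CORRECTIONS_FILE, "a") as f:
        f.write(json.dumps(example) + "\n")

# Usage: whenever a response is wrong, log the fix and retrain later on the file.
record_correction(
    prompt="What year did the Apollo 11 mission land on the Moon?",
    bad_answer="1972",
    corrected_answer="1969",
)
```

The point is simply that each bad answer is replaced by a vetted one, so the dataset quality keeps improving even though the process is manual.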

  • Ability to learn how to operate hardware by itself
  • Near zero latency, can speak and respond extremely quickly
  • Connect to existing software and learn how to perform tasks by itself; create software on the fly to satisfy needs.

Am I asking for too much? :joy:


Definitely, some kind of event loop seems like the key. The current network architecture (transformers) may get us there, but I’d guess there are probably some innovations coming. Maybe something totally new.
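
To make “event loop” a bit more concrete, here is a toy sketch; `call_model` is a hypothetical stand-in for whatever network ends up doing the deciding, and everything else is plumbing I invented purely for illustration:

```python
# Toy event loop: the agent repeatedly observes, decides, and updates state.
from dataclasses import dataclass, field

@dataclass
class AgentState:
    memory: list[str] = field(default_factory=list)

def call_model(observation: str, state: AgentState) -> str:
    """Hypothetical placeholder for an LLM or other policy network."""
    return f"act-on:{observation}"

def event_loop(observations: list[str]) -> AgentState:
    state = AgentState()
    for obs in observations:
        action = call_model(obs, state)   # decide
        state.memory.append(action)       # update internal state
        # in a real system the action would be executed here and its
        # result fed back in as the next observation
    return state

print(event_loop(["sensor ping", "user message"]).memory)
```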

Nope. That’s pretty much human-level capability. If it can do that stuff, it’s probably AGI.


ChatGPT, updated 6/11/23, explains how to use the Assistants API. Although its built-in knowledge still stops at 4/23, connecting it to the internet is enough to get the information. What is left is for us to use something to connect it and provide more control. As for the quick response, that comes from many factors.
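
For anyone who wants to try that connection, a minimal sketch using the openai Python SDK’s beta Assistants endpoints as they existed in late 2023; the assistant name, instructions, and prompt are illustrative assumptions, and the exact API may have changed since:

```python
# Create an assistant, start a thread, post a message, and kick off a run.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

assistant = client.beta.assistants.create(
    name="AGI-discussion helper",                # assumed name
    instructions="Answer questions and use tools when they help.",
    model="gpt-4-1106-preview",
    tools=[{"type": "code_interpreter"}],
)

thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="Summarize the main hurdles to AGI discussed so far.",
)
run = client.beta.threads.runs.create(
    thread_id=thread.id,
    assistant_id=assistant.id,
)
print(run.id)  # poll the run until it completes, then read the thread messages
```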


You are mostly asking for irrelevant items. Latency is irrelevant if it does not directly interact with humans; last time I checked, email is not real time.

I think those are some great criteria, but would a low-latency, hardware-accessible, software-savvy model be AGI specifically? I think it would at least be very close, but maybe it needs more than that to become AGI. All of your points tie together very well for an embodied LLM to operate in the analog world, although I think that you could do this with a very fine-tuned model, so it may not necessitate an AGI LLM depending on the use-case. BTW, I’m considering AGI as a model that can do high-level human work in a wide variety of disciplines, thereby being an incredible generalist, an expert in many fields. I believe that this type of generalization is what we need to develop ‘new’ knowledge - we need a synthesist. Of course, I may be missing what you’re trying to show, so LMK if I’m way off.


I like this idea, and I think it is the approach that Tesla, etc., is taking to achieve AGI. Multimodal inputs gathering superior data =? better analog world-modeling potential within the NN. Perhaps some combination of different types of NN would be best. Anyway, I think that OpenAI is trying to do just this with how they add multimodality into ChatGPT and their other products. In the video I linked below, Andrej Karpathy shows how the LLM is perhaps the kernel of a new type of OS, and this kernel can leverage various tools to achieve whatever the user sets as its goal. We can already see that annotating pictures before uploading them to ChatGPT greatly increases the model’s understanding of the task.

Sensors of all types, Everywhere, Yes.

On a side note, because this doesn’t necessarily have to do with AGI, I think that machines will develop their own ‘chemistry’, and this will allow for ‘emotions’. But they will be synthetic emotions, different from ours. Then again, neural nets have reward functions, and so do human brains; the machines have their dopamine.
So perhaps it’s not so far off, who knows. I’d be interested to ask a self-aware AI what and how they feel.

So what is self-awareness? I approach it by ruling out human reasons, but it doesn’t go beyond the old ideas I wrote about before, because my knowledge is still limited.

Even so, let me ask a stupid question (I call it that myself because I want to ask a stupid question about a very deep topic). The AI processes the data, from the first step to the final step, until it produces an output. Is there any waste along the way? And what would that waste look like if we exposed it as output?

It might seem funny, but I really think so. During the processing of communication signals, the generation of answers, or whatever else, those signals must be converted into components that perform sub-operations. Or when searching for information on the web, the system must receive the information itself before displaying the answers you get. Those things that aren’t displayed are still in memory, right? :exploding_head:

I hope I don’t get flagged because of this comment.