A few months ago, I had a similar idea to this. After listening to Stephen Wolfram discussing his theory of everything, I began to wonder the following: if the universe is computational at its base, is there a way to harness this? The more I thought about it, the more it seemed possible, at least with a wide variety of matter.
I came up with what could eventually become a theorem of the computation of matter: any well-characterized physical system can be used for computation. I stopped thinking about the idea because I couldn’t think of a way to do backpropagation. It seems that the research above begins to address that issue.
I have been immersed in quantum physics, so I will say that, in essence, the universe is a giant “wave function”. Function implies mathematical calculation. Rather than fitting the math problem of backpropagation to the universe’s intrinsic qualities, it might make more sense to figure out what kinds of math the universe naturally does.
Anyways, I think that quantum computing, which harnesses these fundamental natural laws of the universe and wave functions, meets the expectations you have. Also, this article is nothing new; we’ve used air and water to create mechanical computers, including a tank of ripples serving as memory. The idea of using other materials for computation has already been explored in the form of memristors and photonic transistors. Sorry to rain on their parade, but that article is basically a high school science fair project compared to quantum computing.
I was thinking of something a step lower than this yesterday, which is dedicated chips that have a neural network embedded in them by design. Think of chips that would have GPT-3 embedded in them by design and not as code/emulation. They could be so much more energy efficient.
The tech in this article takes things to the next level by reducing the dependence on semiconductors.
Yes, I think that is a good idea. There are many possibilities if we can start thinking in unconventional ways about processing. With your thought related to GPT-3, that is the biggest issue with it that I can see: energy and hardware requirements. I would definitely shell out some money for GPT-3 on a chip! You could probably patent that idea.
Coming back to this, my sci-fi writer’s brain has changed its mind.
The downside to hardware-based neural nets is that they are not changeable. But they should be much faster and lower energy, to the point that they could work on ambient heat/sound/light. So long as your task doesn’t change (like recognizing cats or burglars for a security camera), this embedded NN could be immensely powerful and fast, not to mention ludicrously cheap to run.
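The fixed-weights property can be illustrated in software: once the network is “burned in”, inference is just multiply-accumulate over constants, with no gradient machinery anywhere. A minimal pure-Python sketch; the weights here are invented for illustration, standing in for values a hardware NN would fix at fabrication time:

```python
# Inference with frozen ("baked-in") weights: no training machinery at all.
# These constants are illustrative, not from any real trained model.
W = [[0.5, -0.2],
     [0.1, 0.9]]
B = [0.0, -0.1]

def relu(x):
    return x if x > 0.0 else 0.0

def infer(x):
    # One dense layer, y = relu(W @ x + b), written out longhand.
    return [relu(sum(w * xi for w, xi in zip(row, x)) + b)
            for row, b in zip(W, B)]

print(infer([1.0, 2.0]))
```

Because nothing here is mutable state, the whole thing could in principle be etched into analog hardware, which is the appeal of the article’s approach.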
This could also figure into AGI one day; if you have a robot that is literally incapable of reprogramming itself, then its operational parameters and behaviors will be fixed. This is both a pro and a con. The pro is that the robot’s behavior (or AGI based on physically defined NN) will never be able to outgrow its original programming. For things like domestic helpers and factory workers, this is probably fine, since the environments are fixed as are the task sets.
Yes, I agree. You could also imagine a combination of static networks that do various things, connected with a more conventional processing system that can learn and choose to override/alter the static outputs.
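That combination can be sketched in a few lines: a frozen “static” network proposes an answer, and a small trainable gate learns when to override it. Everything below (the gate rule, learning rate, fallback) is invented purely for illustration, not any specific architecture:

```python
def static_net(x):
    # Stand-in for a fixed hardware network: its "weights" can never change.
    return 1 if x[0] + 0.5 * x[1] > 1.0 else 0

class OverrideGate:
    """A small learnable layer that decides whether to trust the static output."""
    def __init__(self):
        self.w = [0.0, 0.0]
        self.b = 0.0

    def trust(self, x):
        return self.w[0] * x[0] + self.w[1] * x[1] + self.b >= 0.0

    def update(self, x, static_was_right):
        # Perceptron-style nudge: learn to distrust inputs where the
        # static net keeps getting it wrong.
        target = 1.0 if static_was_right else -1.0
        lr = 0.1
        self.w = [wi + lr * target * xi for wi, xi in zip(self.w, x)]
        self.b += lr * target

def decide(x, gate, fallback=0):
    # Use the frozen chip's answer unless the learned gate overrides it.
    return static_net(x) if gate.trust(x) else fallback
```

The cheap, fast part stays in fixed hardware, and only the small gate needs conventional, reprogrammable compute.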
Much appreciation to whoever follows through on this. Is there anyone willing to summarize the major topics discussed herein and in the comments in a single blog post? These topics are all quite important, and I do not have the personal time. I will tweet it out, and I know it will be seen by the most crucial people who could see it, because I actively prompt-engineered my own Twitter.
Summarize the major topics discussed here within and the comments in a single blog post:

Blog Post:

Intro: There was a discussion on the OpenAI forum about how everyday objects can run artificial intelligence programs.

Major Topics:
-The article discusses how everyday objects can be used to run artificial intelligence programs.
-The article mentions that this is not a new idea, and that there are already devices that use air and water to create mechanical computers.
-The article talks about how quantum computing meets the expectations of running artificial intelligence programs.
-One person brought up the idea of dedicated chips that have a neural network embedded in them by design.
-Another person talked about the possible disadvantages of hardware based neural nets.
-The article ends with a discussion on the feasibility of patenting pre-trained chips with GPT-3 (or something similar) embedded in them.
I don’t have the energy to write a summary so I got Davinci to take care of it for me haha. Done in about 2 minutes of adjusting prompts.
Edit: some wording should be changed from article to thread, but c’est la vie. Like I said, 2 minutes.
Oh I don’t know about that. Maybe if you provided the source material for the prompts and linked the discussion in the blog. I mean, I was considering making a thread-summarizer app to pull all kinds of insights, but I have things to do today and building a forum scraper didn’t sound like fun lol
The prompt was quick and more of an exercise in summarization than an actual blog post.
Here is a really good video on spiking neural networks. It covers many important topics, including transferring ANNs to SNNs (spiking neural networks) and training SNNs from scratch. It is fairly up to date with neuroscience. One problem I have with these networks is that they do not utilize saccades and don’t have neurons that fire in the absence of stimuli (like in the retina and other places). Maybe these things are not practical, but I think they should at least be used in the training phase, as they will likely result in a more efficient and robust network.
On the topic of neuromorphic chips and dedicated ASICs for inference, it seems like nature has already figured out how to use every bit of the neuron as part of its computation.
The power efficiency advantage that dedicated inference chips have is phenomenal. Can you imagine running GPT-3 on 4 watts of power? Sure, you might not be able to fine-tune it, but who cares? If it’s that cheap and fast, you can afford to use many different prompts. It reminds me of Star Wars when young Anakin is building C-3PO and he’s just got some plug-and-play droid brains. I think this is the technology that will enable plug-and-play intelligent robots. And again, the inflexibility of these chips may also be a huge benefit; their abilities will be constrained by hardware, so your toaster will never become sentient.
Do you think there is a way to make some kind of self-fine-tuning model? Like a way to cycle/adjust models or something until there is a satisfactory output decided by the unit (or even just by measuring the outcome)? Some sort of active inference model that can fine-tune itself? What would you need for something like that? Or what kind of data structure would be able to swap out “skills” like that?
Human brains fine-tune themselves based strictly on usage frequency, so in theory, the answer is “yes”. The problem is that we don’t have a specific architecture for human brains. We have connectomes for smaller brains, which may or may not give us a blueprint for larger brains. Basically, for an artificial neural network, you’d have to create types of layers (and arrange said layers) in such a way that training can happen strictly based on usage frequency. This could conceivably become a type of “online learning” for artificial neural networks. I doubt that we could see this become a materials-based chip though, at least not for the foreseeable future. If memristors ever make a comeback, I would be willing to bet they could handle the task of online hardware-based learning.
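One candidate rule for “training strictly based on usage frequency” is Hebbian learning: connections between co-active neurons strengthen, and unused connections decay, with no backpropagation anywhere. A toy sketch; the learning rate and decay constants are arbitrary illustrative values:

```python
# Toy "usage frequency" learning: a Hebbian update with decay, no backprop.
def hebbian_step(w, pre, post, lr=0.1, decay=0.01):
    # Strengthen each connection in proportion to how often its two
    # neurons are co-active; mild decay makes unused connections fade.
    return [[wij + lr * p * q - decay * wij
             for wij, q in zip(row, post)]
            for row, p in zip(w, pre)]

w = [[0.0, 0.0],
     [0.0, 0.0]]
# Neuron 0 fires on every "use"; neuron 1 never does.
for _ in range(10):
    w = hebbian_step(w, pre=[1.0, 0.0], post=[1.0, 1.0])
```

After a few repetitions, the connections out of the frequently used neuron have grown while the unused neuron’s connections stay at zero, which is the “use it or lose it” behavior described above.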
It looks like any ANN can be transferred over to an SNN (spiking neural network). Or SNNs can even be trained from scratch and do online learning, with anything that ANNs can do. I linked to a short course on the state of the art of these concepts in a post above (transferring an ANN to an SNN, training an SNN from scratch, and the learning algorithm). Very worth your time to watch if you haven’t.
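For a sense of how the ANN-to-SNN transfer works, the usual approach is rate coding: a ReLU activation maps onto the firing rate of an integrate-and-fire neuron. A minimal sketch (the threshold and step count are arbitrary choices here):

```python
def relu(x):
    return max(0.0, x)

def if_neuron_rate(input_current, steps=1000, threshold=1.0):
    # Integrate-and-fire neuron: accumulate input each timestep; when the
    # membrane potential crosses threshold, emit a spike and subtract the
    # threshold ("reset by subtraction" keeps the rate code accurate).
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += input_current
        if v >= threshold:
            spikes += 1
            v -= threshold
    return spikes / steps

# For inputs in [0, 1], the spike rate approximates the ReLU activation.
for x in [-0.3, 0.2, 0.7]:
    print(x, relu(x), if_neuron_rate(x))
```

Negative inputs never reach threshold, so the rate is zero, just like ReLU; positive inputs produce a spike rate that converges to the ReLU output as the number of timesteps grows.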