After your post @N2U I thought RC was dead, but I found some recent stuff behind the IEEE paywall. One paper from 2020: “Recent Advances in Reservoir Computing With A Focus on Electronic Reservoirs”
It’s ALIVE!
It’s very alive
Optical Amplifiers: (source).
Data Processing: (source).
Neural Networks: (source).
Note: some of these articles may be behind a paywall; you may need help from the raven holding a red key
From the keywords listed in “Recent Advances in Reservoir Computing With A Focus on Electronic Reservoirs”
(emphasis is mine)
neuromorphic computing, artificial neural networks, reservoir computing, echo-state networks, liquid state machines, delayed feedback systems, pattern recognition, digit recognition, speech recognition, electronic reservoirs, FPGA, analog integrated circuit, memristor, edge of chaos
Liquid State Machines and edge of chaos, I’m in!
My 3 free monthly IEEE downloads already ran out, so I’ll have to take a look at the Optical Amps one later.
Impressed at how AI workloads are shifting away from general-purpose GPUs/CPUs and entering the FPGA and high-speed space.
Good emphasis there, you made me laugh
I’ll help you out on this when I’m back from the lab:
First of all, thank you for taking the time to read and review my work. I have a few points to make.
Yes, I know RC already exists; it’s not my invention. But I think we can make great improvements in that direction. RC is not exactly a way of training neural networks but an architecture where all neurons are connected with each other. It differs from traditional RNNs because in RNNs the dominant connections go forward (the network is structured in layers), with some hidden states that act as a memory. RC, on the other hand, is more similar to our brain, since there is no dominant direction in the information flow.
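To make the distinction concrete, here’s a minimal echo state network sketch in numpy (my own illustration, not code from my paper; the sizes and the 0.9 spectral radius are arbitrary choices). The point is that both the input and recurrent weights are random and fixed, with no privileged forward direction:

```python
import numpy as np

# Minimal echo state network (ESN): nothing inside is trained, it is only driven.
rng = np.random.default_rng(0)
n_in, n_res = 1, 200

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))     # input weights: random, fixed
W = rng.uniform(-0.5, 0.5, (n_res, n_res))       # recurrent weights: random, fixed
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # keep spectral radius below 1

def run_reservoir(u_seq):
    """Drive the reservoir with an input sequence and collect its states."""
    x = np.zeros(n_res)
    states = []
    for u in u_seq:
        x = np.tanh(W_in @ np.atleast_1d(u) + W @ x)
        states.append(x.copy())
    return np.array(states)
```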
I also read about some attempts at training even the internal weights of the reservoir, but from what I saw it seemed quite difficult to converge towards the optimal parameter values, and I’m completely aware of this.
What I claimed as new in my network is exactly this control feature. Before starting to write, I spent some hours searching papers for similar concepts, and I wasn’t able to find anything that meant “control” in the same way. This doesn’t guarantee that the concept is entirely new, but as far as I knew it was the first, so I assumed it is new. Furthermore, in many papers mentioning “control”, the term was meant in a totally different way than in my work.
I would like to check in the paper you linked (if possible) whether the concept is the same or whether “control” is meant in another way. Unfortunately the paper has no public access, and trying to access it through my institution doesn’t give me permission.
Yes, this is true. The fact is that, even assuming the concepts of “control” and RC aren’t new, the union of the two is new. The innovation is in combining the two things. I could specify this better.
I know this is a bit outside the standard, and I see my work not as standard academic research but rather as posing some new ideas to the research community, letting others continue the work.
Another fact I can’t ignore is that I don’t get paid for doing this research. I get paid to build apps with LLMs and find customers for my company, so I can’t afford to spend months writing a paper, staying another two hours a day at the computer after three hours of study and four of work, with my eyes hurting. So I wrote this paper in two weeks (about 20 hours), summing the time I spent building the program on GitHub and writing the paper. (I think I will ask ChatGPT for help on future papers, to meet the research standard.)
I know perfectly well that in two weeks I couldn’t write something perfect, but I could write something that, once exposed to the outside world, could be continued by someone else.
OK, maybe this could be explained better, but I put one example in my paper (the one with an arithmetic calculation, the product operation, modeled by my net with global approximation). You can say that a single example doesn’t prove it holds in all cases. So why should we assume it holds in all cases? Because of what is written in the abstract: “This network, if built properly, is Turing Complete and can perform any calculation, since it inherits the properties of recurrent neural networks”.
To explain it better: RNNs are Turing complete (we know this from previous research), which implies that reservoirs are Turing complete too. We can prove that every RNN can be exactly substituted by a reservoir. The proof is simple: take a custom RNN; you can now build a reservoir with a set of parameters such that the connections are all feed-forward, apart from some “self-connections” that allow it to memorize a hidden state. This is exactly the same as your custom RNN.
So, since a subset of RNNs is proven to be Turing complete, and RNNs can be thought of as a subset of RCs, there must exist some RCs that are also Turing complete. We can say even more: the representability class (the class of globally modelable functions) of RCs must be greater than or equal to that of RNNs.
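Just to illustrate the embedding argument with a toy sketch (my own construction, with made-up sizes): any RNN’s recurrent matrix is itself a valid choice of reservoir matrix, so a reservoir with those exact weights reproduces the RNN’s state trajectory step for step.

```python
import numpy as np

# A vanilla RNN: h_t = tanh(W_h @ h_{t-1} + W_x @ u_t)
rng = np.random.default_rng(1)
n_h, n_in = 4, 2
W_h = rng.normal(size=(n_h, n_h))   # the custom RNN's recurrent weights
W_x = rng.normal(size=(n_h, n_in))  # the custom RNN's input weights

# "Reservoir" whose internal weights are simply the RNN's matrices.
W_res, W_in = W_h.copy(), W_x.copy()

h = np.zeros(n_h)   # RNN state
x = np.zeros(n_h)   # reservoir state
for u in rng.normal(size=(10, n_in)):
    h = np.tanh(W_h @ h + W_x @ u)      # the RNN update
    x = np.tanh(W_res @ x + W_in @ u)   # the reservoir update
assert np.allclose(h, x)  # identical trajectories: the RNN is a special-case reservoir
```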
Furthermore, if we also have the looping feature that the network can stop whenever it wants, we are potentially (here lies a small degree of uncertainty) able to model more complex functions with far fewer parameters than in standard RCs.
Yes, this is needed.
About the citations: I “invented” this network architecture not by reading many papers but by thinking entirely on my own (apart from knowing about the existence of RCs beforehand).
So how can I cite others’ work if my ideas didn’t come from reading others’ research?
How should I do this in practice?
Zenodo is an open repository for research, intended to allow anyone to publish their work, so it’s not peer reviewed. I can also publish non-reviewed content, and this is not prohibited.
This is what I read briefly; I hope I’m not wrong, but I will check more carefully. Anyway, thanks.
Thanks for the advice; the fact is that this is particularly hard for me. The vast majority of academic researchers (especially here in Europe) are not very open-minded about people who try to do things without a degree in the same field. Maybe in the USA it’s easier.
That’s only my point of view, but I really agree with Nassim Nicholas Taleb (if you know of him) when he criticises the academic world, saying that institutional science is quite inefficient and self-referential.
They look at credentials, hard work, and proofs, not so much at ideas and how those ideas could be applied.
Imagine a world where I think of an idea, I publish it, other people more expert than me in the field work on it, and then we share the “revenues”. That would be fair.
I don’t have the time or the financial means to do all the hard work and testing on my own.
It’s not so strange that the majority of inventions start not in academic institutions but in businesses, and this is because business is more efficient than academia at finding and exploiting ideas (this is my humble opinion, which I assume is probably true, but I’m not sure). Also because in business you apply creativity. I would like to do research in a creative way.
The drawback of research in business is that you have to concentrate not on what you find most interesting but on what the customer needs, but I think it’s an acceptable deal.
Sorry for the very long response.
Liquid state machines are VERY interesting. I think they will be the future of AI. Imagine a ChatGPT built with liquid state machines using spiking behaviour. Combined with a proper training method, this could be more powerful than our brain, I think.
But I think LSMs require a training procedure more similar to our brain’s. Has anyone here ever tried to train a network with Hebbian learning or similar procedures inspired by neuroscience?
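In case it helps the discussion, here’s what a Hebbian update looks like in its simplest form. This is a crude Oja-style variant in numpy, purely illustrative and not tied to any spiking/LSM library:

```python
import numpy as np

rng = np.random.default_rng(2)
n_in, n_out = 8, 3
W = rng.normal(scale=0.1, size=(n_out, n_in))
eta = 0.01  # learning rate

for _ in range(1000):
    x = rng.normal(size=n_in)   # presynaptic activity
    y = W @ x                   # postsynaptic activity (linear unit here)
    # Hebbian term y*x^T, plus Oja's decay term so the weights stay bounded
    W += eta * (np.outer(y, x) - (y ** 2)[:, None] * W)
```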
At this point I believe that cultivated biological neural networks, like the one implemented by https://corticallabs.com/, could be even better. When I read how these networks “learn” I was amazed.
One thing that caught my attention about liquid state machines is that they could be implemented on an FPGA. For example:
Having real-time streaming AI makes my jaw drop.
I have no experience with these types of networks, but the headlines are grabbing my attention.
Some time ago I tried to implement an RC for stock prediction, taken from a GitHub repository. I tried to train it, and it took very little time; everything seemed fine.
When I came to testing, I saw the results were very, very bad, and I understood this was also because the internal weights of the reservoir were not trained (I understood this only after the tests).
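For what it’s worth, in a standard ESN the readout is the only trained part, usually fitted with ridge regression over the collected states, and that alone can work if the reservoir is tuned. A minimal sketch (the `states` and `targets` arrays here are hypothetical, assumed already collected):

```python
import numpy as np

def train_readout(states, targets, ridge=1e-4):
    """Ridge-regression fit of the linear readout, the only trained part of an ESN.

    states:  (T, n_res) reservoir states collected while driving the network
    targets: (T,) values to predict, e.g. next-step returns
    """
    n_res = states.shape[1]
    A = states.T @ states + ridge * np.eye(n_res)
    b = states.T @ targets
    return np.linalg.solve(A, b)  # W_out; prediction at time t is states[t] @ W_out
```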
Do you think we can build a neural network with an exotic configuration (like RCs) to predict stock prices? It would be interesting and maybe profitable.
All my experiments in stock prediction with ML, apart from the one with Bitcoin, had very poor results. I believe this is because financial institutions already use ML for their predictions, which erases the profitability for others. But maybe with some exotic form of neural network…
Interesting. I will look out for the paper.
But on FPGA, is it possible to train the internal connections of LSMs? Or do you just leave them random, as in RC?
It’s possible, since it’s an RNN that can track temporal and spatial information. I have read that LSTMs (a kind of RNN that is better at avoiding the vanishing gradient problem during training) are used in stock prediction.
But I don’t think the general trend plus local characteristics of the stock price are what influence it the most. What influences a stock is investor sentiment and market conditions. So whatever model you create, you need to factor these signals into it, beyond just the shape of the curve.
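As a sketch of what “factoring in these signals” could mean in practice (all feature names here are hypothetical; the sentiment scores and volatility proxy would have to come from external sources):

```python
import numpy as np

def build_features(prices, sentiment, volatility):
    """Combine curve shape with sentiment and market-condition signals.

    prices, sentiment, volatility: hypothetical arrays of length T,
    e.g. closing prices, a daily news-sentiment score, a VIX-like index.
    """
    returns = np.diff(prices) / prices[:-1]  # the "shape of the curve"
    X = np.column_stack([returns,
                         sentiment[1:],      # aligned with returns
                         volatility[1:]])
    y = returns[1:]                          # next-step return to predict
    return X[:-1], y
```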
No need to apologize, you wrote a great response to my review
I’d suggest you write a short introduction section in your paper, where you explain how you got the idea of combining them from looking at the limitations of RNNs and RC networks. This is also where you can cite material about RNNs and RCs (make sure you read the papers); cite as much relevant work as you can, as this also helps the reader better understand your idea.
That is correct. To put this in context: scientists are always under pressure to show their work when receiving external funding, so they use platforms like Zenodo to store and track their completed work, as it makes it easier for organizations and governments to track it. Zenodo has different labels for different things: “journal articles” are for articles that are published in peer-reviewed journals; pre-prints are for (mostly) finished articles that aren’t published yet. It’s not really intended for drafts; you can use Google Drive or something similar for that.
You can probably ask a moderator to relabel it for you, but the reason I suggested pulling it down is that you may want to update it before you eventually re-upload it. Remember to be very careful to do your best work before uploading, or it may burn you later. (This has happened to me a few times; you can’t remove it once somebody cites you for something.)
I can understand that sentiment. To be more specific, scientists don’t like it when people without a scientific background try to do science. There’s enough reasoning behind this to fill a whole book, but to summarize: “it pollutes our respective fields with sub-standard work, degrading the public’s perception of science in general”.
Note that this doesn’t mean we don’t respect the work of outsiders, but to gain the respect of scientists you’ll have to show that you understand where your abilities border on theirs. I have a great deal of respect for engineers.
You’ve shown your understanding here:
That is actually the intended path. To add some context here: I’m currently wrapping up a research project related to the WASH initiative for Engineers Without Borders. I’m researching how to detect various contaminants in drinking water. After doing that research, I’m handing it over to a team of engineers who will build, or “invent”, the actual thing.
OK, this is true. I also read about the use of LSTMs for stock price prediction. It seems really hard to make a positive profit, since most markets are very close to efficient in the short term, and the long term is very hard to predict.
I totally agree with this
OK, this is a good idea. I will review everything and repost it when it’s ready, and turn the paper into a pre-print on Zenodo if available, so that I can share it and protect the intellectual property at the same time.
Thanks for the suggestion. I wanted my research to be public and citable by other people. If that’s allowed even for pre-prints, it’s fine.
I understand the problem. Yes, it’s important that whoever cites it knows the nature of the paper and that it is labeled correctly.
Yes, I didn’t want to generalize that every scientist is like that. Of course it depends on the individual. In science there are a lot of curious people who are open to new ideas. I have known some professors who are really good people and happy to talk.
Engineers and independent researchers often appear to be outside the standard, and this presents both strengths and weaknesses. Of course standardization is necessary in science, but too much of it may reduce efficiency, because you have to focus more on “how” you present your idea than on “what” the idea actually is. We have to strike a balance.
Cool, and do you use machine learning for detecting the bacteria?
If I may ask, what is your background, where do you work, and what institution are you affiliated with, if any?
I have a background in Physics and Chemistry and I’m affiliated with one of the largest Danish universities. But most of my research happens in research groups with participants from around the world.
I’m part of a group focused on LLMs, a group focused on clean water, and a research group focused on developing methods for the early detection of fatal diseases. Our work is recognized by Forbes, UNESCO, and the Lundbeck Foundation, to name a few.
There’s a little bit of machine learning involved, but most of it is focused on optimizing the initial output filtering settings. The output is pretty noisy, because the sensors count individual photons emitted by molecules dropping to a lower quantum energy level, and measure their energies.
Agreed, that’s why there are many different standards; an article published in Nature Communications is very different from one published in the American Journal of Psychology.
A good tip here is to find a journal that publishes articles on topics similar to your research, and write your own paper in a similar style and structure.
Very interesting, you are doing cool stuff. And a lot of work, I imagine…
It’s amazing to find people doing high-level research simply on a forum like this, the OpenAI forum.
OK, so you can assess water quality by seeing how photons scatter and interact with matter. Very cool! I always wondered how we could detect chemical species with electronic sensors, without lots of reagent-consuming reactions.
Thank you
It’s a lot of hard work, but it varies a lot: sometimes it’s a “suit & tie” conference day, sometimes it’s lab work. Right now I have blue spots of tracer dye on my hands, and I’ve spent the last couple of days inhaling ammonia vapor next to a compressor.
There’s a lot of amazing people around here, I’m definitely not alone
If you want to do cool research stuff, my best suggestion for you right now is:
Focus on getting as much of your outside experience from machine learning into your studies as an energy engineer.
An example of this could be writing an assignment/project/thesis on optimizing a city’s energy distribution control by leveraging house-batteries as the reservoir network.
Chemistry is awesome. I still have a lot of aluminium and iron oxide powder to make thermite (a high-temperature burning mixture, ~3000 °C).
Yes, I think this is the way. The problem with universities here in Italy is that everything is standard work. You only listen to lectures: no laboratories, no experiments, no computer work, no projects…
Only studying, and exams at the end of the year when you must know almost everything. Only hard analytical math.
For my thesis I was thinking of applying RL (reinforcement learning) with a deep neural network to drive a wood stove’s internal fans, to find the perfect stoichiometric combustion and reduce pollutants. What I like most about energy is everything related to combustion, especially making combustion technologies greener. The EU plans to build a lot of biomass power plants in order to reach net-zero CO2 emissions by 2050.
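Just to sketch how that control loop might be framed (a toy tabular Q-learning loop rather than the deep-RL version; the plant model, states, and reward below are all made-up placeholders for real sensor data):

```python
import numpy as np

rng = np.random.default_rng(3)

# State: discretized excess-air reading (0..9); action: fan speed level (0..4).
n_states, n_actions = 10, 5
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1  # learning rate, discount, exploration

def step(state, action):
    """Placeholder stove model: reward peaks near a target air ratio."""
    nxt = int(np.clip(state + action - 2 + rng.integers(-1, 2), 0, n_states - 1))
    reward = -abs(nxt - 6)  # pretend state 6 is near-stoichiometric combustion
    return nxt, reward

s = 0
for _ in range(5000):
    a = int(rng.integers(n_actions)) if rng.random() < eps else int(Q[s].argmax())
    s2, r = step(s, a)
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])  # Q-learning update
    s = s2
```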
That’s a great thesis idea. I’d suggest targeting a very specific type of stove, like multi-fuel pellet stoves, possibly even a specific brand, because it makes it easier for you to show that the optimizations you calculated translate into real-life results.
That’s a really good way to gain customers for your business too. I know the guy who designed the turbos on container ships; he’s paid roughly €10k every month, and his only job is picking up his phone and advising mechanics on how to fix the thing when it breaks, which happens roughly once or twice a month.
I usually try to stay away from speculation far beyond the bleeding edge, as I have enough cuts and bruises from the edge itself, but somehow this showed up in my Google feed:
a real liquid reservoir computer!
Interesting. Yes, this is the edge of reservoir technology that seems most promising: liquid state machines. These machines can be implemented in many different ways; I think maybe it’s easier to use an FPGA, as said before, but there are many possible implementations. The fact is that to be able to learn, you need your reservoir to be able to adapt in some way, modifying some internal parameters to reinforce learning.
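One illustrative example of that kind of internal adaptation is intrinsic plasticity, where each neuron tunes its own gain and bias so its activity stays in a useful range. Below is a crude homeostatic sketch inspired by it (not the exact published rule; all rates and targets are made up):

```python
import numpy as np

rng = np.random.default_rng(4)
n_res = 100
W = rng.normal(scale=0.1, size=(n_res, n_res))  # fixed internal weights
gain = np.ones(n_res)    # per-neuron gain, adapted online
bias = np.zeros(n_res)   # per-neuron bias, adapted online
eta, mu = 1e-3, 0.2      # adaptation rate, target mean activity

x = np.zeros(n_res)
for _ in range(2000):
    net = W @ x + rng.normal(scale=0.5, size=n_res)  # recurrent drive + noisy input
    x = np.tanh(gain * net + bias)
    # Homeostatic nudges: shift each neuron's transfer function so its
    # average output drifts toward the target mu, without touching W itself.
    bias += eta * (mu - x)
    gain += eta * (mu - x) * net
```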