I’m always happy to see people interested in science. Doing extra studies outside the curriculum always gets a big gold star from me, and I’m sure you have a bright future ahead of you.
Reservoir Computing (RC) is a framework for training Recurrent Neural Networks (RNNs) that was developed to overcome some of the difficulties associated with training traditional RNNs. In a traditional RNN, all weights, including those for connections between hidden nodes (internal weights), are adjusted during training.
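To make the contrast concrete: in classic RC only a linear readout is trained, while the reservoir’s internal weights stay fixed and random. Below is a minimal echo-state-style sketch of that idea in Python, assuming NumPy; the sizes, scaling, and the sine-prediction toy task are illustrative choices of mine, not anything taken from your paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_reservoir = 1, 100  # illustrative sizes

# Fixed, random input and reservoir weights: these are NOT trained in classic RC.
W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_inputs))
W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # keep the spectral radius below 1

def run_reservoir(inputs):
    """Collect reservoir states for an input sequence of shape (T, n_inputs)."""
    x = np.zeros(n_reservoir)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ u + W @ x)  # fixed recurrent dynamics
        states.append(x.copy())
    return np.array(states)

# Toy task: predict a slightly shifted sine wave.
t = np.linspace(0, 20 * np.pi, 2000)
inputs = np.sin(t)[:, None]
targets = np.sin(t + 0.1)[:, None]

X = run_reservoir(inputs)

# The ONLY trained parameters: a linear readout, fit in closed form by ridge regression.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_reservoir), X.T @ targets)
print("train MSE:", np.mean((X @ W_out - targets) ** 2))
```

The readout fit is a single closed-form solve; avoiding backpropagation through time is the whole appeal of RC compared with training a traditional RNN.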
As I understand it, the features of your proposed network that you present as new are:
- All internal weights of the reservoir are trained rather than left random.
- Control nodes gate the flow of information through the network and make decisions about the reasoning process, such as accepting or rejecting an input, or looping the state update several times until an output is accepted (see the sketch right after this list).
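To be clear about how I read that second point, here is a purely hypothetical Python sketch of control-style gating around a reservoir update. None of the names, thresholds, or sizes come from your paper; they are just my guesses at what “accepting the input” and “looping until an output is accepted” could mean mechanically.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_res = 1, 50  # illustrative sizes

# In your proposal these would be trained; here they are random placeholders.
W_in = rng.normal(0, 0.3, (n_res, n_in))
W = rng.normal(0, 0.1, (n_res, n_res))

# Hypothetical "control" readouts: one gates the input, the other decides
# whether the current state is ready to be emitted as an output.
w_accept = rng.normal(0, 0.1, n_res)
w_halt = rng.normal(0, 0.1, n_res)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def step(x, u, max_loops=10):
    """One reasoning step: optionally ingest the input, then loop the state
    update until the halt gate accepts the state (or a loop cap is hit)."""
    accept_input = sigmoid(w_accept @ x) >= 0.5
    drive = W_in @ u if accept_input else np.zeros(n_res)
    for _ in range(max_loops):
        x = np.tanh(drive + W @ x)
        if sigmoid(w_halt @ x) >= 0.5:  # control node accepts the output
            break
    return x

x = np.zeros(n_res)
for u in np.sin(np.linspace(0, np.pi, 20))[:, None]:
    x = step(x, u)
print("final state norm:", np.linalg.norm(x))
```

If that is roughly what you mean, it is exactly the kind of mechanism that should be compared explicitly against the prior work on gating and self-control mentioned below.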
These features seem very similar to ideas that have been explored in previous research. It’s quite possible that you were unaware of this prior work at the time of writing.
The concept of training internal weights in RC is not entirely new. In traditional RC, only the output weights are trained, and the internal weights are typically fixed and randomly initialized. Here are a few studies that have explored training the internal weights in RC.
The paper titled “An approach to reservoir computing design and training” discusses reservoir designs in which the reservoir dynamics are not determined solely by fixed weights. Another paper, “Reservoir computing approaches to recurrent neural network training”, also mentions training both the reservoir-internal and the output weights.
The concept of control nodes in neural networks is also not entirely new. The paper “Recognition with self-control in neural networks” discusses different kinds of self-control mechanisms for recognition in neural networks.
With that in mind, let’s go over some of the key points of my review of your paper:
Originality: The concept of a Self-Control Reservoir Network doesn’t seem entirely new. The paper would benefit from pointing out the key differences from prior work instead of just stating that the concept is new.
Justification: The paper justifies the need for such a network by comparing it with the functioning of the human brain and with existing neural networks. It would benefit from a more detailed explanation of why these features are necessary, why they make the network’s functioning more similar to the human brain, and how they improve upon existing models, backed by empirical evidence to support those claims.
Evidence: The paper states that what is written has not yet been proven and would need more studies to be confirmed. There is no empirical evidence or experimental results to support the proposed concept. What methods will you use to test the claims? How will you quantify the improvements? Are the claims falsifiable?
Practical Implications: The paper suggests that the proposed network can perform any calculation and can be more efficient than traditional RNNs, but it does not provide any practical examples or applications to demonstrate this.
Conclusion: The paper concludes by stating the potential of the proposed network. That is discussion material; a conclusion is for summarizing the key points and suggesting future research directions.
Overall, the paper lacks empirical evidence to support its claims. Your future research should focus on testing the proposed network, providing empirical evidence of its effectiveness, and properly citing relevant work.
That last point is important. It’s not just about copied text, but also about using someone else’s ideas or distinctive methodology without acknowledgment, even if the words are changed. It’s generally considered good academic practice to cite previous works that have discussed or used similar features. This not only gives credit where it’s due but also situates the new work within the context of the existing literature; it substantiates your claims and helps the reader find resources to learn more.
Remember that science is the systematic pursuit of understanding and explaining the world and its phenomena through observation and experimentation, whereas engineering is the application of scientific knowledge to design and build practical solutions to problems.
Where to go from here:
Your paper is marked as a journal article, but it is not one; it is a pre-print. I assume this is an honest mistake, but it may be viewed as attempted academic dishonesty, since it suggests that the paper has been peer-reviewed. This is especially true because you uploaded it to CERN’s archive servers, which may get you blacklisted from publishing in an actual journal in the future. This could seriously hurt your career.
My honest advice is to pull the paper down, fix the issues, and find a “mentor”, i.e. someone within the scientific community with a background in machine learning or a related field who can help you determine when your paper is ready for publishing. I’m not saying this to discourage you; remember that this is 1% idea and 99% work. Keep at it and I’m sure you’ll make great progress.