My original idea was to arrange neurons into logic modeling structures.
By letting neurons model a growing number of logical functions together, through the flow of data and the logical nature of their arrangement, I took a first step toward achieving AI by creating a reasonable NN.
It took more than just a reasonable NN to model intelligence. Further logic was modeled by many more structures, derived by logically deducing which additions to the existing structure would rationally extend its logic.
These logical designs snowballed into a 14-page sketch of what culminates in a self-referencing intelligence.
I am not a very experienced developer, and the sketches pose such a challenging task that I have struggled to gain traction on implementing them.
I have also struggled to attract attention to my idea, which is frustrating given its potential efficacy.
I hope to draw attention to the idea and hopefully see it implemented.
I can’t share all the sketches or a link to them in this post, so feel free to email me if you would like to see them.
If you break your architecture down into specific components or blocks, GPT-4 can help you write the code for each segment.
It is important that the architecture has solid mathematical foundations before implementation. Creating a neural network architecture, especially one that aims for true intelligence, involves numerous challenges. Ensuring that the network converges during training, optimizing for computational efficiency, and managing potential overfitting are just a few of the theoretical hurdles.
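To make the overfitting hurdle above concrete, here is a minimal sketch (all loss values hypothetical) of the standard diagnostic: compare training loss against validation loss, and flag trouble when validation loss keeps rising while training loss keeps falling.

```python
def detect_overfitting(train_losses, val_losses, patience=3):
    """Flag overfitting when validation loss rises for `patience`
    consecutive epochs while training loss keeps falling."""
    rising = 0
    for i in range(1, len(val_losses)):
        if val_losses[i] > val_losses[i - 1] and train_losses[i] < train_losses[i - 1]:
            rising += 1
            if rising >= patience:
                return True
        else:
            rising = 0
    return False

# Hypothetical loss curves: training loss falls, validation loss turns upward.
train = [1.0, 0.8, 0.6, 0.5, 0.4, 0.3]
val = [1.1, 0.9, 0.8, 0.85, 0.9, 0.95]
print(detect_overfitting(train, val))  # True: the divergence signals overfitting
```

In practice this is what early-stopping callbacks in training frameworks do; the point is that "sits in a sweet spot" is an empirical claim one can check with curves like these.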
Thank you for your feedback. I’m not really a mathematician, and I wouldn’t say it converges during training. I would argue that the principles used to derive my NN’s structure are based in logic, and that it doesn’t over- or under-fit but rather sits in a sweet spot, generating logic from the logical nature of the structures. I am now looking for some sort of corroboration of my idea so I can move forward.
Navigating collaborations can be tough without concrete evidence that an idea holds enough promise to warrant the time and effort. Take soccer, for instance: without a deep understanding of tactical formations or the ability to assess player capabilities, one might mistakenly believe they’ve devised the perfect game strategy or player lineup.
In the same vein, without a firm grasp on concepts like convergence, parameterization, and performance evaluation, someone could be convinced of the brilliance of their idea, only to later encounter unforeseen challenges.
I’m not suggesting you’re that person. Without reviewing your work, I can only speculate. What I mean to say is that seasoned professionals typically engage when they’re confident of the endeavor’s value.
My advice? Document your idea thoroughly and systematically, then share it on GitHub. And consider developing basic prototypes to demonstrate its viability.
What is a “logic modelling structure” and what is a “reasonable NN”?
If you can describe your elements from basic building blocks then you have something that could potentially be built.
The idea may work perfectly in your head, but if you cannot describe that idea to another or build it yourself, then it will remain there.
I’m not quite sure how the “if” operation works here. What’s the underlying mathematical operation that represents “if” in this scenario?
The neurons exhibit it logically through their structure. There is no significant logical advantage from those three neurons alone, but rather from the two functions combined. I meant “if” in the general sense of the word logic: through the flow of data, the structure creates a logical split that models an “if”. I’m not sure how to describe that mathematically, but I can describe it as a structure with logic inherent in it.
A neuron won’t execute an “if” function unless it’s mathematically programmed to do so. Moreover, introducing conditionals can make training unstable, often complicating convergence. I recommend you tackle this foundational aspect before moving forward.
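For reference, the usual way networks approximate an “if” without breaking differentiability is a soft gate: a sigmoid-weighted blend of two branches rather than a hard conditional. A minimal sketch (the function names here are illustrative, not from any framework):

```python
import math

def sigmoid(x):
    # Standard logistic function: maps any real number into (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def soft_if(condition_logit, then_value, else_value):
    """Differentiable 'if': blend two branches with a sigmoid gate.
    A large positive logit selects then_value, a large negative
    logit selects else_value, and values near zero mix the two."""
    g = sigmoid(condition_logit)
    return g * then_value + (1.0 - g) * else_value

print(soft_if(10.0, 1.0, 0.0))   # close to 1.0: gate open, "then" branch
print(soft_if(-10.0, 1.0, 0.0))  # close to 0.0: gate shut, "else" branch
```

A hard `if` has zero gradient almost everywhere, which is why the comment above notes that raw conditionals complicate training; the gated form keeps the split trainable.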
It seems you’re misunderstanding what I’m trying to describe. A single neuron isn’t intended to execute an “if” function (like an IF operator) or any other mathematical function. The “if” I’m referring to is the logical and structural split among the top three neurons. I don’t mean a mathematical function; rather, the logic (in the general English sense of the word) modeled between those three neurons is what I’m calling an “if” logical function.