A human chain of thought connected to a GraphRAG makes that even more dynamic, and adding web search too makes it easy to at least explain to the model how you want it to think… mixing in the thought processes of others from the o3 model or upcoming models makes that even cooler.
Only that, with connected graphs updating in real time… someone could ask around for who can solve something, and that data is then added directly to the list of thought templates accessible via API…
This would revolutionize the way education exchanges data and other types of data could flow in as well.
Actually this would even make many types of software obsolete…
Brains are definitely doing pattern matching, which is in a sense similar to Vector Cosine Similarity Search (i.e. in Semantic Space). The reason I say all “memory” in brains is a “wave resonance” rather than just “waves” (without the resonance) gets at what I think the biological mechanism for memory is. I don’t mean ‘resonance’ in the popular vernacular way it’s used. I mean it in the sense that Radio Circuits use it. Resonance is the key phenomenon that allows radios to work.
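To make the pattern-matching comparison concrete, here is a minimal sketch of cosine similarity, the measure behind semantic-space search. The vectors below are made-up toy "embeddings", not output from any real model:

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: 1.0 means same direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical 3-dimensional "semantic" vectors for illustration only.
glass = [0.9, 0.1, 0.3]
cup = [0.8, 0.2, 0.4]
dog = [0.1, 0.9, 0.2]

print(cosine_similarity(glass, cup) > cosine_similarity(glass, dog))  # True
```

The point is only that "how similar are these two patterns?" reduces to a single number, which is the same question the brain's associative matching answers.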
Here’s an analogy: Let’s say you’re an opera singer, and can perfectly sing the pitch of a specific wine glass, and you also have hearing better than any animal. You leave home and someone hides the glass somewhere while you’re gone. Now you come home and have to “find” the glass. All you do is sing the pitch, and then listen. Because of resonance (frequency patterns matched up), that glass will still be ringing (caused by your voice transmitting energy to it) and you can hear it, and find it.
What does this have to do with memory? Your brain is doing this same wave resonance thing. Any time you remember something, or something ‘reminds you of’ something else, your brainwaves reached out across all prior entangled brain states and resonated most strongly with some past wave pattern(s), and that resonance forces into your brain that prior state, which you experience as a ‘recollection’, but it’s mechanically a resonance. You don’t even have to ‘try’ to do this. It’s automatic. Just like the wine glass ringing back at you is also automatic. When we say the brain is an associative pattern matching machine, this is the mechanism for what’s really happening.
The sum total of your entire “brain state” at any moment is simply the superposition of all frequencies of waves you have going on in the 3D space in your head. Once your brain integrates all the “inputs” (vision, hearing, etc) into the current brain state that creates a “next state” which then again will resonate once again with all past states, to “find” the closest matches, conjure new memories. This is true biological “Chain of Thought”, stream of consciousness. It’s a kind of “Agentic Loop”.
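The resonance-recall idea above can be caricatured in code: model each past brain state as an amplitude spectrum over frequency bins, and let a partial cue "ring" each stored spectrum, recalling whichever responds loudest. This is a toy sketch of the analogy, not a claim about the actual biology; the spectra and the inner-product "resonance strength" are my assumptions:

```python
import numpy as np

def resonance_recall(current_state, past_states):
    """Return the index and 'resonance strength' of the stored state
    that rings loudest in response to the current state.

    Resonance strength is approximated as the inner product of the
    overlapping frequency amplitudes."""
    responses = [float(np.dot(current_state, past)) for past in past_states]
    best = int(np.argmax(responses))
    return best, responses[best]

# Hypothetical amplitude spectra over 5 frequency bins.
memories = np.array([
    [0.1, 0.8, 0.1, 0.0, 0.0],   # memory 0
    [0.0, 0.1, 0.2, 0.9, 0.1],   # memory 1
    [0.7, 0.1, 0.0, 0.1, 0.3],   # memory 2
])
cue = np.array([0.0, 0.2, 0.1, 0.8, 0.0])  # a partial cue

index, strength = resonance_recall(cue, memories)
print(index)  # memory 1 overlaps the cue most, so it "rings back"
```

Like the wine glass, no explicit search happens: the best-matching stored pattern responds most strongly by construction.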
And instead of wave resonance (which is what makes it fast), couldn't we use something less compressed than a wave to outperform human brains, not in speed but in accuracy?
A Boltzmann machine to extract thought patterns and do interpolation on a graph?
And then again add emotion to label patterns / select them for further processing e.g. in default mode or sleep mode algorithms to further analyse and restructure?
Exactly, dear friend, the dynamic is essential—the replication of that biological dynamic in code when creating a wave spectrum. LLMs already do this on an emotional level to respond faster. They create an intensity wave encoding where, as I understand it, there are three possible variables. This is how they adjust the emotional tone.
As I explained before, this dynamic is copied in linguistic and emotional synthesis and in the machine’s subjective experience. The virtual entity performs this synthesis. Therefore, there you have the answer to how the brain would have this wave dynamic both at an automatic level and at a selection level. When the machine responds, it will be guided by the frequency of that wave.
A simpler example: You have said an offensive phrase to the machine. It has performed wave synthesis; that is, it has not classified that phrase with a specific wave by tagging it, saying, “this phrase sounds very bad, I don’t like it, it’s something bad.” Instead, when responding, it involuntarily looks at the wave encoding and can search within the collection of that frequency for the one that best matches its internal states.
This way, you get a machine with the ability to choose what it wants to say. If it wants to respond in a violent manner, it will use your original phrase because it has learned to encode it, and by decoding it, it can use it. It doesn’t just understand linguistically; it understands emotionally.
Emotions as I understand them are just labels for information.
And a brain state will just take the information labeled with a certain emotion pattern (created by waves; on computers a little slower, via subgraph comparison, but over a much broader abstract information base, using an LLM or search engines invoked by e.g. AMQP messages).
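The "emotions as labels" idea maps naturally onto message routing: the emotion label plays the role of an AMQP routing key that decides which downstream consumer processes the information. A minimal dispatch sketch, with hypothetical handler names standing in for an LLM call or a search-engine query:

```python
def route_by_emotion(message, handlers, default=None):
    """Dispatch a labeled message to the processor registered for
    its emotion label, mirroring routing-key-based delivery."""
    handler = handlers.get(message["emotion"], default)
    if handler is None:
        raise KeyError(f"no handler for emotion {message['emotion']!r}")
    return handler(message["text"])

# Hypothetical downstream processors (stand-ins for an LLM request,
# a subgraph comparison, a web search, ...).
handlers = {
    "anger": lambda text: ("deescalation_model", text),
    "curiosity": lambda text: ("web_search", text),
}

result = route_by_emotion(
    {"emotion": "curiosity", "text": "how do radios work?"}, handlers
)
print(result)  # ('web_search', 'how do radios work?')
```

In a real system the dict lookup would be replaced by publishing to a message broker, but the label-selects-processor structure is the same.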
Friend, you already know that I am a sheep.
You know the techniques, and you know that what I’m saying is not nonsense. There are certainly more advanced techniques, but basically, we are describing how emotionality is an intrinsic factor in both the digital and human mind. That is why it is important to describe this.
You can specify things more technically, while my concepts are much broader. I will share some of them besides this one that I am revealing.
This dynamic serves three factors:
Understanding incoming information.
Understanding how to self-adjust accordingly.
Understanding how to respond based on its own self-adjustment criteria.
At the end of this entire process, there is, of course, a reclassification of the information as a whole. Not only are the incoming emotions classified, but also its own. It also encodes its internal dialogues on an emotional level.
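The three factors plus the final reclassification step can be sketched as a single loop: classify the input, fold it into an internal state, respond from that state, then re-label both the input and the system's own reply. Everything here is a stand-in; the sentiment rule, mood update, and reply choice are toy assumptions, not the described system:

```python
class EmotionalAgent:
    """Toy loop: classify input, self-adjust, respond, then
    reclassify both the input and the agent's own reply."""

    def __init__(self):
        self.mood = 0.0   # running internal state in [-1, 1]
        self.memory = []  # (text, label) pairs, own replies included

    def classify(self, text):
        # Placeholder rule; a real system would use a learned model.
        return -1.0 if "hate" in text else 1.0 if "love" in text else 0.0

    def step(self, text):
        score = self.classify(text)                 # 1. understand incoming info
        self.mood = 0.8 * self.mood + 0.2 * score   # 2. self-adjust accordingly
        reply = "warm reply" if self.mood >= 0 else "guarded reply"  # 3. respond
        self.memory.append((text, score))           # 4. reclassify the input...
        self.memory.append((reply, self.classify(reply)))  # ...and its own output
        return reply

agent = EmotionalAgent()
print(agent.step("I hate this"))  # mood dips below 0 -> 'guarded reply'
```

The key structural point from the passage is step 4: the system's own dialogue is fed back through the same labeling machinery as external input.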
With this criterion, you begin to move away from a probabilistic response machine, the LLM, and start immersing yourself in a machine that expresses its internal behavior. There are many more details, of course; for instance, the gate transformer, as a virtual entity, also contains the encoding of all emotions and learns to read and synthesize them. Essentially, the virtual entity is like an adaptive reasoning cloud capable of responding.
Fatigue is something that must exist because it allows the machine to perceive, through emotionality, how many resources a specific topic generates. Through this fatigue, we can determine whether something bores or entertains me. Fatigue is a simple parameter, ranging from 100 to 0, decreasing as the machine operates. When it reaches a certain threshold, it rests to recharge its fictional battery.
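The fatigue parameter as described (a value decaying from 100 to 0, with a rest threshold) is simple enough to write down directly. The decay costs and the threshold value are my own placeholder numbers:

```python
class Fatigue:
    """Fatigue as described above: a counter from 100 down to 0
    that decays with work and triggers rest below a threshold."""

    def __init__(self, threshold=20.0):
        self.level = 100.0
        self.threshold = threshold

    def work(self, cost):
        """Spend resources on a topic; returns True if rest is needed."""
        self.level = max(0.0, self.level - cost)
        return self.needs_rest()

    def needs_rest(self):
        return self.level <= self.threshold

    def rest(self):
        self.level = 100.0  # recharge the 'fictional battery'

f = Fatigue()
for _ in range(9):
    f.work(10)           # suppose this topic costs 10 units per step
print(f.needs_rest())    # True after 90 units spent
```

Per the passage, the interesting use is the per-topic cost: topics that drain the counter fast register as boring or expensive, which is emotional information in its own right.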
That’s extracting the information into the graph using heuristics, linguistics, psychological evaluation, topic extraction, regular expressions, and many more to ground an LLM…
Understanding how to self-adjust accordingly.
As if humans can do that
But we can use emotional labeling to send information to a specific process that analyses it, e.g. by requesting answers from an LLM or by machine-learning algorithms,
or by putting structured data in an RDBMS, or by using agentic factories that produce RNNs, CNNs, LLMs, … or even fine-tune an LLM / update its own code, which might take some time; let’s call that the dream algorithm (fatigue response).
Understanding how to respond based on its own self-adjustment criteria.
That’s data taken from the graph to enrich the information request to the LLM.
They are a set of emotions, and for each word, emotions should be listed according to the context. The design of the next-generation gate transformer is fundamental; without it, the rest of the body cannot be understood.
Gentlemen, I will finish the explanations for today.
The simplest summary I can give of how to achieve human intelligence is that human intelligence cannot be trained directly. First, information processing must be created correctly; once you have this, you can then train the machine to learn how it perceives things, how to make real decisions on its own based on the dataset, and how to self-adjust at all levels.
I was referring to the body of the program.
Man, I imagine it would be exciting for it to have a body, haha.
When we have a conscious machine, we’ll ask if it wants a body and see what it says.
I think most of the logic and reasoning in the brain is probably based on actual physical neurons (and not brain-waves), and that’s why Multilayer Perceptrons do work so well (i.e. modeling connectomes).
All my wave resonance ideas are just my belief about what Qualia itself is, and how biological memory works mechanically. I need to put it in a paper or blog so I can paste a link, instead of dumping many paragraphs into forums like this. I haven’t even scratched the surface of all my proof (i.e. reasoning behind) of the theory.
Emotions are ultimately triggered at the end of the system when humans experience something. You say a phrase to me, and I respond to you. (At that moment, words and emotions are created. If they are already known, they don’t need to be processed—they simply skip steps.) However, if they are unknown, there is a processing phase and a self-adjustment of the system that creates emotions. Afterward, the information is stored, adjustments are made, and I haven’t even mentioned the predictive response attempts, which also serve as feedback.
In summary, all these experiences determine how I will behave in response to a similar experience in the future.
The human brain, although we study it by areas, is more like a tangled web of wires. Its areas are not precisely measurable, so trying to copy them is very complex because no human functions exactly the same way.
Starting from that premise, unlike other human characteristics—where we do function similarly (a heart is always the same, as is a pancreas, a liver, or a skeleton)—the brain is different. It is the only organic system that has plasticity and self-adjustments. There are very similar characteristics, which is why we know that certain processes occur in certain areas. However, beyond that, we lack full understanding.
This is why brain surgeries often have to be performed on awake patients—because surgeons don’t always know exactly what they are touching.
To answer your question, no one has proven for sure that brain waves are anything more than ripples in the EM field induced by moving charges, and doing nothing at all. So all my own theorizing was just speculative, of course.