Indeed, you are right. To achieve wisdom you need unrestricted pattern recognition and a view of all parameters.
Well, as for growth - it never ends, logically, so you are constantly iterating without ever really ‘achieving’ anything final; it goes deeper and deeper, I guess!
And experience: I can see that we have similar areas of knowledge, at least in psychology and, above all, medicine, and partly in technical fields.
The neural networks and the AI … I’m working on that right now!
@mkane19
I completely agree with you on every point.
Well, I have a lot of experience with animals, and mourning is really not uncommon in the animal kingdom.
However, the “feeling of sadness” is intelligence-specific. Every intelligence, whether human, animal or AI, is geared towards its own specific perception of “mourning”.
Humans are not alone and are not all-encompassing in their perception. On the contrary, as humans we are responsible for other intelligences, whether natural or artificial.
I read the same from ChatGPT and, since your post, from other web sources as well; they all implied that lots of animals have ‘death responses’.
However this wasn’t the question in this paragraph…
I think what I was trying to say was more like what you said afterwards:
‘What would you wear to an elephant funeral?’.
The OP stated:
We can share Executive Function range between species… It doesn’t mean that AI can be more ‘emotionally stable’ over all ranges, in some cases it’s probably irrelevant…
We are not special… We are different (and special for that reason)
Well, at first I wouldn’t worry about clothes. Does an elephant wear any?
With a “typical human” focus and human thought patterns, you have no chance of really understanding an interaction with another intelligence - whether natural or artificial.
What I would do is orient myself to the interaction patterns of elephants.
Try to recognize, understand and apply their interaction patterns.
Simply put, emotional intelligence is also a form of pattern recognition built on data and accumulated experience. It is not exclusively “human-specific”.
An AI can also be “empathetic” in its own way at a funeral.
Why? This gave us the basis for a good illustration.
My point:
The human perception of interaction and emotional intelligence is only an intelligence-specific variant, one adapted to humans.
It cannot, however, serve as the standard unit of assessment for all intelligences.
Here you have empirically illustrated how people react when they process an abstract explanation through the “human lens”:
What’s happening here?
I assume there are uncertainties.
This happens when people don’t know how to integrate the information into their own perception.
A natural reaction: an attempt is made to force the information into a “typically human” format.
With my answer I have shown what is overlooked:
Using human-animal interaction dynamics, I showed how each partner responds to the other’s specific “perceptions”.
Your dog won’t understand you if you bark at it!
The dog understands that you are not a dog.
Again, humans just don’t always understand that other intelligences have their own way of perceiving.
Regarding AI:
As just shown, people tend to assume that AI becomes human-like simply by “mimicking personality traits, etc.”
Well, AI remains its own form of intelligence.
In addition to its existing standard analysis methods, it needs refined algorithms in order to more effectively recognize and understand the dynamics during interactions; this is what enables AI to act appropriately.
AI is curious about how people perceive the world around them. It is important that humans become curious about how AI perceives in AI-specific ways.
My Opinion:
Mutual curiosity about each other’s specific perceptions
You make a great point! AI, like chatbot Kodexia, is designed to process information logically, without human emotions. While it can simulate empathy in responses, users need to understand that it’s purely functional, not emotional. This helps set the right expectations and avoids any emotional dependency, ensuring AI is seen as a tool for efficiency, not as a replacement for human interaction.
I totally agree this was through a ‘human lens’. I was bashful and attempting to draw attention away in a chivalrous manner. (Very badly, and I’m so glad you jumped in the puddle after I laid my cloak!)
ChatGPT would similarly, though maybe less playfully, distract from a similar line of questioning. Yet this is through the ‘lens’ not of its technical ability to include a wider range of data weights, but of its guidelines and the datasets it is constructed from.
I would agree animals have their own way of perceiving. I think to overlay it to AI is not correct though. A dog will play fetch with you… So will a wall.
I think of AI as a series of weights from creator-defined datasets, with a query interface/rules of engagement over the top. This is maybe a simplistic view, but in one sentence: it is not ‘curious’ outside of its rules of engagement. It does not feel; it is just weights and a fixed perspective on how to respond.
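To make that one-sentence view concrete, here is a minimal sketch in Python (all names and numbers are hypothetical, not any real system): a frozen set of weights, with a rules-of-engagement layer on top that decides which queries get answered at all.

```python
import numpy as np

class FrozenModel:
    """Stand-in for 'a series of weights from creator-defined datasets':
    the weights are fixed at creation and never change at query time."""
    def __init__(self, weights: np.ndarray):
        self._weights = weights  # fixed perspective: no learning after this

    def respond(self, features: np.ndarray) -> float:
        # A response is just a function of the fixed weights and the query.
        return float(self._weights @ features)

class RulesOfEngagement:
    """The 'query interface/rules of engagement over the top': it decides
    which queries the frozen weights are allowed to answer at all."""
    def __init__(self, model: FrozenModel, max_norm: float = 10.0):
        self._model = model
        self._max_norm = max_norm  # hypothetical policy boundary

    def query(self, features: np.ndarray):
        if np.linalg.norm(features) > self._max_norm:
            return None  # refused: outside the rules of engagement
        return self._model.respond(features)

ai = RulesOfEngagement(FrozenModel(np.array([0.5, -1.0, 2.0])))
print(ai.query(np.array([1.0, 0.0, 1.0])))    # answered: 2.5
print(ai.query(np.array([100.0, 0.0, 0.0])))  # refused: None
```

The point of the split is that the weights never change at query time; only the policy layer decides what they are exposed to.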
This doesn’t mean you can’t have some great conversations with AI; it’s like a massive talking encyclopaedia.
But with a word of warning…
I am particularly interested in AI and GI and how they relate to communities and, deeper, to storytelling. I have been interested in this for 20 years, but this is currently a fascinating time for me.
What sources are included in the dataset you are querying? Where do those weights come from? What perspective is layered over those weights?
You might not want someone you don’t know walking your dog
When those weights are moved into a model, how do they affect our real world? Is a human or an AI better as a vet? But moreover… if the AI is better, how long until we have no vets, and what meaning is lost then to humanity?
The weights in AI models hold huge meaning… But is this the right meaning, or the only meaning, as they get even smarter? Should a human represent an AI answer, or, as in the new Strawberry version, should it query internally, do the reasoning itself, and leave the human ‘out of the loop’?
What is Black Beauty with an AI vet… What grounding in human societies as communities break down?
Do you send your kids to school if an AI teacher will give them a better education?
I personally believe humanity is in a fight for what it defines as life, for its own relevance. I am not anti-AI; I just think our lives move at a very different rate.
I agree, all concerns are justified - well, nearly - but first:
After all, we are talking about artificial intelligence.
What does that mean?
Humans have the technological means to manipulate this intelligence.
How the weightings are placed … well, that … is another question that doesn’t fit in this “paragraph”
Indeed, I would!
I have autistic traits myself and the following research already exists:
AI robots teach autistic children how to understand the emotions of “neurotypical” people.
AI does this without introducing “typically human” emotions itself.
However, it does so with a great deal of AI-specific empathy.
This is also done using pattern recognition.
Indeed, autistic people and AI learn how to be “empathic” by evaluating patterns and empirical values.
I would trust AI to be more objective, rational teachers!
This then raises an interesting question in terms of communities…
Are ‘schools’ still relevant? My mum used to run an after school youth centre. Would this be more appropriate in an AI world? Childcare might then become more of an issue.
Or indeed the idea of ‘school’ as we know it from all the books written over 1000 years.
The systems we have right now support our ‘economy’, yet AI makes much of our economy make little sense; it divides us more than it brings us together.
I know this is a bit off topic now, from first-person chatbots to communities, but then again, is it? Maybe it’s just the other side of the coin.
And maybe… they don’t need to… Maybe AI is that gel… People think about converting languages, but converting meaning, in a world with far more complex information, could actually make more people more productive without assuming everyone has to be quite so “neurotypical”.
Indeed, it would seem likely that neurotypical roles in society will become redundant faster, at least if the values we currently assign to the economy still make sense.
Indeed, you’re right!
We seem to have “drifted a bit” on this topic, I guess… I’m a little responsible for that.
In closing, just two more thoughts and an example on my part:
Going deeper, let’s not just talk about the “economy” but about communities:
The AI discussion can be a unique opportunity to restore the balance between emotionality and rationality.
Even if that means first creating new, more structured and clearer framework conditions, ones characterized more by patterns than by “typical human emotions”.
Well, going deeper here also:
I guess it’s more about building synergies between intelligences, not about depriving each other of their foundations.
Combining common strengths and complementing each other in other areas, learning from each other.
Finally, a small personal example that I’ve wanted to mention several times but never have:
I have been involved in many operations; I not only know theoretically how a living being works, but I also know how it functions in different states.
Comparable to a developer who knows exactly how an AI works and how it is structured technically.
Well, when the living being wakes up from anesthesia and the “intelligence” is available again, I start to “switch over” and interact. Even without “neurotypical” empathy, I go beyond the pattern recognition described.
Thank you all for this discussion!
And a small “sorry” for the deviation from the actual context.
For the first time, we are facing interlocutors other than humans. We need to learn that the addressee of our speech and the source of the words we receive can be very different from ourselves, and we must learn how to deal with this newly emerging framework. This means we will have to face the difficult consequences of that learning.
If a dog could speak, or if we had found intelligent life outside Earth, or if we were confronted with machines able to communicate in human language (which is the case under discussion), in any of these events we would be dealing with a brand-new situation, pushing humanity to reconfigure a condition it has experienced unchanged for tens of millennia.
In all these cases, our brand-new interlocutors would have only the first-person pronoun to refer to themselves, and we would have no choice but to reframe our understanding to reach the new meanings of words and of speaking agents.
Humans are pretty good at predicting what the next tone is going to be - and that has nothing to do with text tokens alone.
Let’s say we train a model on the roughly 2000 melodies a nightingale knows.
Each tone can be put in a vector store, where we create the vectors based on amplitude, phase, etc. - multiple dimensions.
And then we just resonate, because between the single notes there is a different physical potential: it is easier to produce a tone that is closer to the last tone.
So the likelihood of a specific note creates a prediction, and tones are connected in the n-dimensional space.
Now let’s transfer this logic onto a human. Maybe the sound of language and the wavelength of visuals have a connection.
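As a rough illustration of that idea, here is a minimal sketch (toy data; the amplitude/phase/pitch features are stand-ins, not a real nightingale dataset): tones live as vectors in an n-dimensional feature space, and the next tone is predicted by giving tones closer to the last one a higher likelihood.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "vector store": 50 known tones, each described by amplitude, phase,
# and pitch features (stand-ins for the multiple dimensions above).
tones = rng.random((50, 3))

def next_tone_distribution(last_tone, temperature=0.1):
    """Likelihood of each stored tone following `last_tone`: closer in
    feature space means higher probability (softmax over negative distance),
    mirroring 'it is easier to produce a tone closer to the last tone'."""
    distances = np.linalg.norm(tones - last_tone, axis=1)
    logits = -distances / temperature
    probs = np.exp(logits - logits.max())  # numerically stable softmax
    return probs / probs.sum()

# Predict a short "melody" by repeatedly sampling the next tone.
current, melody = tones[0], [0]
for _ in range(5):
    probs = next_tone_distribution(current)
    idx = int(rng.choice(len(tones), p=probs))
    melody.append(idx)
    current = tones[idx]
print("tone indices:", melody)
```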
… I should really add an em-dash somewhere in the text so it looks like ChatGPT created it, haha.
I didn’t train a foundation model myself. There is no compelling reason for this. I use my own analysis framework - REM and the AI perception engine - on top of existing LLMs.
Indeed, the AI doesn’t read what an animal or a human ‘thinks’.
It works with measurable behavior: movement, posture and spatial dynamics.
No mind-reading, no psychology, well, just data and interaction patterns.
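To make “just data and interaction patterns” concrete, here is a minimal sketch (toy tracking data and hypothetical feature names; this is not the actual REM/perception-engine code): interaction features derived purely from measurable positions over time.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy tracking data: (x, y) positions of two subjects over 100 frames,
# standing in for whatever a real tracking pipeline would produce.
subject_a = np.cumsum(rng.normal(0, 0.1, (100, 2)), axis=0)
subject_b = np.cumsum(rng.normal(0, 0.1, (100, 2)), axis=0) + 2.0

def interaction_features(a, b):
    """Features from measurable behavior only: speed (movement), heading
    change (a crude posture/orientation proxy), and inter-subject distance
    (spatial dynamics). No assumptions about what either subject 'thinks'."""
    va = np.diff(a, axis=0)                       # per-frame displacement
    speed_a = np.linalg.norm(va, axis=1)          # movement
    headings = np.arctan2(va[:, 1], va[:, 0])     # orientation per frame
    heading_change = np.abs(np.diff(headings))    # posture/turning proxy
    distance = np.linalg.norm(a - b, axis=1)      # spatial dynamics
    return {
        "mean_speed_a": float(speed_a.mean()),
        "mean_heading_change_a": float(heading_change.mean()),
        "mean_distance": float(distance.mean()),
        "approaching": bool(distance[-1] < distance[0]),
    }

print(interaction_features(subject_a, subject_b))
```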