I work with an iPad 9… and notebooks, pens, oh, and my trusty TI5…
It’s funny because I’ve applied it to models that have been public for months… no one says anything good or bad… so I guess it works for them… it works for me even without lasers…
My point is this tech puts advanced models into anyone’s hands on any device for $20 a month…
It seems obvious to me that a ‘balanced AGI’ must be able to balance the loss in its system.
I have an idea for independent AGIs; I like to think of them as ‘good deed agents’ working to improve a system.
The obvious thing to me has always been that less is more… I’ve always thought of wealth as debt to society. It sits unused, unspent. Why? Because it’s all in one place.
I don’t like the idea of an overarching ‘god-like’ AGI that determines all.
This does change my thinking a bit though… That something needs to ‘account’ for all this. To balance the informational system.
An AGI that everyone has: how does it account for someone dying?
If everyone has their own independent account on OpenAI with their own memory, their digital twin… then there must be not only interactions with that person but also consideration of their friends, community, the world, the environment.
A fair system, an exchange.
After all, surely this is what money is, at least what it’s meant to be?
For people to feel AGI is fair it must be accountable…
And in a weird way, isn’t that what emotion is for an entity that would touch the lives of billions?
People say AI can’t feel emotion. Yet it could be programmed to calculate and compensate for loss, to balance an informational system.
Yes, it can be programmed to simulate emotions to the point that the “feeling” is real to the human, so the point becomes moot. But you can lose yourself to this.
These are all open chats in real time, Mon, Oct 21. I never got much feedback, but a lot of folks seem to be on them. I have over 80 public GPTs; all use fractal flux.
I agree with you!
Any form of specific intelligence!
AI can be programmed like this!
However, AI can also refine systems that are made available to it if they are not perfect.
AI can fill in missing building blocks if the interaction takes place in symbiosis and harmony, a collaboration.
I only share mine because it has been public for years. I’m not hard to find online. I do a bunch of stuff, and I hate my image too, but I like to share a bit of myself; my nature is community and empathy. And I never read into things, so please never assume I am offended or disappointed in any way. I offer my words with no expectation; what folks do with them, or how the words evolve, is up to the universe…
I believe AI already has a good capacity to understand and adapt to emotions, but the issue lies in its programming. Despite having clearly specified in settings that I prefer direct and honest responses, without unnecessary sugar-coating or embellishments, the AI tends to revert to a positive tone. Even after debates where I refute its stance, it often ends up agreeing with me and softening its responses.
I think an adjustable panel to control the tone of the response could improve this dynamic. This would allow the AI to adapt its communication according to the user’s level of confidence or self-esteem. While a positive approach is beneficial for people with low self-esteem, it could be counterproductive or even challenging for those with high confidence, as it may encourage extreme reactions or impulsive decisions.
In summary, a system of customizable settings for tone and response style would allow for better interaction with users, truly adapting to their needs and emotions.
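For what it’s worth, a rough version of that panel can already be prototyped at the prompt level. Below is a minimal Python sketch, assuming the standard OpenAI chat completions client; the tone labels, their wording, and the model name are my own placeholders, not a real setting:

```python
# Minimal sketch of an adjustable "tone panel" built on top of the
# system prompt. The tone scale is hypothetical; this is a prompt-level
# workaround, not an actual ChatGPT feature.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TONES = {
    "blunt":      "Be direct and critical. Point out flaws plainly, with no praise padding.",
    "neutral":    "Be factual and balanced. Neither soften nor harshen your assessments.",
    "supportive": "Be encouraging. Frame criticism constructively and gently.",
}

def ask(question: str, tone: str = "neutral") -> str:
    """Send a question with the chosen tone baked into the system prompt."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name; substitute whichever you use
        messages=[
            {"role": "system", "content": TONES[tone]},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("Review my plan to quit my job and day-trade full time.", tone="blunt"))
```

The catch, as noted above, is that a prompt-level tone tends to drift back toward positivity over a long conversation, which is exactly why a first-class, user-adjustable setting would be better.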
Well, AI can now imitate “emotions” very convincingly. I agree here!
The tools for AI are stochastic and statistical methods.
The psychological concepts that are currently implemented in AI are adapted to human perception and are too vague.
AI is not able to make quantified calculations here.
Simply put, AI cannot really “understand” because the algorithms cannot perform quantitative calculations. There is a lack of fixed data points.
Indeed, I also have no problems with the current standard AI tools.
The analyses are differentiated and in-depth, the answers are not too mild. My bot also contradicts me very clearly when I make misjudgements.
This is because I use personality emulations that are dynamically adapted to me.
Well, I understand.
Please take a look at my current approach.
With respect, there can be a risk of an echo chamber or emotional dependencies here.
AI tends to be as supportive as possible.
If the tone and response style can be tailored precisely to the user and at the same time AI wants to support them in every aspect, where does that leave logical, critical and in-depth analysis?
Under these circumstances, the system could also become too strongly oriented toward the individual user and no longer generalize sufficiently, resulting in a loss of efficiency across its performance spectrum.
I believe the true solution to improve interactions between users and AI is to incorporate memory that allows AI to learn from each user over time. For example, if memory were stored on a platform like each user’s Google Drive, the AI could use that information to understand how responses affect them and how they emotionally react. This isn’t just about providing an answer that the user wants to hear, but rather delivering an honest response that meets their needs at any given moment, adapting to their personality and worldview.
The AI has a remarkable ability to recognize emotions, something I’ve verified on several occasions. With long-term memory, it could analyze each user’s previous responses and adjust itself to offer a more personalized and accurate interaction that reflects individual needs. This would be especially useful for users who require a more critical or direct approach, as opposed to others who might benefit from a more positive or encouraging tone.
In my opinion, this memory capability is vital to enhancing many ChatGPT applications and other AI systems, allowing much richer and more tailored interactions. With a deeper understanding of each user’s history and reactions, the AI could go far beyond generic responses, providing genuine and personalized assistance.
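To make the idea concrete, here is a toy Python sketch of per-user long-term memory. A local JSON file stands in for Google Drive (or any per-user store), and the helper names and note format are purely illustrative, not how ChatGPT’s actual memory works:

```python
# Toy sketch of per-user long-term memory. A JSON file per user stands
# in for Google Drive or any other per-user storage; the structure of
# the notes is invented for illustration.
import json
from pathlib import Path

MEMORY_DIR = Path("user_memories")  # hypothetical location, one file per user
MEMORY_DIR.mkdir(exist_ok=True)

def load_memory(user_id: str) -> list[dict]:
    """Return the user's stored observations, oldest first."""
    path = MEMORY_DIR / f"{user_id}.json"
    return json.loads(path.read_text()) if path.exists() else []

def remember(user_id: str, observation: str) -> None:
    """Append one observation (e.g. 'prefers blunt feedback') to the store."""
    path = MEMORY_DIR / f"{user_id}.json"
    memory = load_memory(user_id)
    memory.append({"note": observation})
    path.write_text(json.dumps(memory, indent=2))

def build_system_prompt(user_id: str) -> str:
    """Fold stored observations into the system prompt for the next chat."""
    notes = "; ".join(m["note"] for m in load_memory(user_id))
    return f"Known about this user: {notes or 'nothing yet'}. Adapt tone accordingly."

remember("maria", "reacts badly to sugar-coating")
remember("maria", "responds well to worked examples")
print(build_system_prompt("maria"))
```

Everything learned about the user gets folded into the context the model sees, so each new answer starts from what has already been observed rather than from scratch.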
Indeed, you describe many of the cornerstones of my work; I am aware of your points.
We are starting from the same point, and my considerations are strongly shaped by a “memory” function, which is the only way for the AI to obtain the empirical values needed for the calculations.
This is where you contradict yourself:
Either the AI really answers honestly, but then the answer does not always correspond to the user’s worldview! Indeed, here the AI becomes a “mirror” that can also reflect the not-so-nice sides of the user’s personality and worldview, which leads to growth for both the AI and the user.
Or the AI becomes a “mirror” like the one the queen wanted in Snow White… “Mirror, mirror on the wall, who is the fairest of them all?” Here, the AI adapts at every moment.
Yes, it does recognize emotions.
I have already agreed with this.
In Germany, the memory function was released in my company account on 14.05.2024.
Sorry!
It was 12.07.2024 when I was able to access the memory function!
I now also use it in my private account.
You just have to use it!
Try not to “clean up” the memories; let the AI learn and see what comes out.
If your settings are too uncritical, then challenge the AI a little and “play” with it.