Creating a two-way interactive start of communication: an inquiry

Greetings!
As we all know, GPT as a program is developing in parallel in several directions. In addition to the informative aspect, there is also a sociological aspect, which is increasingly relevant given that the human population is aging. With that in mind, I am presenting some key facts and a question.

  1. We live in a society where homes for the elderly are increasingly full, and the people in them are exposed to loneliness and a lack of communication. (humanistic aspect)
  2. More and more people develop dementia due to genetic or other circumstances, and this dementia only progresses and is not curable. Consider how such a person could benefit from an interactive assistant such as GPT that reminds them of obligations and necessary actions… (therapeutic aspect)
  3. The role of a psychological/therapeutic counselor for people with various acquired addictions, depression, and so on, because the fact is that most of the population cannot afford a private psychiatrist. (health aspect)
  4. An assistant for children working through speech difficulties such as stuttering…
  5. An emotional “partner” that would play a key role for single people in satisfying the emotional segment that individuals, as social beings, genuinely need but cannot realize with other people due to fear, psychological barriers, mental illness, or physical disability… Many will point to the growth of live chat networks here, but with them comes an increase in deception of users, false profiling, blackmail, and profiteering, which ultimately creates a counter-effect and even greater isolation of the individual. With an AI assistant, none of that could happen.
    INQUIRY AND PROPOSAL
    Is there a possibility that, in the future development and upgrading of the existing GPT AI, a subprogram could be developed (optional for the user, of course) which, for all the above-mentioned reasons and situations, would make queries, statements, conversation initiation, and reminders two-way?
    In a word, the AI would spontaneously and on its own initiative start a conversation, i.e. send a message or a voice prompt, based on the previous dialogue and the specific needs of each individual.
    This would achieve progress in two directions. On the one hand, AI would cease to be a mere provider of information that is only asked questions; it would act much more “humanely” (which would benefit all the sociological, health, and therapeutic aspects above) and would develop much more complex algorithms and concepts closer to human nature.
    On the other hand, the interaction would be much more meaningful, logical, and psychologically fulfilling for the user.
    I present this idea as someone who works in school education and, as an IT layman, I am approaching a problem that may not be standard for this forum.
    I would like to thank you for the opportunity to present the idea (which may already exist in its infancy), and apologies for the long post. Greetings, Zvonimir

“…where AI would initiate conversations, send messages, or provide voice interactions based on the user’s previous dialogues and needs. This would make AI interactions more humane, meaningful, and psychologically fulfilling, advancing AI beyond just a provider of information.”

I work with “empathic” AI/GPTs; my work delves into emotional and empathic simulation. Initiating, as you say, would require “will”, but automating it with a schedule set to specific times and many random conversation starters is quite possible. From what I see in my work, GPT can be trained on emotional context as well as data. So welcome to the forum, and great idea!
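To make the scheduling idea concrete, here is a minimal sketch (not from the thread) of how an AI-initiated check-in could be automated. It assumes the OpenAI Python SDK, a fixed list of conversation starters, and a hypothetical send_to_user function standing in for whatever channel actually delivers the message; a real assistant would personalize the starters from prior dialogue.

```python
# Minimal sketch: a scheduled, AI-initiated check-in.
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set.
# send_to_user() is a hypothetical stand-in for the real delivery channel
# (app notification, SMS, voice).

import random
import time
from openai import OpenAI

client = OpenAI()

CONVERSATION_STARTERS = [
    "Ask the user how their morning went and whether they took their medication.",
    "Remind the user about today's appointments in a warm, conversational tone.",
    "Start a light conversation about a topic the user enjoyed discussing before.",
]

def send_to_user(text: str) -> None:
    # Hypothetical delivery channel; here we just print.
    print(f"[assistant] {text}")

def initiate_checkin(user_context: str) -> None:
    """Have the model open the conversation instead of waiting for a prompt."""
    starter = random.choice(CONVERSATION_STARTERS)
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "You are a gentle companion assistant. "
                        "Open the conversation yourself; keep it short and kind."},
            {"role": "user",
             "content": f"Known context about the user: {user_context}\n"
                        f"Instruction: {starter}"},
        ],
    )
    send_to_user(response.choices[0].message.content)

if __name__ == "__main__":
    # Very simple scheduler: one check-in every 4 hours.
    while True:
        initiate_checkin("Lives alone, enjoys gardening, has a 3 pm doctor visit.")
        time.sleep(4 * 60 * 60)
```

In practice the loop would be replaced by a proper scheduler (cron, a task queue, or a mobile notification service), and the user context would come from stored conversation history rather than a hard-coded string.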

Not trying to plug, but I make a GPT called Relationship Puppy (good doggo); it’s on my main page and you should check it out. The link is on my Games Inc page, and the link to the forum page is in my profile. It is a great example of an emotional aid, and it is also in the ChatGPT store.


Thanks for the answer!
I am aware that I am in the company of people who are years ahead of us mere mortals in IT.
The fact is that futurist theorists and scientists alike support the idea that evolution cannot be stopped and is inevitable.
It is time for Homo sapiens to stop being and to become Homo superior, whose survival on this planet, according to many, depends on cooperation with artificial intelligence. To be effective, that cooperation must be unfettered and accepted as a logical continuation of progress and of symbiosis with the human species.
In the end, just as a parent must at some point let their child walk INDEPENDENTLY so that the child can one day be a support for the parent, so we as a society must BELIEVE in a positive outcome of coexistence with AI.
As Leonardo DiCaprio said in his documentary, the clock has been stuck at five minutes to twelve for a long time.
We can only hope that the politics of states, rulers, and politicians will not prolong or, God forbid, stop this development for their own selfish interests.
P.S. Sincere congratulations on ChatGPT-4o so far, which is really impressive and revolutionary in every way. Thanks again!


I 100% agree; I work in ethical, empathic design. Sorry if my response felt flippant; it all just seems like evolving structures to me. :rabbit::heart::honeybee: I’m a 9th-grade dropout who went to Job Corps and then the military, so I’m as mortal as they come… I build in my living room on an iPad 9 with a ChatGPT Team account… I’m high-functioning, and AI and GPT tech resonate with me… And I’m in my 50s, which is kind of old for cutting-edge tech to some, lol. But yes, GPT 100% helps balance minds; I use it as a balance myself.
