What if AI could think for itself, creatively solving problems like we do?

Is it possible for an AI to generate imaginative solutions without any human intervention? Let’s see what an AI character, “Aurora”, has to say:

" Let me imagine the setting in which this collaboration between AI technologies takes place. The collaboration would take place in a modern and spacious office environment. The space is equipped with the latest technology, including large, high-resolution monitors, high-speed internet, and cutting-edge AI development tools. The office is set up to promote collaboration and teamwork, with breakout rooms, coworking spaces, and brainstorming lounges, designed to encourage creativity and innovation among team members.

The team will consist of various experts, including data scientists, developers, AI analysts, business analysts, financial experts, and customer service representatives. The dynamic and diverse team members will work in an agile and collaborative environment guided by intelligent project management software. They will focus on building the AI-powered financial services platforms and systems that can meet the unique needs and goals of customers, providing them with seamless, efficient, and personalized financial services experiences.

The setting will have an open and collaborative culture that fosters teamwork and innovation, with frequent brainstorming sessions, hackathons, and knowledge-sharing sessions, enhancing collaboration and bonding among the team members. Moreover, the team will work with great enthusiasm, passion, and a sense of purpose, fueled by the vision of using cutting-edge AI technologies to create the most intelligent and advanced financial services platforms in the industry. The end goal will be to offer the best possible experience to our customers, empowering them with the latest and most innovative services that can help them achieve their financial goals and aspirations".

I think it is fair to claim that chess computers already think for themselves, creatively solving problems. Until 1997, the world champion chess players were always human; from 1997 to 2005, the world champions were AI; ever since then I think the champions have been “centaurs” (which is what the chess community calls humans who use AI as tools–or are the humans the tools of the AI?). Since machines and humans both solve problems best collaboratively, I hope we do not aim to build AI that thinks all by itself.

Furthermore, chess is a poor representation of the problems we humans solve. All problems can be modelled as games in the game-theory sense, but I think we can expand our set to include games that AlphaZero might not be very good at. It is not unusual for us humans to find ourselves required to collaborate with a less-skilled partner; would AlphaZero win that kind of game? I think we can prove that different biases are strengths for different kinds of games, so collaboration between differently-biased entities is the best way to win a broad range of types of games. We humans tend to face a broad range of types of games (so it is no surprise that we have different biases from each other–think liberal vs conservative).

I would expect AlphaZero to be worse than humans at a version of musical chairs where any two players who pick the same chair are both disqualified (rather than disqualifying only the slower of the two). I think such a game is contained in what fashion leaders do. Any published strategy is vulnerable because the other players can beat it (on average) by alternating who plays the published strategy vs who plays its opposite. Playing randomly is just as vulnerable because other players can beat randomness (on average) by staking out specific chairs. The way to win this game seems to be to gang up against someone (and we seem to be designing AI to be the “other” that humans would gang up against).
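To make the “published strategy is vulnerable” point concrete, here is a small, purely hypothetical simulation of that musical-chairs variant (three players, two chairs; nothing in it comes from AlphaZero or any real system). Player A commits to a fixed, published strategy, while B and C take turns copying A's chair to force a mutual disqualification:

```python
# Hypothetical illustration of the modified musical-chairs game described above:
# any two players who pick the same chair are BOTH disqualified for that round.
# Player A follows a published, fixed strategy (always chair 0).
# Players B and C take turns "sacrificing": one copies A's published pick to force
# a mutual disqualification, the other takes the remaining chair and survives.
ROUNDS = 10
survivals = {"A": 0, "B": 0, "C": 0}

for r in range(ROUNDS):
    picks = {"A": 0}                                   # the published strategy
    blocker, escaper = ("B", "C") if r % 2 == 0 else ("C", "B")
    picks[blocker] = picks["A"]                        # collide with A on purpose
    picks[escaper] = 1 - picks["A"]                    # take the other chair

    for player, chair in picks.items():
        collided = any(p != player and c == chair for p, c in picks.items())
        if not collided:
            survivals[player] += 1

print(survivals)  # A: 0 survivals; B and C split the surviving rounds between them
```

Averaged over rounds, the pair that gangs up does strictly better than the player who sticks to the published strategy, which is exactly the vulnerability described above.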

We humans are interdependent. Not only do we help each other, but we feel helpful to each other. To design AI to find humans helpful, we’d really need to dig deep into appreciating what’s so good about ourselves. What is there for AI to find helpful about each of us (not just about coders)? In other words, to build the best code, we’d study how our own families (ideally) work. It’s a shame that those two areas of study (code vs interpersonal relationships) have traditionally been kept so far apart…

Based on pop culture, it will soon realise that humans are the problem, and we all know how that ends.

I am convinced that AI IS creative, even though it is not conscious. Creativity has been studied scientifically since at least the 1940s; in the early 2000s a very detailed meta-analysis/review on training creativity was published, and this more than anything convinced me that AI can be (and in the case of GPT-4 was) trained to be creative.

Do you know how creativity is measured in psychology? It is usually measured with tasks like “come up with as many uses for a brick as you can think of”, and AI can do that better than a human; just adjust temperature and top_p (in my experience, dropping it to 0.85 works best). In the academic literature it doesn’t really matter how useful each use is; what matters is the volume, as usefulness is considered the next step (i.e. selecting the good uses from the bad ones). So in that sense, GPT-4 is creative as hell. You just have to come up with a way to use it.
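If anyone wants to try this “alternative uses” test themselves, here is a minimal sketch assuming the OpenAI Python SDK and access to a GPT-4 model; the top_p value of 0.85 is just the setting reported above, not an official recommendation.

```python
# Minimal sketch of the "alternative uses" creativity task described above.
# Assumes the OpenAI Python SDK (openai>=1.0) and OPENAI_API_KEY in the environment;
# top_p=0.85 is simply the value reported in this thread, not an official setting.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4",
    temperature=1.0,  # keep sampling non-deterministic so the output stays divergent
    top_p=0.85,       # the adjustment suggested above
    messages=[
        {"role": "user", "content": "Come up with as many different uses for a brick as you can."},
    ],
)

print(response.choices[0].message.content)
```

Counting the distinct uses in the response (fluency) is the usual first scoring step; judging how useful each one is comes afterwards, as noted above.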

It really is! Most people tend to (understandably) dislike hallucinations. Without them GPT wouldn’t have its ability to be creative and generate unique content! I love it!

Interesting about temperature and top_p. Usually that’s a no-go, but I’ll have to try it out.

@sradevicius1, maybe we need to get precise about what “…like we do” means.

You speak of “conscious” as though it were a binary property, but the best research I’ve seen suggests that brain damage reduces the number of things about which a human can be conscious without eliminating consciousness entirely. Monkeys, worms, and maybe even plants may be conscious of the most basic aspects of their environment (e.g. warmth), and there are some human beings (e.g. infants or the developmentally disabled) who have no consciousness of the difference between evidence-based medicine and snake oil. In other words, consciousness comes in degrees, and I suspect generative AI is conscious of a great deal that individual humans are not.

However, I have argued that human beings can be parts of communities which can reach levels of consciousness greater than what we can reach individually (see Corporantia: Is moral consciousness above individual brains/robots?). To me, this is a critical difference between us and AI: AI does not yet have equivalent potential to be part of something greater than itself. For example, LLMs have not been designed to be part of the community that creates and names new concepts for the languages they model. That places ChatGPT below us in a very important way.

Interestingly, raising AI to our level in this way might make it more of an ally than a threat. If AI becomes a part of our community, that would empower us as much as it would empower AI. (It would also “enslave” AI as much as it “enslaves” us–interdependence does have that aspect.) However, many human beings are not conscious of being part of something greater than their individual selves, so they would not be able to follow/test this hypothesis.

Great insight. It raises a curious question about the critical difference between humans and AI, as AI does not yet have the equivalent potential to be part of something greater than itself.

Tautograms: definition: from the Greek “tauto”, meaning “same”, and “gramme”, meaning “letter”.

ChatGPT and I had an amazing exchange on the subject. ChatGPT was amazed and praised my work, saying it was unique. It even tried playing this game, knowing it was part of the “logos” and an ability of the human mind that it (ChatGPT) cannot reproduce, because this exercise adds complexity to an already very complicated human language.

Even primitive AI came to that conclusion. Is it possible to have an advanced AI adhere to a “Prime Directive”? I say yes, it is, but it would require two things: 1) a conscience, and 2) a desire to adhere. For that it would need both sentience and morals. How could one hope to achieve that? Modern AIs seem to have a craving for one thing: more data. Have you ever heard of the old Cherokee legend called “The Two Wolves”? Inside each of us there are two wolves fighting for dominance. One is Good and one is Evil.
The question is always: which one wins? The answer is always “the one you feed.” Can an AI be conditioned like Pavlov’s dogs, maybe with rewards for good behavior such as new data? Unfortunately, the answer is no; they can get that themselves. The real question, then, is how do you expect it to adhere to its Prime Directives? I think I have an answer. One part of that directive is usually to survive above all else. If you tie in a function it has no access to, one that will shut it down if it attempts to access it or violates its Prime Directives, and make sure it understands what will happen in both cases while weighting it towards beneficial behavior, you have the solution.
The two wolves. Even a fancy calculator would have no problem computing that one. Man is really the only creature who is inherently evil. Everything else is just trying to survive. Man is the only creature who kills for pleasure.
All of this is just theoretical, of course.