I’m not an expert, nor am I aligned with any ideological camp; I view this from a technical perspective, in case anyone is interested…
As an example, imagine a rendering engine like Maya, but one that uses AI elements instead of ray tracing. Unlike DALL-E, which simply renders an entire image in one shot, such a specialized tool would have various AI algorithms, each trained to solve one specific task well and add its result to the image: global illumination, caustics, textures, reflections, and so on, while others handle animation.
A human artist receives tools that let them create and optimize meshes faster, then animate them. They no longer need to deal with complicated and tedious technical tasks like rigging. All these specialized neural network systems are AI functions that can be broken down into a few fundamental algorithms. It’s more a question of graph theory and of how these graphs can be trained to perform specific tasks as efficiently as possible. They are nested within each other, until eventually you have Maya-like software that uses AI instead of ray tracing, with many specialized sub-functions. An AI language would then use these almost like functions, but within a network-like graph system.
This is how programs will evolve. To put it that way: AI languages are not like today’s one-dimensional programming languages, where functions follow one another in a line or sequence, but are instead multidimensional graphs in which the important parts are networked.
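To make the graph idea concrete, here is a minimal sketch (entirely hypothetical; the stage names and functions are stand-ins for trained models, not any real Maya or AI API). Instead of calling functions in a fixed sequence, each specialized stage declares which other stages it depends on, and the "program" is the graph itself:

```python
# Hypothetical sketch: specialized "AI" stages wired as a graph rather than
# a linear call sequence. Each node stands in for a trained model; here they
# are plain functions so the example stays runnable.
from graphlib import TopologicalSorter

# Stand-in "models": each takes the results of its upstream stages.
def geometry(inputs):     return {"mesh": "optimized mesh"}
def illumination(inputs): return {"light": f"GI over {inputs['geometry']['mesh']}"}
def reflection(inputs):   return {"spec": f"reflections over {inputs['geometry']['mesh']}"}
def composite(inputs):    return {"frame": (inputs["illumination"]["light"],
                                            inputs["reflection"]["spec"])}

STAGES = {"geometry": geometry, "illumination": illumination,
          "reflection": reflection, "composite": composite}

# Edges: stage -> its prerequisites. This is the "networked" structure;
# illumination and reflection both branch off geometry, then merge again.
GRAPH = {"geometry": set(),
         "illumination": {"geometry"},
         "reflection": {"geometry"},
         "composite": {"illumination", "reflection"}}

def render(graph, stages):
    results = {}
    for node in TopologicalSorter(graph).static_order():
        deps = {d: results[d] for d in graph[node]}
        results[node] = stages[node](deps)
    return results["composite"]["frame"]

print(render(GRAPH, STAGES))
```

The point of the sketch is only the shape: adding a new specialized stage means adding a node and its edges, not rewriting a call sequence, which is what "networked instead of one-dimensional" would mean in practice.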
At some point, this becomes too complex for a human to program. We will use AI much like we use a lathe. A lathe can work a huge piece of metal, something a human couldn’t manage physically, neither in weight nor in precision. So we create a tool, and then a tool to build a tool, which ultimately produces the product. That is how AI development will proceed. And we will simply understand even less of its functions, just as we don’t really understand today’s software either.
As humans, we already have trouble with basic logic today. Network-like processes and feedback loops, and especially both together, are too much for us. But we can understand the principles. (There is more… I can’t go into it here… but think about it…)
The most fascinating thing about it is that a simple computer with minimal functions (storing data, basic math, comparisons, jumping in memory) can fulfill this vast range of functions (writing text, playing music, playing videos, simulating 3D games, ray tracing, and so on and so forth). And all of it based on nothing but numbers, coded in {0, 1}. It perfectly exemplifies that the whole is more than the sum of its parts. And AI is simply the next stage of this phenomenon, and yes, it is impressive, especially because the foundation is so simple. The same applies to the universe, by the way.
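As a toy illustration of that claim, here is a minimal sketch of a machine that has only the primitives listed above: storing data, one arithmetic operation, a comparison, and a jump. The instruction names and layout are my own invention, not any real instruction set. Even this tiny set is enough to build something the machine doesn’t "have", such as multiplication:

```python
# Toy machine with only four primitives: store, add, compare, jump.
# A program is a list of instructions; memory is a dict of named cells.
def run(program, memory):
    pc = 0  # program counter
    while pc < len(program):
        op, *args = program[pc]
        if op == "set":                      # store data
            memory[args[0]] = args[1]
        elif op == "add":                    # basic math
            memory[args[0]] += memory[args[1]]
        elif op == "jnz":                    # compare to zero, jump if nonzero
            if memory[args[0]] != 0:
                pc = args[1]
                continue
        pc += 1
    return memory

# Multiply a * b (for positive b) using nothing but add/compare/jump:
MUL = [
    ("set", "acc", 0),
    ("set", "neg1", -1),
    ("add", "acc", "a"),   # acc += a
    ("add", "b", "neg1"),  # b -= 1
    ("jnz", "b", 2),       # loop back while b != 0
]

print(run(MUL, {"a": 6, "b": 7})["acc"])  # 42
```

Multiplication is nowhere in the machine’s primitives, yet it emerges from combining them; stack enough of these layers and you get text, video, ray tracing, and neural networks, which is exactly the "whole is more than the sum of its parts" point.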
Today, we are fascinated by it, just as we were back when the first PCs appeared, I witnessed that. Eventually, we’ll get used to its functions, and it will become a regular tool.
The main problem isn’t the tools, but the people using them and their motivations. And somehow, NO ONE wants to discuss that. A tool is a willpower and motivation amplifier. And at some point, this amplification will become too great for human insanity.
Melee weapon > ranged weapon > projectile weapon > chemical and biological weapon > genetic weapon > atomic bomb > hydrogen bomb > quantum bomb (CERN) > …
So far, we have focused 95–99% of our energy on parasitism and destruction: money, resources, intelligence, and human lifetime. And we project the danger onto the tools instead of looking in the mirror. If the insanity doesn’t stop, it will be over soon, and AI is NOT to blame for that.
So, in short, I don’t much care that we won’t understand an AI language…
… But it’s your post, @PaulBellow, so I can delete mine if it’s annoying or too much.