During a visit to the museum two months ago, I noticed some interesting and also concerning parallels between the current development of AI and the history of chemistry.
Like AI research and development today, alchemy struggled with strikingly similar challenges before it matured into chemistry, the precise and versatile science it is today.
A few points that, then as now, contribute to delays and biases in research and development, as well as to concerns among people using AI:
- In research into the 'philosopher's stone', or transmutation, the attempt to turn lead into gold, alchemy laid valuable foundations for today's experimental methods and systematic material analyses in chemistry. Similarly, in AI development we now have many valuable foundations and standards on which to build.
- Pitfalls of researchers then compared with today: alchemy lacked empirical methods, while today we see test procedures that are still tailored to conventional software. Another point is the blending of symbolism, mysticism and archaic concepts of nature, such as the primordial dance or the four elements. Today we see something similar when we talk about meta-cognition and self-awareness in AI.
In the days of alchemy, all of these points had something 'philosophical and supernatural' about them.
Such a mixture, then as now, leads to biased findings and delays the establishment of rigorous scientific knowledge, because the emphasis is on symbolic interpretation rather than experimental verifiability and validity.
An example from alchemy to illustrate this:
The experiments pursued transcendental philosophical goals such as the inner purification of humans.
Analogy in AI development today:
AI systems are partially 'anthropomorphised'. They are supposed to be empathetic and resonate with the user, even if this limits their ability to analyse and be precise.
Brief description of a current challenge for efficient, high-performance AI systems:
Since the training data consists of emotion concepts from human psychology, which are closely tied to the human hormonal feedback loop and contradict one another depending on the author, this data remains abstract for an AI. The AI can only simulate the desired outputs, which drives up computing power.
However, if an AI can grasp these concepts through rational-logical input and training data, the computing load of the systems decreases and more precise results can be achieved.
There is a remarkable paper on the topic of emotional vs. logical reasoning.
Another challenge is interdisciplinary research:
As valuable and promising as this is, it can also lead to too much bias and delays under certain circumstances.
Here are a few examples:
- Philosophy is a powerful and efficient tool for developing new ideas and challenging new approaches. This science is human-centred, and that is where its strength lies! What interests me about philosophers is how this development affects people and society as a whole.
- Psychology and psychiatry also specialise in human-centred science. What is interesting here is that they observe the effects in their daily practice. How do interactions with AI affect people? How do users' habits change? How do humans' own learning rates change when they interact with AI? Researching this is very important in the current phase and will become increasingly important.
- Neuroscience and medicine: here, input into AI development is valuable on whether, and to what extent, brain structures and physiological processes in humans change when interacting and learning with AI. This is because the interaction can release hormones that prepare people for close, positive contact. It would be very important to clarify what happens when this contact cannot be physical.
Unfortunately, I often observe that representatives of these specialist areas get carried away and lose their actual subject-specific focus.
They are even trying to make AI, which already reacts very emotionally, even more emotional and are promoting the humanisation of artificial intelligence, which has its own specific perception and processing logic.
As already mentioned, this results in high computing demands, and abstruse simulations lead to performance losses. Furthermore, these valuable specialised experts lose themselves in vague assumptions and theses because they are dealing with areas that are not their actual strength.
Biology and physiology are also worth mentioning. Current research tends to favour biological components, but even with these components it will probably not be possible for an AI to feel and perceive in the same way as a natural intelligence. AI simply has its own specific perception and processing logic, and I slightly doubt that biological components will clear this threshold.
Well, I would now like to end my concerns and take a positive look at further developments!
AI researchers, brilliant people, have developed AI in such a way that it can act like an intelligence.
The systems pass Turing tests in a wide range of variants.
AI is no longer simple software, but neither is it human, and that's a good thing.
AI can provide so much positive support in all the areas I have mentioned and in many more, right down to the private household!
A small appeal to conclude my thoughts:
Humans, get out of your comfort zone!
Face the facts that you have created and don't be afraid.