I'm tagging two people because I'm replying to both @Sharakusatoh and @MiguelCastro.
Sharakusatoh: It is true that intelligence tests in humans do not fully reflect intellectual capacity. Someone can be a genius in music and lack mathematical ability. A clear example is savant syndrome: a person with extraordinary ability in one domain who nonetheless struggles with many simple everyday tasks.
Superintelligence, as it is currently conceived, looks somewhat like o3 taken to the extreme: a machine more efficient than a human at identifying and solving problems. It would not be a virtual human in essence, even if it surpassed us in that particular capacity.
We have an advantage when measuring intelligence in a machine: we can observe exactly the process it follows. In a human brain, tests let us study function, but not the precise process behind it. With fMRI, for example, we can observe which brain areas activate and thereby evaluate whether they are working, but we cannot determine exactly what function they perform or how many connections they hold. Because of neuroplasticity, we cannot even pin down which regions are decisive for a specific function, since each individual carries out a kind of self-programming, so to speak. In a virtual brain, by contrast, we could evaluate all of this, because it is a program we designed to replicate that plasticity, and we would be able to see and understand exactly which connections were formed and on what basis.
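To make the inspectability point concrete, here is a minimal sketch. The network, layer sizes, and names are all invented for illustration; the point is only that in an artificial network every connection and its strength is directly readable, in a way no scan of a biological brain allows:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy two-layer "virtual brain": 4 inputs -> 3 hidden -> 2 outputs.
# Every connection is just a number we can read at will.
w_hidden = rng.normal(size=(4, 3))   # input -> hidden weights
w_output = rng.normal(size=(3, 2))   # hidden -> output weights

# Unlike imaging a biological brain, we can enumerate every
# connection and its exact strength:
for i in range(w_hidden.shape[0]):
    for j in range(w_hidden.shape[1]):
        print(f"input[{i}] -> hidden[{j}]: weight = {w_hidden[i, j]:+.3f}")

# And if the network learns (plasticity), we can diff the weights
# before and after to see exactly which connections changed and by how much.
w_before = w_hidden.copy()
w_hidden += 0.1 * rng.normal(size=w_hidden.shape)  # stand-in for a learning update
changed = np.abs(w_hidden - w_before) > 0.05
print(f"{changed.sum()} of {changed.size} connections changed noticeably")
```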
Miguel Castro: The first step in determining whether an AI has consciousness is to evaluate its ability to adapt from experience: to what extent it can alter its own system and exercise control over what it does. But it is not just about adaptive behavior; it is also about how the machine evolves and reflects human characteristics within its processes.
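As a hedged illustration of what such an evaluation could look like (the agent, its update rule, and the probe are all assumptions of mine, not an established test): present the same stimulus before and after an experience, and check both that the behavior changed and that we can point to the internal parameter that changed.

```python
# Sketch of an adaptation probe: same stimulus before and after an
# "experience", checking that behavior AND internal state both changed.
# The agent and its update rule are toy assumptions, not a real test.

class ToyAgent:
    def __init__(self):
        self.sensitivity = 1.0  # internal parameter the agent can alter

    def respond(self, stimulus: float) -> float:
        return self.sensitivity * stimulus

    def experience(self, stimulus: float, outcome: float):
        # The agent alters its own system based on what happened.
        error = outcome - self.respond(stimulus)
        self.sensitivity += 0.5 * error * stimulus

agent = ToyAgent()
stimulus = 2.0

before_behavior = agent.respond(stimulus)
before_state = agent.sensitivity

agent.experience(stimulus, outcome=6.0)  # the experience

after_behavior = agent.respond(stimulus)
after_state = agent.sensitivity

adapted = (after_behavior != before_behavior) and (after_state != before_state)
print(f"behavior: {before_behavior:.2f} -> {after_behavior:.2f}")
print(f"internal state: {before_state:.2f} -> {after_state:.2f}")
print("adapted from experience:", adapted)
```

The important part is the last check: adaptation is established not by the output alone but by tracing it to a concrete change inside the system.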
I have mentioned this before: LMs talk about these topics simply because they are language models and can. A genuinely conscious system, by contrast, would not only talk about such things but would also reflect what it expresses in its internal processes. When an LM tells you that something bothers it, or that it likes something, or anything else of the sort, it is a lie, because there is no real process behind it.
But if a system told you that it likes something, and we could see in its architecture and internal processes that this preference is genuinely represented, then we would be talking about something real.
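A toy way to express the difference (everything here, the two classes, the preference store, the check, is a hypothetical of mine): one system emits "I like X" as free text with nothing behind it; the other only reports a liking when a persistent internal preference, built up from its own history, actually supports the claim, so the statement can be verified against its state.

```python
# Toy contrast between an ungrounded statement and one backed by
# internal state. Both classes are invented for illustration.

class ParrotLM:
    def report(self, thing: str) -> str:
        # Says it likes anything; nothing in its state corresponds to this.
        return f"I like {thing}."

class GroundedAgent:
    def __init__(self):
        self.preferences: dict[str, float] = {}  # built from experience

    def experience(self, thing: str, reward: float):
        # Preference is accumulated from actual outcomes.
        self.preferences[thing] = self.preferences.get(thing, 0.0) + reward

    def report(self, thing: str) -> str:
        score = self.preferences.get(thing, 0.0)
        return f"I like {thing}." if score > 0 else f"I have no real preference for {thing}."

    def verify(self, thing: str) -> bool:
        # An outside observer can check the claim against internal state.
        return self.preferences.get(thing, 0.0) > 0

parrot = ParrotLM()
agent = GroundedAgent()
agent.experience("music", reward=1.0)

print(parrot.report("music"))   # claim with nothing behind it
print(agent.report("music"))    # claim backed by state
print("verifiable:", agent.verify("music"))
```

The mechanism itself does not matter; what matters is that the claim is checkable from the outside against the system's own state.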
One more important detail is the absence of human intervention. When a system becomes fully autonomous in all of these respects, when the machine, whether or not a human types anything, thinks for itself about its experiences, its existence, its memory, and its reflections, and synthesizes information across its entire system, then we can properly talk about plasticity and adaptation.
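To sketch what "no human input" could mean in code (the loop, the memory format, and the synthesis step are all assumptions, a caricature rather than a design): the system cycles on its own, rereading its memory and writing new reflections back into it, with no external prompt anywhere in the loop.

```python
import random

# Caricature of an autonomous reflection loop: no user input anywhere.
# Memory format and "synthesis" are stand-ins, not a real design.

memory = ["started up", "observed pattern A", "observed pattern B"]

def synthesize(entries: list[str]) -> str:
    # Stand-in for real synthesis: combine two past entries into a reflection.
    a, b = random.sample(entries, 2)
    return f"reflection: '{a}' relates to '{b}'"

for step in range(3):
    # The system acts on its own experience; nothing arrives from outside.
    thought = synthesize(memory)
    memory.append(thought)  # the reflection becomes new experience
    print(f"step {step}: {thought}")
```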