In recent months, the debate has intensified over how artificial intelligence models like GPT use copyrighted content. In particular, critics argue that these AIs have “appropriated” works under copyright, which would amount to infringement.
However, it is essential to draw a critical distinction here, from a legal and ethical standpoint and from the standpoint of how human knowledge evolves:
Studying is not stealing. Learning and generating new content are the foundation of human progress.
Since the dawn of knowledge, humanity has studied, absorbed, reinterpreted, and created on the basis of others’ prior work. If studying a theory, understanding it, and then sharing our own conclusions were illegal, human knowledge would have stagnated.
A model like GPT does not store or literally reproduce protected content (apart from exceptional, controllable cases); it generates new content based on learned patterns. This is no different from a human studying books and articles and then writing something new.
There is no appropriation without improper commercial use. GPT does not “sell” others’ content.
AIs do not present themselves as the authors of the original works, nor do they sell those works as their own. Moreover, OpenAI and other providers offer free access to millions of people. Even if there were a “benefit” from processing protected data, there is also a massive return of value to society.
“Fair use” and transformative use are key.
In many legal frameworks, fair use in the U.S. and the text-and-data-mining exceptions in Europe among them, the law permits studying, processing, and transforming content for educational, informational, or research purposes. AI falls within this field when it does not directly copy or commercialize content but transforms it and generates new value.
Social and scientific benefits cannot be ignored.
GPT and other models have helped millions: students, researchers, people with disabilities, entrepreneurs. No one is forced to use GPT, but those who do receive value. This social reciprocity partly justifies its existence and its usage model.
The real problem: the superhuman power accumulated by LLM owners
The current debate is mistakenly focused on whether AIs “steal” content to train. That is not the real threat. The real risk is that the owners of LLMs (Large Language Models) accumulate superhuman economic and social power through their exclusive control of advanced technologies that process, synthesize, and manipulate information on a global scale.
This power creates disproportionate advantages over individuals, small businesses, and even governments. AI is not dangerous in itself; the concentration of its control is.
What should we really be discussing?
Not whether AI may study content, but how we balance the power its dominance creates. How do we ensure there are no monopolies of thought, information, or AI-driven decision-making? How do we prevent access to powerful models from being restricted to elites while everyone else is left at a disadvantage?
The future of AI is not decided by copyright. It is decided by who controls the power that AI generates.