For hundreds of years, human beings have tried to figure out what separates them from animals. Biology, sociology, anthropology and even philosophy are nourished by this existential question. Even the law has weighed in, establishing that certain groups of animals, under certain circumstances, can be considered "legal persons".
Will artificial intelligence (AI) have rights, then? Will it have the right to… life?
Out of the breakneck development of artificial intelligence, a new element has emerged, perhaps a fifth element, made neither of earth, nor of fire, nor of air, nor of water. It is the anti-life: the artificial intelligence that forces humanity to confront a superpower of its own creation.
Artificial intelligences now pass the Turing test, the classic tool for evaluating a machine's ability to exhibit intelligent behavior, and they do so without blinking an eye.
In Blade Runner, it was already hard to tell humans from robots. Emotion has almost always been the human factor that made robots and machines fall into the trap and give themselves away, even though the replicant Roy Batty's tears in the rain remain the most moving moment in the history of science fiction cinema.
But what will happen from now on? What will be human when artificial intelligences are everything? What test are we going to invent to detect them?
1. Spontaneous generation
One of the notable aspects that separates us humans from artificial intelligence is the spontaneous generation of actions and knowledge. That is, the impulse.
The human being is a spontaneous creator of everything. A person can wake up one day and imagine an idea, a story, a poem, a creative thought. From personal history, human beings create new knowledge, new stories and new experiences.
There is no artificial intelligence that generates knowledge or performs actions spontaneously. In an article published in the journal Nature, the scientists Miguel Aguilera and Manuel Bedia, of the University of Zaragoza, concluded that it is possible to reach an intelligence that generates mechanisms to adapt to its circumstances.
This might resemble spontaneous action, but it is far from being an act of will. Every action carried out by an artificial intelligence is designed and programmed by a person.
2. The rule of ethics
This brings us to the second big difference: ethics. Artificial intelligences and machines do not have ethics per se; ethics must be instilled in them. They only follow pre-established parameters, clear and precise rules about what they must do.
Human beings have regulations (constitutions, laws, religion, etc.) telling them what they must do, and they are also clear about what they must not do. But ethics is more than a set of regulations; it goes beyond a guide.
Ethics is, nothing more and nothing less, the discernment between good and evil. It is so important to our species that even 5-month-old babies have been found to make moral judgments and act accordingly.
Those who do have ethics are the people who program the machines and artificial intelligences. A machine is neither good nor bad. It is effective. It does what it is ordered to do and what it was programmed to do.
Even so, ethics can certainly be programmed. The physicist José Ignacio Latorre explains this in his book "Ethics for Machines". Latorre predicts: "Artificial intelligence will sit in the Council of Ministers".
Today, ChatGPT is programmed not to produce sensitive content and does not give access to the deep web. Thus, one can program according to notions of what is and what ought to be.
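As a toy illustration only (real systems like ChatGPT rely on far more sophisticated training and moderation layers, not keyword lists), a rule-based filter of the kind described, where pre-established parameters decide what the machine may and may not do, could be sketched like this. The topic list and matching logic are illustrative assumptions:

```python
# Toy sketch of a rule-based content filter. The blocked topics and the
# substring-matching logic are illustrative assumptions, not a real
# moderation system.
BLOCKED_TOPICS = {"deep web access", "weapons manufacturing"}

def respond(prompt: str) -> str:
    """Refuse if the prompt touches a pre-established blocked topic."""
    lowered = prompt.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return "I cannot help with that request."
    return "Processing: " + prompt

print(respond("How do I get deep web access?"))
print(respond("Write a poem about rain"))
```

The point of the sketch is the essay's: the "ethics" here is nothing but rules decided in advance by a human programmer.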
However, as time passes and ethical parameters change, they must be corrected so that the normative basis of artificial intelligence keeps pace with that of the human being.
3. The intention can only be human
Another important aspect is intention, and the intention of human action is intrinsically related to morality. In her book “Intention”, the philosopher Elizabeth Anscombe argues that intention cannot be reduced to mere desires or internal psychological states.
She holds that intention is an essential feature of action and that it is intrinsically related to moral responsibility. Thus, intention cannot be separated from the action itself when determining whether an act is morally right or wrong.
Anscombe criticizes ethical theories that focus solely on the consequences of an action and do not consider the intention that precedes them. Lacking ethics and morality, artificial intelligence lacks intention. Intention remains circumscribed to the programmer. Each of the three aspects discussed so far would require rivers of ink to be fully understood.
4. No regrets or psychological problems
It is almost provocative to ask about the differences rather than the similarities. The differences are clear. AIs have no experiences. They have no history. They have no psychology or psychological problems.
They have no remorse for their actions (a fundamental aspect of the ethics and morals section). They do not love nor are they loved. They do not suffer or feel pain. They have no opinion of their own, because nothing is their own.
If ChatGPT goes out of fashion (which I doubt) and is no longer consulted, its existence is useless. It only exists insofar as it is useful to the human being. It has no identity; its identity is a human construction.
AI can also be destructive. Without getting into apocalyptic sci-fi speculation, it could lead not only to the end of millions of jobs around the world, but also to humans being relegated to a marginal position in the productive world. After all, it depends on human beings themselves. It is in our hands to use it as a constructive or a destructive tool.
But in case someone, in the near future, comes to doubt its nature, let us include a trap in its synthetic soul: a wink that, when necessary, reminds us that we are dealing with a fifth element, a non-human.