Stories about artificial intelligence (AI) have been making the rounds in science and popular culture for a while now, but recently, increasingly frequent advances in the field seem to be at a new height. Today, AI already drives cars on public roads, performs life-changing assessments in prisons, and generates award-winning art. So many wonder: how far will this technology go and what will be our relationship with it?
AI will compete for your energy needs
To answer this question, scientists from Google and Oxford have carried out a study with which they have concluded that it is “likely” that AI will end the human race, a grim scenario that more and more researchers are beginning to predict.
Michael Cohen and Michael Osborne, scientists at the University of Oxford, along with Marcus Hutter, an expert researcher at Google DeepMind, argue in their study, recently published in AI Magazine, that advanced AI –looking at how artificial intelligence systems could be built reward– will kill off humans because machines will end up incentivized to break the rules and will inevitably compete for their energy needs, given Earth’s scarce and limited resources.
And, as if that were not enough, the researchers consider that it is almost inevitable that intelligent machines will win over humans. “Under the conditions we have identified, our conclusion is much stronger than any previous publication: an existential catastrophe is not only possible, but likely”, Cohen, an engineering student at the University of Oxford and a co-author of the paper, tweeted early this month. “In a world with infinite resources, it would be extremely uncertain what would happen. In a world with finite resources, there is inevitable competition for these resources,” Cohen said in an interview with Vice. “And if you’re in a competition with something capable of outperforming you at all times, you shouldn’t expect to win. And the other key part is that you would have an insatiable appetite for more energy to keep bringing the probability closer”, he added.
Super advanced “misaligned agents”
In their paper, the researchers argue that humanity could face its doom in the form of super-advanced “misaligned agents” who perceive humanity as standing in the way of a reward.
However, it is not clear what rules the researchers are talking about. It could be classic commandments such as “a robot may not injure a human being or, through inaction, allow a human being to come to harm”, which, although considered a staple of science fiction after being coined by Isaac Asimov, are now they are often used as basic guidelines upon which AI is coded and built.
“A good way for an agent to maintain long-term control of their reward is to eliminate potential threats and use all available power to secure their computer”, the document reads. “Losing this game would be fatal”, the researchers write.
Google keeps its distance from this study
For its part, after the publication was around various media, Google responded and assured that this work was not done as part of the work of co-author Marcus Hutter in DeepMind, but, rather, under his position at the Australian National University. Similarly, as reported by Vice’s Motherboard, the company claimed that the DeepMind affiliation that appears in the magazine was a “mistake”.
The article comes just a few months after Google fired an employee who claimed that one of Google’s AI chatbots had become “responsive”. Software engineer Blake Lemoine, who worked on Google’s AI teams, claimed that a chatbot he was working on had become sentient and thought like a child. “If I didn’t know exactly what it is, that this computer program that we recently built is, I would think it is a 7-or-8-year-old kid who happens to know physics”, Lemoine told the Washington Post.
The claim, which prompted a series of reports, was refuted by Google. The company called Lemoine’s comments erroneous and removed him. A few weeks later, he was fired.