In the Terminator films, the rebel John Connor faces an army of assassin robots that has taken control of the planet. That apocalyptic scenario fit perfectly into the script of a science fiction movie, but few imagined it could one day become reality. Now 116 robotics and artificial intelligence specialists have written a letter to the United Nations asking for a ban on the development of robots dedicated to war. The document is signed, among others, by Elon Musk, founder of Tesla and SpaceX, who a month ago warned about the dangers artificial intelligence poses to civilization. Another signatory is Mustafa Suleyman, co-founder of DeepMind, the artificial intelligence company owned by Google.
The letter was made public during the opening of the International Joint Conference on Artificial Intelligence (IJCAI), which started Monday in Melbourne, Australia. “Once developed, [autonomous weapons] will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close,” the letter says in its final paragraph.
In the letter, the experts warn of the possibility of a “third revolution in warfare” with the imminent arrival of robots and unmanned equipment that could escalate wars into confrontations with unforeseeable consequences. The signatories caution that the rapid evolution of artificial intelligence could make the development of autonomous weapons possible within years, not decades, as previously estimated. Their warning is addressed to the UN, which will hold a meeting on Friday to discuss the dangers of autonomous weapons.
The military use of artificial intelligence has generated an intense debate in the United Nations for three years, one that goes beyond science fiction and focuses on the legal implications and the risks for innocent civilians in conflict zones. The idea that, in the not-too-distant future, compassion and human judgment will disappear from the execution of attacks worries the delegations. Some twenty countries currently call for a ban. “Today the possibility of losing human lives acts as a deterrent to starting and escalating conflicts, but when the potential victims are robots, the likelihood of confrontation would increase dramatically,” Mary-Anne Williams of the University of Sydney told AFP.
Within the organization there is, in fact, a favorable consensus to formalize a process for discussing the fears associated with these technologies. But rather than considering an international convention that would impose greater control, the major powers are for now more inclined to share good practices and increase transparency.
The United States, Russia, China, France, the United Kingdom, Israel and South Korea are the countries most advanced in the development of autonomous defense systems. NGOs such as Human Rights Watch, however, consider it a moral imperative to draw a red line and maintain human control over decisions about target identification and the ultimate use of force.
The conference held at the end of 2016 to review the treaty regulating the use of conventional weapons already took a first step toward regulating the military use of artificial intelligence by establishing a group of experts who will discuss these systems in earnest. Diplomats at the UN agree that current conventions leave many gaps regarding the development and use of these weapons. Whether that will lead to the negotiation of a legal instrument is another matter.
What the experts fear is an army of robots capable of starting a full-scale military confrontation in minutes, and of making decisions on the ground without receiving orders. Part of what could arrive is already being tested by the world’s major armies, where drones (unmanned aircraft) already fly over enemy terrain and can carry deadly weaponry. The US Navy, for example, will soon have its autonomous Sea Hunter, an unmanned vessel with a range that allows it to carry out attacks at a distance, eliminating the risk of casualties among its own forces. A remotely controlled warship, to simplify the scenario.
This group of experts wants to get ahead of events and persuade the international body to prohibit the development of these autonomous robots and instruments of war in advance, in the same way that the use of chemical weapons in armed conflict is prohibited.
It is not the first time that specialists at this conference have warned about the dangers of artificial intelligence. Elon Musk signed a similar letter in July 2015 expressing concern about the development of killer robots and urging the UN to take action by imposing an international ban. That request was also endorsed by Steve Wozniak, co-founder of Apple, and the scientist Stephen Hawking.