A graphic new video posits a very scary future in which swarms of killer microdrones are dispatched to kill political activists and US lawmakers. Armed with explosive charges, the palm-sized quadcopters use real-time data mining and artificial intelligence to find and kill their targets.
The makers of the seven-minute film titled Slaughterbots are hoping the startling dramatization will draw attention to what they view as a looming crisis — the development of lethal autonomous weapons that select and fire on human targets without human guidance.
The Future of Life Institute, a nonprofit organization dedicated to mitigating existential risks posed by advanced technologies, including artificial intelligence, commissioned the film. Founded by a group of scientists and business leaders, the institute is backed by AI-skeptics Elon Musk and Stephen Hawking, among others.
The institute is also behind the Campaign to Stop Killer Robots, a coalition of NGOs that have banded together to call for a preemptive ban on lethal autonomous weapons.
The timing of the video is deliberate. The film will be screened this week at the United Nations in Geneva during a meeting of the Convention on Certain Conventional Weapons. Established in 1980, the convention is a framework treaty whose protocols prohibit or restrict weapons considered to cause unnecessary or unjustifiable suffering. For example, a 1995 protocol to the convention banned weapons, such as lasers, specifically designed to cause blindness.
As of 2017, 125 nations have pledged to honor the convention’s resolutions, including all five permanent members of the UN Security Council — China, France, Russia, the United Kingdom, and the United States.
The Campaign to Stop Killer Robots is hosting a series of meetings at this year’s event to propose a worldwide ban on lethal autonomous weapons, which could potentially be developed as flying drones, self-driving tanks, or automated sentry guns. While no nation is openly deploying such weaponry, it’s widely assumed that various military groups around the world are developing lethal weapons powered by artificial intelligence.
Advocates for a ban on lethal autonomous weapons argue there is a clear moral imperative: Machines should never decide whether a human lives or dies.
The technologies depicted in the short film are all based on viable systems that are up and running today, such as facial recognition, automated targeting, and weaponized aerial drones.
“This short film is more than just speculation,” said Stuart Russell, professor of computer science at the University of California, Berkeley, and a pioneer in the field of artificial intelligence. “It shows the results of integrating and miniaturizing technologies we already have.”
Representatives from more than 70 states are expected to attend the Geneva meeting on lethal autonomous weapons systems this week, according to a statement from the Campaign to Stop Killer Robots. Representatives from the scientific and technical communities will be stating their case to the assembled delegates.
“Allowing machines to choose to kill humans will be devastating to our security and our freedom,” Russell says in a short commentary at the end of the video. “Thousands of my fellow researchers agree. We have an opportunity to prevent the future you just saw, but the window to act is closing fast.”
Originally published on Seeker.
This Horrifying ‘Slaughterbot’ Video Is The Best Warning Against Autonomous Weapons
We feel sick after watching that.
We’re on the verge of creating autonomous weapons that can kill without any help from humans. Thousands of experts are concerned about this – and the latest campaign effort against this tech is a chilling video demonstrating the kind of future we’re heading for.
In the Slaughterbots short, which we’ve embedded above, swarms of AI-controlled drones carry out strikes on thousands of unprepared victims with targeted precision. What makes the clip so scary is that the scenario is entirely plausible.
The video starts with a spectacular press event where the technology is unveiled for the first time. The miniature drones are able to take out “the bad guys” – whoever they happen to be – without any collateral damage, nuclear weapons, or troops on the ground.
All the drone bots need is a profile: age, sex, fitness, uniform, and ethnicity.
Despite the applause that the tiny drones get at their unveiling, the tech behind them soon falls into the wrong hands, as it tends to do. Before long an attack is waged on the United States Capitol building, with only politicians from one particular party targeted.
The same bots are then used by an unknown group to take out thousands of students worldwide, all of whom shared the same human rights video on social media.
“The weapons took away the expense, the danger, and risk of waging war,” says one of the talking heads in the clip, admitting that “anyone” can now carry out such a strike.
Thankfully this creepy video isn’t real life – at least not yet.
It was published by the Campaign to Stop Killer Robots, an international coalition looking to ban autonomous weapons, and was shown this week at the UN Convention on Certain Conventional Weapons.
The group wants the UN to pass legislation prohibiting the development of this kind of AI technology, and the large-scale manufacture of the associated hardware. Legislation could also be used to police anyone who tried to develop these kinds of systems.
Worryingly, these are all technologies we already have, according to one of the experts behind the video, computer scientist Stuart Russell from the University of California, Berkeley – the only step remaining is for someone to miniaturise and combine them.
“I’ve worked in AI for more than 35 years,” says Russell in the video. “Its potential to benefit humanity is enormous, even in defence, but allowing machines to choose to kill humans will be devastating to our security and freedom.”
“Thousands of my fellow researchers agree. We have that opportunity to prevent the future you just saw, but the window to act is closing fast.”
Experts including Elon Musk and Stephen Hawking have also warned about the rapid development of AI, and its use in weapons.
Computer systems are now able to pilot drones on their own, and recognise faces faster than human beings can. If they were also allowed to pull the trigger on a weapon without any human approval, scientists say, wars would rage at a speed and with a loss of life far greater than anything we’ve ever seen before.
Let’s hope that this Slaughterbots video, and other initiatives to curb the development of AI-powered weaponry, prove enough to put a stop to this particular area of research.
Noel Sharkey, AI professor at Sheffield University in the UK, and chair of the International Committee on Robot Arms Control, has been warning about the dangers of autonomous weapons for a decade.
“It will only take one major war to unleash these new weapons with tragic humanitarian consequences and destabilisation of global security,” he told The Guardian.
You can find out more about efforts to support a ban at the Campaign to Stop Killer Robots website.