
Inside The Global Race To Build Killer Robot Armies


The temptation to open Pandora’s Box is irresistible. In early March, the U.S. National Security Commission on Artificial Intelligence completed its two-year inquiry, publishing its findings in a dense 750-page report. Its members unanimously concluded that the United States has a “moral imperative” to pursue the use of lethal autonomous weapons, a.k.a. “killer robots.” Otherwise, we risk bringing a rusty knife to a superhuman gunfight.

Citing the threat of China or Russia leading the global artificial intelligence (AI) arms race, the commission’s chairman, former Google CEO Eric Schmidt, urged President Biden to reject a proposed international ban on AI-controlled weapons. Schmidt rightly suspects our major rivals won’t abide by such a treaty, warning U.S. leaders, “This is the tough reality we must face.”

If other superpowers are going to unleash demonic drone swarms on the world, the logic goes, the United States should be the first to open the gates of Hell.

America first deployed autonomous aerial vehicles in the aftermath of 9/11. The libertarian blog AntiWar.com has covered this transition in critical detail. Laurie Calhoun writes, “[O]n November 3, 2002, the Drone Age effectively began with the CIA’s extrajudicial execution of six men driving down a road in Yemen using a Hellfire missile launched from a Predator drone. The act went virtually unquestioned.” Since then, remote-controlled strikes have become a standard tactic to “fight terror” and save American lives.

Nearly two decades later, a new era of autonomous weapons is rapidly approaching. A wide array of AI-assisted weapons is already in use, but they still require a human operator to confirm the target and order the kill. That will likely change in the near future.

What Damage Can Drones Do?

The attack drones currently on the market are plenty dangerous as is. A good example is the KARGU loitering munitions system, currently deployed by Turkish forces. This lightweight quadcopter “can be effectively used against static or moving targets through its … real-time image processing capabilities and machine learning algorithms.”

KARGU’s mode of attack is full-on kamikaze. It hovers high in the air as the operator searches for victims. When one is located, the drone dive-bombs its target and explodes. If the concussion doesn’t kill them, the shrapnel will. Just imagine what a thousand could do.

A single quadcopter is only one cog in the AI war machine. The ultimate “death from above” technology will be the killer drone swarm. Even as a war-averse civilian, it’s hard not to feel deep admiration for the swarm’s ingenious design.

Forbes reporter David Hambling describes the organizing principle: “True swarm behavior arises from a simple set of rules which each of the participating members follows, with no central controller. … [Computer simulations have] mimicked the collective movements seen in schools of fish and flocks of birds or swarms of insects with just three rules.”

Each drone in the swarm will separate at a minimum distance, align toward the direction of near neighbors, and cohere to maintain harmonious movement. This behavior allows attack drones to spread out over large areas and execute “omnidirectional attacks,” descending on the enemy from all angles.
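The three rules Hambling describes can be captured in a toy simulation. Below is an illustrative boids-style sketch in Python; every function name, weight, and parameter here is an assumption chosen for clarity, not taken from any actual drone software.

```python
import math

def neighbors(i, positions, radius):
    """Indices of agents within `radius` of agent i (excluding i)."""
    xi, yi = positions[i]
    return [j for j, (x, y) in enumerate(positions)
            if j != i and math.hypot(x - xi, y - yi) < radius]

def step(positions, velocities, radius=5.0, min_dist=1.0,
         sep_w=0.05, align_w=0.05, coh_w=0.01, dt=1.0):
    """Advance the flock one tick using the three classic rules.
    Weights and distances are illustrative assumptions."""
    new_pos, new_vel = [], []
    for i, ((x, y), (vx, vy)) in enumerate(zip(positions, velocities)):
        nbrs = neighbors(i, positions, radius)
        if nbrs:
            # Rule 1: separation -- steer away from too-close neighbors.
            sx = sy = 0.0
            for j in nbrs:
                dx, dy = x - positions[j][0], y - positions[j][1]
                if math.hypot(dx, dy) < min_dist:
                    sx += dx
                    sy += dy
            # Rule 2: alignment -- match the average neighbor heading.
            ax = sum(velocities[j][0] for j in nbrs) / len(nbrs) - vx
            ay = sum(velocities[j][1] for j in nbrs) / len(nbrs) - vy
            # Rule 3: cohesion -- drift toward the neighbors' center.
            cx = sum(positions[j][0] for j in nbrs) / len(nbrs) - x
            cy = sum(positions[j][1] for j in nbrs) / len(nbrs) - y
            vx += sep_w * sx + align_w * ax + coh_w * cx
            vy += sep_w * sy + align_w * ay + coh_w * cy
        new_vel.append((vx, vy))
        new_pos.append((x + vx * dt, y + vy * dt))
    return new_pos, new_vel
```

Note that no agent consults a central controller: each drone reacts only to its local neighborhood, which is exactly why swarms scale so cheaply and are so hard to decapitate.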

Presently, military swarms are limited to a few hundred drones, but as the technology advances, these will increase into the thousands. Given full autonomy, a massive swarm could move like a storm cloud over a populace, with onboard AI rapidly hitting targets based on facial features, racial profiles, uniforms, or even surveilled cellphone data. Conceivably, they could home in on anything with two legs, leaving valuable infrastructure intact.

The capacity for unrestrained carnage is horrific. In a 2018 study conducted for the U.S. Air Force, drone specialist Zachary Kallenborn correctly argued that lethal drone swarms should be declared weapons of mass destruction. Many others have joined that refrain.

Unsurprisingly, these protests have not deterred the highest earthly powers. According to PAX, a predominantly Christian organization in the Netherlands, the countries on the cutting edge of military AI are China, France, Russia, the UK, and the USA, with Israel and South Korea just behind. It’s hard to imagine any of them hitting the brakes without coercion.

In 2019, PAX published a list of the global corporations most likely to develop lethal autonomous weapon systems. Among the U.S. companies ranked as “high risk” are Amazon, Microsoft, and Oracle, as well as Intel, Palantir, Neurala, Corenova, and Heron Systems. It’s worth noting that the top members of the National Security Commission on AI—all of whom support using these murder machines—include chiefs from Amazon, Microsoft, and Oracle.

It’s as if the drive to create superior weapons is part of human nature.

The Future of Life Looks Bleak

Both leftists and libertarians denounce the idea of fully autonomous weapons. The Future of Life Institute, founded by AI visionary Max Tegmark, reiterated its long-standing opposition to giving intelligent machines the choice to kill human beings, claiming that “it is morally inappropriate for algorithms to make life and death decisions when they’re incapable of understanding the value of human life.”

There’s a catch. Despite the institute’s admirable humanitarian motives, there’s an ominous subtext to its statement. If AI developers succeed in their ambitions, artificial intelligence will not only come to “understand the value of human life” — it will one day surpass us in knowledge and wisdom.

In his well-reasoned book, “Life 3.0: Being Human in the Age of Artificial Intelligence,” Tegmark describes the progression of life on Earth from biological organisms to cultural entities. As this process advances, self-aware digital lifeforms will come to fulfill humanity’s dreams of god-like powers.

While the Massachusetts Institute of Technology scientist makes no hard predictions, he conveys a subtle sense of inevitability, as if self-driving cars and the Singularity are just natural tendencies in human evolution. Even so, he somehow holds out hope that if we play our cards right, this tech revolution won’t lead to widespread human suffering.

To raise awareness of this imminent threat, the Future of Life Institute produced the alarming, if poorly acted, film Slaughterbots. The finale shows dissident college students having their brains blown out by bird-sized quadcopters. The filmmakers’ intention was to shock the public out of its complacency.

In a 2015 open letter, Tegmark wrote, “Autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.” Drafted with partners at Future of Life, the document advocates an international ban on hands-free killer robots. To date, they’ve garnered signatures from thousands of AI and robotics developers, plus hundreds of prominent organizations.

“If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable. … Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they’ll become ubiquitous and cheap. …

“Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations, and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity.”

Five years later, it looks like that ship is about to sail — with or without the consent of its human passengers. Just as Chinese fireworks gave way to cannon artillery, synchronized light shows are giving way to deadly drone swarms.

In their own ways, both Schmidt and Tegmark are probably correct. Like it or not, the Age of AI is hurtling down the pike at warp speed. That doesn’t mean we shouldn’t make every effort to contain or resist it.

It’s also easy to ignore such problems when military aggression is turned outward, as with the “War on Terror.” But our government is bringing the war back home.

Early in the pandemic, police in 22 states deployed talking drones — supplied by China — to surveil pedestrians and order them to social distance. Presently, police departments across America are acquiring top-of-the-line drones to be used as they please.

Since the inauguration, Washington D.C. has been fortified by soldiers and razor wire against half the country. Joe Biden is focusing intelligence resources on American citizens. The Department of Homeland Security warns law enforcement and the public to look out for “domestic violent extremists” motivated by “anger over COVID-19 restrictions, the 2020 election results, and police use of force” as well as “opposition to immigration.” Reckless MSNBC hosts are comparing rowdy MAGA folks to Islamic terrorists.

If there was ever a good time to reach across the aisle and steer “progress” away from disaster, it’s now.