
From Land Mines to Drones, Tech Has Driven Fears About Autonomous Arms

Swarms of killer drones are likely to soon be a standard feature of battlefields around the world. That has ignited debate over how or whether to regulate their use and spurred concerns about the prospect of eventually turning life-or-death decisions over to artificial intelligence programs.

Here is an overview of how the technology has evolved, what types of weapons are being developed and how the debate is unfolding.

How new are these weapons?

Eventually, artificial intelligence should allow weapons systems to make their own decisions about selecting certain kinds of targets and striking them. Recent developments in A.I. tech have intensified the discussion around such systems, known as lethal autonomous weapons.

But in a way, autonomous weapons are hardly new.

Land mines, which are designed to discharge automatically when a person or object passes on top of them, were used as early as the 1800s during the Civil War in the United States, apparently first invented by a Confederate general named Gabriel J. Rains, who called them a “subterra shell.”

Soldiers handling land mines before a controlled detonation in Sa Kaeo, Thailand. Credit: Lillian Suwanrumpha/Agence France-Presse — Getty Images

While they were first used long before anyone could even conceive of artificial intelligence, they have a relevance to the debate today because once put in place they operate with no human intervention — and without discriminating between intended targets and unintended victims.

The Pentagon began expanding automated weapons decades ago.

Starting in the late 1970s, the United States began to expand on this concept, with a weapon known as the Captor Anti-Submarine Mine. The mine could be dropped from an airplane or a ship and settle on the bottom of the ocean, sitting there until it automatically detonated when sensors on the device detected an enemy target.

Starting in the 1980s, dozens of Navy ships began to rely on the AEGIS weapon system, which uses a high-powered radar system to search for and track any incoming enemy missiles. It can be set on automatic mode so that it will fire off defensive missiles before a human intervenes.

Homing munitions were a next step.

The next step in the progression toward more sophisticated autonomous weapons came in the form of “fire and forget” homing munitions like the AIM-120 Advanced Medium-Range Air-to-Air Missile, which has a radar seeker that refines the trajectory of a fired missile as it tries to destroy enemy planes.

Homing munitions generally cannot be recalled after they are fired, and act like “an attack dog sent by police to run down a suspect,” wrote Paul Scharre, a former senior Pentagon official and author of the book “Army of None.” They have a certain degree of autonomy in refining their path, but Mr. Scharre defined it as “limited autonomy.” Harpoon anti-ship missiles operate in a similar fashion, with limited autonomy.

American troops handling AIM-120 missiles aboard the aircraft carrier U.S.S. Kitty Hawk in the Persian Gulf in 2003. Credit: Steve Helber/Associated Press

‘Loitering munitions’ can be highly automated.

The war in Ukraine has highlighted use of a form of automated weaponry, known as loitering munitions. These devices date to at least 1989, when an Israeli military contractor introduced what is known as Harpy, a drone that can stay in the air for about two hours, searching over hundreds of miles for enemy radar systems and then attacking them.

More recently, American military contractors like California-based AeroVironment have sold similar loitering munitions that carry an explosive warhead. The Switchblade 600, as this unit is called, flies overhead until it finds a tank or other target and then fires an anti-armor warhead.

Human sign-off is still requested before the weapon strikes the target. But it would be relatively simple to take the human “out of the loop,” making the device entirely autonomous.

“The technology exists today that you could say to the device, ‘Go find me a Russian T-72 tank, don’t talk to me, I’m going to launch you, go find that,’” said Wahid Nawabi, chairman of AeroVironment. “And if it has 80 percent-plus confidence that’s the one, it takes it out. The entire end-to-end mission could be all autonomous except firing it to begin with.”

AeroVironment’s Switchblade 600 on display at a Navy exposition in National Harbor, Md. Credit: Jason Andrew for The New York Times

Unleashing drone swarms could be a bigger shift.

There is no question about where this is all headed next.

The Pentagon is now working to build swarms of drones, according to a notice it published earlier this year.

The end result is expected to be a network of hundreds or even thousands of A.I.-enhanced, autonomous drones carrying surveillance equipment or weapons. The drones would most likely be positioned near China so they could be rapidly deployed if conflict broke out, and would be used to knock out, or at least degrade, the extensive network of anti-ship and anti-aircraft missile systems China has built along its coasts and artificial islands in the South China Sea.

That is just one of a blitz of efforts now underway at the Pentagon aiming to deploy thousands of inexpensive, autonomous and at times lethal drones in the next year or two that can continue to operate even when GPS signals and communications are jammed.

Some military contractors, including executives at Palantir Technologies, a major artificial intelligence military contractor, have argued that entirely autonomous A.I.-controlled lethal attacks could still be years away: the most advanced algorithms are not yet reliable enough to be trusted with life-or-death decisions, and may not be for some time.

A.I., Palantir argues, will instead allow military officials to make faster and more accurate targeting decisions by quickly analyzing incoming waves of data, Courtney Bowman, a Palantir executive, told British legislators during a hearing this year.

But there is widespread concern within the United Nations about the risks of the new systems. And while some weapons have long had a degree of autonomy built into them, the new generation is fundamentally different.

“When this conversation started about a decade ago, it really was kind of science fiction,” Mr. Scharre said. “And now it’s not at all. The technology is very, very real.”
