
The looming challenge of AI-empowered robotics

There might seem to be little in common between the Pentagon and the San Francisco chapter of the Society for the Prevention of Cruelty to Animals, but that was before the advent of autonomous robots.

On Friday, the Washington Post reported that a San Francisco SPCA shelter has retired "K-9," an R2-D2-like autonomous security robot leased to patrol around the shelter, record security footage, and alert shelter employees and police to possible break-ins.

The neighbors weren't impressed, believing the robot was actually intended to discourage homeless encampments in the vicinity of the shelter. "ROBOT WAGES WAR ON THE HOMELESS," shouted Newsweek. Soon the hapless K-9 was under merciless assault with everything from poop to barbecue sauce.

According to the Post, the SPCA's K-9 wasn't the first of its breed to suffer such indignities. Last July in the nation's capital, an identical security robot ended up half submerged in a fountain.

Meanwhile, across the river at the Pentagon, rapidly advancing robotic technology is producing something of an existential debate about the impact of artificial intelligence on the battlefield.

As many readers will recall, developments in artificial intelligence (AI) and the potential emancipation of AI-empowered devices from human control have prompted dire warnings from celebrities as diverse as Tesla and SpaceX CEO Elon Musk and theoretical physicist Stephen Hawking.

One needn't buy into predictions that robots will enslave humanity, however, to recognize that fielding autonomous weapons on the battlefield introduces some unique challenges and presents risks to both their users and their potential victims.

That the technology is coming isn't in question. With drone aircraft already ubiquitous and self-driving cars soon to be on the road, autonomous tanks, ships, and aircraft can't be far behind. What's in question is the impact of such autonomous systems on the practice and ethics, such as they are, of war-fighting.

Every true science fiction fan is familiar with sci-fi guru Isaac Asimov's three laws of robotics, the first and most important of which reads, "A robot may not injure a human being or, through inaction, allow a human being to come to harm." The problem, of course, is that in war and even in law enforcement, what constitutes acceptable harm is conditional.

From the perspective of its victim, whether a munition comes from a robot or a manned weapon system may be irrelevant. From an ethical and legal perspective, however, the decision to launch that munition in the first place is anything but irrelevant, which is why military rules of engagement have become the focus of so much attention in recent years.

The expanded use of armed drones - more properly remotely piloted vehicles - already has aroused concern among soldiers and politicians alike about the "video game" syndrome - the worry that distancing even legitimate combatants from the consequences (and penalties) of their decisions threatens to make war too easy and attractive.

In practice, the well-documented appearance of post-traumatic stress disorder among armed drone operators suggests that those concerns may be overblown. Unlike human drone operators, however, AIs aren't vulnerable to battle fatigue. Stanley Kubrick's HAL notwithstanding, there's little likelihood that an autonomous tank, ship, or airplane might suffer a PTSD-induced nervous breakdown.

Moreover, the technology potentially permitting such autonomous operation is rapidly miniaturizing. Last month, a terrifying new video entitled "Slaughterbots," produced at UC Berkeley, was posted to YouTube. Funded by a group including Musk and Hawking, the video visualizes a near future in which miniature drones programmed with facial recognition software seek out and kill specifically targeted individuals.

According to The Economist, which reported at length on the Berkeley video as well as on more near-term Pentagon-sponsored research on AI-controlled robotic "swarming," U.S. military leaders are far from comfortable with that prospect and are committed to ensuring that "the decision to pull a trigger will always be taken by a person rather than a machine."

The problem is that not all potential beneficiaries of robotic autonomy are likely to be as scrupulous. Indeed, given the resources to acquire them, such weapons would likely be even more attractive to terrorists and other non-state belligerents than to conventional armed forces.

In fact, the weaponization of small commercial drones already is becoming a serious concern for law enforcement and the military alike. It doesn't require a great deal of imagination to recognize how much more dangerous such weapons would be if empowered by Slaughterbot-like AI.

During the past year, the nation's security concerns have focused largely on nuclear weapons and the wholesale killing that they threaten.

But the more serious threat may be retail killing by the same technology that produced K-9. And its successors won't be defeated with barbecue sauce.
