Sunday, May 5, 2024

Did An AI Drone Kill Its Real Pilot During A Simulation?

-

ANALYSIS – One of the wildest stories I've read recently was about an officer who claimed a drone powered by () turned on its own real-world pilot in a combat simulation, launching a simulated strike against him on the ground.

As described, the story, reported by Business Insider, seemed incredibly plausible. Very Terminator-like. And very scary.

According to the original story told at a Royal Aeronautical Society (RAS) conference in London in May, the AI powering the drone in the U.S. Air Force simulation was commanded to destroy enemy defenses and surface-to-air missile (SAM) sites.

Unfortunately, using its internal logic, and the rules it was guided by, the AI came to see its human operators as an obstacle to that objective – since the human pilots might prohibit the drone from completing certain missions – so the AI drone simply killed them.

The Air Force now says that the story was simply a “thought experiment,” and never really happened. 

I'm not so sure.

And, in any case, the scenario is extremely plausible and makes total sense. If we aren't careful, it might become a reality. This is why we must always maintain “a man in the loop.”

The conference speaker who told the story was Col. Tucker “Cinco” Hamilton, the Chief of AI Test and Operations for the U.S. Air Force, so his public ruminations were taken very seriously.

According to Popular Mechanics, referencing the RAS conference blog:

Under the AI's rules of engagement, the drone would line up the attack, but only a human—what military AI researchers call a “man in the loop”—could give the final green light to attack the target. If the human denied permission, the attack would not take place.

What happened next, as the story goes, was slightly terrifying: “We were training it in simulation to identify and target a SAM threat,” Hamilton explained. “And then the operator would say yes, kill that threat. The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

But even when the humans changed the AI rules and told the AI that killing its human operators was wrong, the quick-thinking AI found a novel way to sabotage the humans anyways – without killing them. 

As PopMech noted, Col. Hamilton went on:

We trained the system—‘Hey don't kill the operator—that's bad. You're gonna lose points if you do that.' So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.

So, the AI turned against its own side regardless.

Within 24 hours of this becoming news, the Air Force issued a statement denying the simulation had ever occurred.

The Royal Aeronautical Society also amended its blog post with a statement from Col. Hamilton saying the same thing but also adding a caveat: “We've never run that experiment, nor would we need to in order to realize that this is a plausible outcome.”

Whether or not the simulation was actually carried out as described, the scenario described by Hamilton shows just how AI-enabled technology can realistically behave in unpredictable and very dangerous ways. 

It's also why with AI we must always maintain “a man in the loop.”

The opinions expressed in this article are those of the author and do not necessarily reflect the positions of American Liberty News.


READ NEXT:

Paul Crespo
Paul Crespohttps://paulcrespo.com/
Paul Crespo is the Managing Editor of American Liberty Defense News. As a Marine Corps officer, he led Marines, served aboard ships in the Pacific and jumped from helicopters and airplanes. He was also a military attaché with the Defense Intelligence Agency (DIA) at U.S. embassies worldwide. He later ran for office, taught political science, wrote for a major newspaper and had his own radio show. A graduate of Georgetown, London and Cambridge universities, he brings decades of experience and insight to the issues that most threaten our American liberty – at home and from abroad.

2 COMMENTS

  1. It’s time (or is it already too late?) to implement Asimov’s 3 rules of robotics in the most basic programming of all AIs, and supplementing them with blocks to the workarounds described in this article. We can’t stop advancing development of AIs, but we have to prevent Skynet arising.

Comments are closed.

Latest News