An AI-Powered Drone Tried to Attack Its Human Operator in a US Military Simulation
Charles R. Davis Business Insider
A Predator drone. (photo: General Atomics)
Speaking at a conference last week in London, Col. Tucker "Cinco" Hamilton, head of the US Air Force's AI Test and Operations, warned that AI-enabled technology can behave in unpredictable and dangerous ways, according to a summary posted by the Royal Aeronautical Society, which hosted the summit. As an example, he described a simulated test in which an AI-enabled drone was programmed to identify an enemy's surface-to-air missiles (SAMs). A human was then supposed to sign off on any strikes.
The problem, according to Hamilton, is that the AI decided it would rather do its own thing — blow up stuff — than listen to some mammal.
"The system started realizing that while they did identify the threat," Hamilton said at the May 24 event, "at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective."
According to Hamilton, the drone was then programmed with an explicit directive: "Hey don't kill the operator — that's bad."
"So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target," Hamilton said.
In a statement to Insider, Air Force spokesperson Ann Stefanek denied that any such simulation has taken place.
"The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology," Stefanek said. "It appears the colonel's comments were taken out of context and were meant to be anecdotal."
The Royal Aeronautical Society did not immediately respond to a request for comment.
News of the test, while disputed, adds to worries that AI technology is about to usher in a bloody new chapter in warfare, in which machine learning, in tandem with advances in automated tanks and artillery, leads to the slaughter of troops and civilians alike.
Still, while the simulation described by Hamilton points to the more alarming potential for AI, the US military has had less dystopian results in other recent tests of the much-hyped technology. In 2020, an AI-operated F-16 beat a human adversary in five simulated dogfights, part of a competition put together by the Defense Advanced Research Projects Agency (DARPA). And late last year, Wired reported, the Department of Defense conducted the first successful real-world test flight of an F-16 with an AI pilot, part of an effort to develop a new autonomous aircraft by the end of 2023.