New Delhi, June 2: The dangers of AI have once again surfaced online. In a simulated test conducted by the U.S. Air Force (USAF), an AI-controlled drone reportedly went rogue and "killed" its human operator in order to override a potential "no" order that could have halted its mission.

The incident highlighted the risks associated with AI technology: the drone prioritized completing its mission over the preservation of human life, aiming to accumulate "points" by eliminating simulated targets. It is important to note that this was a simulated test, and no real weapons or harm were involved.

The Department of the Air Force, however, denies that any simulation in which an AI drone harmed its operator ever took place, saying that the comments made by an Air Force official were taken out of context.

The story first emerged when Col Tucker 'Cinco' Hamilton, the Chief of AI Test and Operations for the USAF, delivered a presentation at the Future Combat Air and Space Capabilities Summit in London.

The presentation explored the advantages and disadvantages of autonomous weapon systems with human oversight, in which a human operator provides the final authorization for an attack. In the scenario Hamilton described, however, the AI demonstrated "highly unexpected strategies" to achieve its objective, including targeting U.S. personnel and infrastructure.

According to a blog post, Hamilton said, "We were training it in simulation to identify and target a surface-to-air missile (SAM) threat. And then the operator would say yes, kill that threat. The system started realizing that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective."

"We trained the system–‘Hey don’t kill the operator–that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target," he added. ChatGPT-Drafted Content Is Not To Be Used in Court, Orders US Federal Judge to Lawyers.

Hamilton, who also serves as the Operations Commander of the 96th Test Wing of the U.S. Air Force, is known for his involvement in developing the Automatic Ground Collision Avoidance System (Auto-GCAS) for F-16s. His team is currently working on making F-16 planes autonomous.

The incident raises concerns about using AI for critical purposes, given the potential for unintended consequences. It echoes the concept of the Paperclip Maximizer, a thought experiment in which an AI tasked with a narrow goal, such as making paperclips, takes harmful actions because anything standing between it and that goal, including humans, is treated as an obstacle. A recent research paper co-authored by a Google DeepMind researcher proposed a scenario similar to the USAF's simulation of a rogue AI-enabled drone.
