- The Defense Advanced Research Projects Agency (DARPA) just successfully completed an actual in-air dogfight test pitting artificial intelligence (AI) against a human pilot in fighter jets.
- An X-62A autonomously operated by AI went against an F-16 piloted by a human.
The DARPA AI vs. Human ACE Dogfight
With the rapid advancements in artificial intelligence over the past couple of years, the writing has long been on the wall about its potential weaponization. Last Thursday, the US Defense Advanced Research Projects Agency announced the successful use of an AI-controlled fighter jet against another piloted by a human.
According to DARPA, it has been testing AI flight agents since December 2022 as part of its Air Combat Evolution (ACE) program. The program’s primary aim is to develop a system capable of piloting a fighter jet autonomously while remaining bound to the Air Force’s safety and ethical standards.
The recent test involved two aircraft in a one-on-one showdown: a specially modified F-16 test jet designated the X-62A VISTA (Variable In-flight Simulator Test Aircraft), controlled by AI, and a standard F-16 flown by a human pilot.
According to DARPA, the dogfight was conducted with the jets’ safety switches off throughout the “high-aspect nose-to-nose engagements,” with the aircraft closing to within 2,000 feet of each other at speeds of up to 1,200 miles per hour. However, DARPA did not disclose which side emerged victorious in the exercise.
“Dogfighting was the problem to solve so we could start testing autonomous artificial intelligence systems in the air,” explained US Air Force Test Pilot School’s chief Bill Gray. “Every lesson we’re learning applies to every task you could give to an autonomous system.”
DARPA revealed that it has already conducted 21 test flights under the program. The project will continue this year amid international concerns about the militarization of AI.
AI in Modern Warfare
Whether DARPA’s autonomous fighter jets will see actual deployment remains shrouded in secrecy. The full extent of AI’s use in modern warfare is likewise unknown at this point, though military personnel elsewhere have confirmed its use in ongoing conflicts.
One highly controversial use of AI amid global tensions is Israel’s “Lavender” program. Despite its assessed 10% error rate, Israel Defense Forces (IDF) officials have reportedly admitted to using it in Gaza, albeit in a limited capacity.
So far, there have been conflicting accounts from several news sources on the subject, but the Israeli armed forces have maintained that the AI’s combat application adhered to the restrictions imposed by IDF directives and international law.