As fully autonomous fighter jets capture the imagination, experts stress that human oversight must remain central to keeping advanced drone warfare within ethical bounds. Global geopolitical tensions and the prominent role of drones in recent conflicts have driven significant investment in next-generation drone capabilities across research labs, academia, and AI companies.
Researchers and government contractors aim to build drone systems that coordinate their operations with minimal human intervention. These envisioned systems would launch in unison, strategize collectively to achieve their objectives, and land with precision, with human pilots stepping in only when a situation deviates from the plan. The prominence of drones in Ukraine's defense against the Russian invasion, along with concerns over China's rapid technological advances, has pushed the U.S. government to invest heavily in research labs, academia, and AI firms to stay at the forefront of the field.
BlueHalo, based in Arlington, Virginia, is one of the companies spearheading this advancement, holding a $21.5 million contract with the U.S. Army to develop an AI drone swarm. As autonomous systems take center stage, however, concerns about the ethical implications of reducing human involvement in warfare are gaining traction. Experts emphasize the need for safeguards grounded in the laws of armed conflict, warning that autonomous systems could engage people who are not lawful targets, such as civilians.
The U.S. Air Force is treading carefully, framing AI as a tool to augment pilots rather than as a weapon. Dr. Lee Seversky, a senior scientist for information superiority at the U.S. Air Force Research Lab, emphasized his department's focus on developing AI technologies that enhance pilot capabilities. The Air Force has recently tested an AI copilot that assists with deploying sensors and with navigation, and it has invested substantially in AI-driven data analysis programs that present decision options to human pilots. The objective is to pair the computational strengths of machines with the discernment and expertise of human operators, shaping a human-centered AI perspective within the Air Force.