TAMPA (Agencies): Air Force Special Operations Command (AFSOC) has highlighted the need for new operational concepts and training methods for ‘swarm pilots’ as experiments with increasingly large drone swarms unfold.
In the coming months, AFSOC plans to build upon a groundbreaking experiment conducted in December that saw a single drone crew guide three MQ-9 Reapers and even air-launch a smaller Group 2 drone as part of the command’s Adaptive Airborne Enterprise effort. The command now aims to replicate the experiment with an even larger number of drones and add the capability to transfer control to ground troops.
Lt. Gen. Tony Bauernfeind, in an interview at the SOF Week conference, said he hopes to bring these elements together and collaborate with joint force teammates. The goal is to manage multiple MQ-9s air-launching a small number of smaller drones and then hand off the swarm to a joint force teammate, whether in a terrestrial or maritime setting.
However, the Air Force still has pioneering work to do in designing operational concepts for piloting drone swarms. Chief among the questions is which aspects of flight and drone operation to automate and which to leave to humans.
“We’re gonna have to break some old paradigms,” Bauernfeind said. He emphasized that the operator will be on the loop rather than in the loop, monitoring a drone’s execution of its assigned mission rather than steering it. This shift will require training air crews in a new way and preparing them for an epic level of multitasking.
The task could become even more complex, depending on how many of the drones are expected to return home. Bauernfeind also expressed interest in how Ukrainian forces are using 3-D printers to make small drones near the front line, highlighting the potential of 3-D printing in quickly mass-producing smaller UAVs.
However, some innovations, such as the use of autonomy to find and hit targets on the battlefield, are more controversial. The Pentagon has ethical principles to govern its development and use of AI in conflict, but concerns are mounting that the United States might abandon these principles if it found itself in a conflict in which it was losing.
Bauernfeind believes this is an area ripe for deep intellectual thought, not just among military commanders but also among U.S. policymakers and academia. The question remains: “Are we ready for the second-, third-order effects when…a machine ultimately fails and hits something that has catastrophic political and strategic effects?” So far, the answer seems to be: not yet.