WASHINGTON — Artificial intelligence and autonomy often conjure images of killer robots for the general public and even some in government.
But development underway by the Army Research Laboratory toward “mobile intelligent entities” is actually far more practical, said Chief Scientist Alexander Kott.
Kott pitched the term, along with the new ARL concept of fully autonomous maneuver, at the second annual Defense News Conference on Wednesday during a panel on artificial intelligence that kept circling back to underlying questions of great power competition.
“Fully autonomous maneuver is an ambitious, heretical terminology,” Kott said. “Fully autonomous is more than just mobility, it’s about decision-making.”
If there is a canon against which this autonomy seems heretical, it is likely the international community’s recent conference and negotiations over how, exactly, to permit or restrict lethal autonomous weapon systems. The most recent meeting of the Group of Governmental Experts on Lethal Autonomous Weapons Systems took place last week in Geneva, Switzerland, and concluded with a draft of recommendations on Aug. 31.
This diplomatic process, and the potential verdict of international law, could check or halt the development of AI-enabled weapons, especially ones where machines select and attack targets without human intervention. That’s the principal objection raised by humanitarian groups like the Campaign to Stop Killer Robots, as well as the nations that called for a preemptive ban on such autonomous weapons.
But to Kott, in the context of the Army’s own development, the challenge of autonomy is a practical one.
“All know about self-driving cars, all the angst, the issue of mobility … take all this concern and multiply it by orders of magnitude and now you have the issues of mobility on the battlefield,” Kott said. “Mobile, intelligent entities on the battlefield have to deal with a much more unstructured, much less orderly environment than what self-driving cars have to do. This is a dramatically different world of urban rubble and broken vehicles, and all kind of dangers, in which we are putting a lot of effort.”
Throughout the panel, where Kott was joined by Jon Rambeau, vice president and general manager of Lockheed Martin Rotary and Mission Systems; Rear Adm. David Hahn of the Office of Naval Research; and Maj. Gen. William Cooley of the Air Force Research Laboratory, the answers skirted around the edges of lethal autonomy — focusing instead on the other degrees of autonomy that will be developed in accordance with the Defense Department’s own policy guidelines mandating human-in-the-loop control.
“As industry then takes on developing some of these highly capable AI-enabled systems, our responsibility is to make sure that we develop within those boundary conditions,” Rambeau said. “This is likely to be a continuous process, as AI is a continuously updated medium and will likely need regular evaluation to make sure it doesn’t develop on its own in malicious or unexpected ways.”
“We don’t know if other participants in the worldwide great power competition will follow suit,” Kott said. “There’s strong suspicion that they may not, but for us, this is the policy.”
However AI develops, whether tightly controlled and regulated or allowed to process information and reach conclusions more organically, without constant checks, its presence on battlefields of the future is likely to change how nations fight wars, and perhaps even how people understand war itself.
“For the first time in human history, the war-fighting profession, the battlefield of the future will be populated not only by intelligent humans but also by intelligent and largely autonomous entities that are no longer humans,” Kott said. “How are we going to adjust to the introduction of new intelligent beings into battlefield life?”