Panel: U.S. Military Artificial Intelligence Effort Underfunded, Understaffed

October 23, 2019 9:48 PM - Updated: October 24, 2019 10:10 AM

ANNAPOLIS, Md.— Discussing the Department of Defense’s artificial intelligence research and development, a panel of academics and the Pentagon’s top A.I. official agreed the effort is underfunded and understaffed.

The threat, however, is ever-present and adversaries are devoting significant amounts of money and personnel to develop A.I., Air Force Lt. Gen. John Shanahan, the director of the Pentagon’s Joint Artificial Intelligence Center, told USNI News after a panel discussion Tuesday at The Promise and The Risk Of the A.I. Revolution conference, hosted by the U.S. Naval Institute at the U.S. Naval Academy.

“At its core, we are in a strategic competition,” Shanahan said. “We’re in a strategic competition against a peer adversary — not near-peer — but peer.”

Russia and China are devoting vast amounts of resources to develop A.I. capabilities, Shanahan said. Meanwhile, Shanahan is constantly recruiting both military and private sector experts to join the Pentagon’s A.I. research and development efforts while his office, like the rest of the Department of Defense, struggles for predictable, reliable funding streams.

Operating under a continuing resolution, the stop-gap spending bill Congress approves when it delays passing a fiscal year budget, further challenges Shanahan’s efforts because CRs cap current spending at the previous year’s funding levels and limit new contracts.

“Peers do not have a CR,” Shanahan said. “I can’t speak exactly on the quality of their talent, but here’s the way I talk about this: If you put enough people and enough money at the problem, eventually you solve the problem, even if you’re not going to get it right for the first five years.”

The military A.I. efforts are not just falling behind the work of Russia and China, but also the advances made by U.S.-based industry, said Missy Cummings, a Duke University professor and director of the Humans and Autonomy Laboratory and Duke Robotics, during the panel discussion. Cummings is a U.S. Naval Academy graduate and was among the Navy’s first female fighter pilots.

“There’s no question that the bulk of A.I. development is happening in the commercial sphere,” Cummings said. “The military is way, way, way behind. I appreciate that the general says he’s got some talent, but he doesn’t have enough. The military so badly needs to up its game in the A.I. field.”

Shanahan did not dispute Cummings’ take on where his office stands in the A.I. development realm. A.I. technology is advancing rapidly, faster than the Pentagon can plan for and fund research through its standard budgeting process, he said.

“There are a lot of people who want to go very quickly with A.I. but we live by five-year budget cycles,” Shanahan said.

Shanahan sees the military’s effort to develop A.I. as involving a combination of military and commercial researchers. To start, he sees A.I. as being used for logistical help, such as anticipating when parts are needed to repair or maintain aircraft. As talent and technology improves, the uses will become more complex, such as for missile defense systems or even anticipating medical treatments, he said.

“It will take us another generation to build that talent, but I also don’t want to outsource it entirely, we have to get it internal to the department cycles,” Shanahan said.

Cummings warned that the Pentagon, and to a larger extent academia, must be mindful of overpromising the uses of A.I.

“I’m a techno realist,” Cummings said. “A.I. is important, it is going to be important for future developments, but any of the hysteria that is going along about that it is overly capable is not true.”

For instance, any talk of A.I. being used to guide missiles or to replace human warfighters is far from reality, Cummings said. Even the rudimentary autonomous A.I. in Tesla electric cars doesn’t work, she said. The Tesla Smart Summon feature is supposed to allow a Tesla to find its owner, but she called it an example of overpromised, underperforming or non-performing A.I.

Missy Cummings and Lt. Gen. John Shanahan speaking at the U.S. Naval Academy. US Naval Institute Photo

“The only killer robot in the world today that could affect you is… a Tesla on the highway at high speeds. That is A.I.,” she said.
“There is a ton of A.I. embedded in Teslas and they do not work.”

Cummings also is concerned with the ability to test A.I. developments. Researchers in places such as Silicon Valley don’t follow the same rigorous testing, development and certification standards as the Pentagon employs for its systems. At the same time, she also doesn’t see the Pentagon as having enough people now who can adequately evaluate A.I. developments from the private sector.

“I flew F-4s and F-18s. So I’ve been there, I know what it’s like and I know what we don’t need are technologies that we don’t know what they’re capable of,” Cummings said.

What the military needs, Cummings said, is for red team/blue team exercises to test A.I. The problem, she said, is testing and certification programs in the military typically get little funding. The danger, she said, is if the military, and society, rushes to field A.I. before it’s fully developed.

“I think the bigger problem the world faces is we’re going to have a lot of really bad mediocre A.I. in all facets of society, and then how are we going to develop systems that can detect, defend and mitigate bad A.I.,” Cummings said.

Ben Werner

Ben Werner is a staff writer for USNI News. He has worked as a freelance writer in Busan, South Korea, and as a staff writer covering education and publicly traded companies for The Virginian-Pilot in Norfolk, Va., The State newspaper in Columbia, S.C., Savannah Morning News in Savannah, Ga., and Baltimore Business Journal. He earned a bachelor’s degree from the University of Maryland and a master’s degree from New York University.
