In February, the Pentagon unveiled an expansive new artificial intelligence strategy that promised the technology would be used to enhance everything the department does, from killing enemies to treating injured soldiers. It said an Obama-era advisory board packed with representatives of the tech industry would help craft guidelines to ensure the technology’s power was used ethically.
In the heart of Silicon Valley on Thursday, that board asked the public for advice. It got an earful—from tech executives, advocacy groups, AI researchers, and veterans, among others. Many were wary of the Pentagon’s AI ambitions and urged the board to lay down rules that would subject the department’s AI projects to close controls and scrutiny.
“You have the potential of large benefits, but downsides as well,” said Stanford grad student Chris Cundy, one of nearly 20 people who spoke at the “public listening session” held at Stanford by the Defense Innovation Board, an advisory group established by the Obama administration to foster ties between the Pentagon and the tech industry. Members include executives from Google, Facebook, and Microsoft; the board is chaired by former Google executive chairman Eric Schmidt.
Although the board is examining the ethics of AI at the Pentagon’s request, the department is under no obligation to heed any recommendations. “They could completely reject it or accept it in part,” said Milo Medin, vice president of wireless services at Google and a member of the Defense Innovation Board. Thursday’s listening session took place amid tension between the Pentagon and Silicon Valley.
Last year, thousands of Google employees protested the company’s work on a Pentagon AI program called Project Maven, in which the company’s expertise in machine learning was used to help detect objects in surveillance imagery from drones. Google said it would let the contract expire and not seek to renew it. The company also issued guidelines for its use of AI that prohibit projects involving weapons, although Google says it will still work with the military.
Before the public got its say Thursday, Chuck Allen, a top Pentagon lawyer, presented Maven as an asset, saying AI that makes commanders more effective can also protect human rights. “Military advantages also bring humanitarian benefits in many cases, including reducing the risk of harm to civilians,” he said.
Many of those who spoke after the floor opened to the public were more concerned that AI may undermine human rights.
Herb Lin, a Stanford professor, urged the Pentagon to embrace AI systems cautiously because humans tend to place too much trust in computers’ judgments. In fact, he said, AI systems on the battlefield can be expected to fail in unexpected ways, because today’s AI technology is inflexible and only works under narrow and stable conditions.
Mira Lane, director of ethics and society at Microsoft, echoed that warning. She also raised concerns that the US could feel pressure to change its ethical boundaries if countries less respectful of human rights forge ahead with AI systems that decide for themselves when to kill. “If our adversaries build autonomous weapons, then we’ll have to react,” she said.
Marta Kosmyna, Silicon Valley lead for the Campaign to Stop Killer Robots, voiced similar worries. The group wants a global ban on fully autonomous weapons, an idea that has received support from thousands of AI experts, including employees of Alphabet and Facebook.
The Department of Defense has been bound since 2012 by an internal policy requiring a “human in the loop” whenever lethal force is used. But at UN discussions the US has argued against proposals for similar international-level rules, saying existing agreements like the 1949 Geneva Conventions are a sufficient check on new ways to kill people.
“We need to take into account countries that do not follow similar rules,” Kosmyna said, urging the US to use its influence to steer the world toward new, AI-specific restrictions. Such restrictions could prevent the US from reversing its position just because an adversary did.
Veterans who spoke Thursday were more supportive of the Pentagon’s all-in AI strategy. Bow Rodgers, who was awarded a Bronze Star in Vietnam and now invests in veteran-founded startups, urged the Pentagon to prioritize AI projects that could reduce friendly-fire incidents. “That’s got to be right up on top,” he said.
Peter Dixon, who served as a Marine officer in Iraq and Afghanistan, spoke of situations in which frantic calls for air cover from local troops taking heavy fire were denied because US commanders feared civilians would be harmed. AI-enhanced surveillance tools could help, he said. “It’s important to keep in mind the benefits this has on the battlefield, as opposed to just the risk of this going sideways somehow,” Dixon said.
The Defense Innovation Board expects to vote this fall on a document that combines principles that could guide the use of AI with general advice to the Pentagon. It will also concern itself with more pedestrian uses of AI under consideration at the department, such as in healthcare, logistics, recruiting, and predicting maintenance issues on aircraft.
“Everyone gets focused on the pointy end of the stick, but there are so many other applications that we have to think about,” said Heather Roff, a research analyst at Johns Hopkins University’s Applied Physics Laboratory who is helping the board with the project.
The board is also taking private feedback from tech executives, academics, and activists. Friday it had scheduled a private meeting that included Stanford professors, Google employees, venture capitalists, and the International Committee of the Red Cross.
Lucy Suchman, a professor at Lancaster University in the UK, was looking forward to that meeting but is pessimistic about the long-term outcomes of the Pentagon’s ethics project. She expects any document that results to be more a PR exercise than meaningful control of a powerful new technology—an accusation she also levels at Google’s AI guidelines. “It’s ethics-washing,” she said.