Jane Pinelis participates in a news conference at the briefing room of the Pentagon Sept. 10, 2020. Alex Wong/Getty Images

The Pentagon Is Closing in on 'Ethical' AI Implementation

The Defense Department released guidance for using AI responsibly last year.

The Defense Department is still finalizing an implementation plan for its artificial intelligence ethical principles, Jane Pinelis, the chief of AI assurance for the Defense Department's Joint Artificial Intelligence Center, said at an event on Tuesday. 

"So we are the first military to adopt the ethical principles for AI. Since then, multiple other nations have done so, and where we stand now with [chief digital and artificial intelligence office] is we're trying to move into implementation," Pinelis said during a panel discussion at the Atlantic Council on May 17.  

"So we have the five ethical principles at this point. We have [gotten] direction from the deputy secretary to advance them across six different tenets. But now we're moving into … implementation." 

The Defense Department released guidance for using AI responsibly in May 2021 after announcing a set of ethical principles the year before.  

Pinelis said the implementation plan, which is awaiting the deputy defense secretary's signature, would be a "formal pathway forward" that tasks "various organizations in the Department of Defense with very specific actions as far as actually putting these principles into practice."

Many of those tasks, she continued, overlap with testing and evaluation, but many pieces require everyone across DOD to take some responsibility.

"Responsible AI is, kind of, everybody's job in the department," Pinelis said. "And so there are pieces of it that have to do with international allies. There are pieces of it that have to do with responsibly acquiring these systems and responsibly developing these systems, and kind of again, crafting all of those arguments and evidence that go into responsible AI."

Michael Horowitz, the Defense Department's director of emerging capabilities policy, said faster implementation of artificial intelligence and autonomous technology solutions requires budget support and centralized leadership – both of which the Pentagon is working to address by standing up its chief digital and artificial intelligence office. 

"If data is the fuel that makes AI go essentially – what is an algorithm without the data that you would use to train it in one way or another – then bringing those together under the [chief digital and artificial intelligence office] construct, I think will be reflected in what a new strategy will likely look like as well," Horowitz said during a keynote panel at the event. "What's necessary now is to turn those thoughts into reality and to do it faster." 

Horowitz, who has been in the brand-new role for about a month, said he was "pretty optimistic" about the Pentagon's direction and emphasis on AI and autonomy thanks to the creation of the emerging capabilities policy office, the CDAO, and the innovation steering group led by the undersecretary of defense for research and engineering. 

"I think all of those things make me optimistic that, as we enter the sort of FY '24 budget cycle, that we're going to start seeing that payoff as the department becomes  -- it's not a question of just more, but smarter at thinking about AI and autonomous systems and investments in a way that really pays off for the joint force."