Defense Department CIO Dana Deasy, right, and Air Force Lt. Gen. Jack Shanahan, director of the Joint Artificial Intelligence Center, host a roundtable discussion on the enterprise cloud initiative with reporters at the Pentagon, Aug. 9, 2019. Photo: Air Force Staff Sgt. Andrew Carroll/U.S. Defense Department

Nov. 21, 2019

DOD Board Lays Out Ethical Principles for AI

The Defense Innovation Board has offered guidance for how the Pentagon should use and govern artificial intelligence systems.

In a sign that artificial intelligence is no ordinary emerging technology, the Department of Defense is in the market for an ethicist to watch over the development of AI.

“I think we have to do a better job, quite honestly, to provide a little more clarity and transparency about what we’re doing with artificial intelligence,” Lt. Gen. Jack Shanahan, director of the DOD’s Joint Artificial Intelligence Center, said at an Aug. 30 press briefing.

This is just one of the actions that the DOD is taking to stave off concerns about the unintended consequences of AI. On Oct. 31, the Defense Innovation Board (DIB), a DOD advisory group, approved a draft document outlining recommendations on the ethical use of AI.


DOD Should Carefully Manage AI Systems

The document lays out five key principles for the department’s use of AI. AI should be: 

  1. Responsible: “Human beings should exercise appropriate levels of judgment and remain responsible for the development, deployment, use, and outcomes” of DOD AI systems, the document says.
  2. Equitable: The Pentagon “should take deliberate steps to avoid unintended bias in the development and deployment of combat or non-combat AI systems that would inadvertently cause harm to persons.”
  3. Traceable: DOD’s AI engineering discipline “should be sufficiently advanced such that technical experts possess an appropriate understanding of the technology, development processes, and operational methods of its AI systems, including transparent and auditable methodologies, data sources, and design procedure and documentation.” 
  4. Reliable: DOD AI systems “should have an explicit, well-defined domain of use, and the safety, security, and robustness of such systems should be tested and assured across their entire life cycle within that domain of use,” the document says.
  5. Governable: Finally, the Pentagon’s AI tools “should be designed and engineered to fulfill their intended function while possessing the ability to detect and avoid unintended harm or disruption, and for human or automated disengagement or deactivation of deployed systems that demonstrate unintended escalatory or other behavior.”
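To make the "governable" and "traceable" principles more concrete, here is a minimal, hypothetical sketch of what they could look like in software: a wrapper that keeps an audit trail of every prediction and supports both automated and human disengagement. Every name in this example is illustrative; the DIB document describes principles, not an implementation.

```python
from dataclasses import dataclass, field
from typing import Any, Callable


@dataclass
class GovernableSystem:
    """Wraps a model with an audit log and a disengagement switch."""
    model: Callable[[Any], Any]                  # the underlying AI system
    anomaly_score: Callable[[Any, Any], float]   # scores input/output pairs
    threshold: float = 0.9                       # disengage above this score
    engaged: bool = True
    audit_log: list = field(default_factory=list)

    def predict(self, x):
        if not self.engaged:
            raise RuntimeError("System disengaged; human review required.")
        y = self.model(x)
        score = self.anomaly_score(x, y)
        # Traceability: record every input, output, and monitor score.
        self.audit_log.append((x, y, score))
        if score > self.threshold:
            # Governability: automated disengagement on unintended behavior.
            self.engaged = False
        return y

    def human_override(self, engage: bool):
        """A human operator can disengage or re-engage at any time."""
        self.engaged = engage


if __name__ == "__main__":
    # Toy model and monitor, for demonstration only.
    system = GovernableSystem(
        model=lambda x: x * 2,
        anomaly_score=lambda x, y: 1.0 if abs(y) > 100 else 0.0,
    )
    print(system.predict(3))    # normal output: 6
    print(system.predict(60))   # large output trips the monitor: 120
    print(system.engaged)       # False: the system disengaged itself
```

The design choice the principles point toward is that monitoring, logging and shutdown live outside the model itself, so a human retains a path to intervene regardless of what the model does.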

The document also recommends enhancing training for DOD employees and service members who will work with AI; refining reliability benchmarks as well as testing and evaluation techniques for AI; developing risk management methods for the use of AI; and formally adopting the board's ethics principles as DOD policy.

“AI is expected to affect every corner of the Department and transform the character of war,” a supporting draft document states. “The DIB proposes the adoption of this set of AI Ethics Principles to help guide, inform, and inculcate the ethical and responsible use of AI — in both combat and non-combat environments.”

The DOD is moving forward on AI in multiple domains and with a variety of approaches. 

The Pentagon’s AI strategy, released in February, calls for the use of standardized processes in areas such as data, testing and evaluation, and cybersecurity. The JAIC plans to work with the National Security Agency, U.S. Cyber Command and numerous DOD cybersecurity vendors to standardize data collection across the department.

A key part of the Air Force’s digital strategy is to “leverage the power of data as the foundation of artificial intelligence and machine learning to enable faster decision-making and improved war fighter support.”

And the National Geospatial-Intelligence Agency, which collects and examines geospatial intelligence and distributes data to the DOD and national security community, has been an enthusiastic proponent of computer vision technology. NGA says in a technology focus areas document that its analysts need computer vision tools to “preprocess imagery, automatically identify features and objects, and assess change in remotely sensed overhead imagery and full-motion video.”
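To make "assess change" concrete: a classical baseline for that task is pixel-level differencing of two co-registered before-and-after images. The sketch below is a generic illustration using OpenCV, not NGA's actual tooling; the file names and threshold are placeholders.

```python
import cv2
import numpy as np


def detect_change(before_path: str, after_path: str, thresh: int = 40):
    """Return a binary mask of pixels that changed between two images."""
    before = cv2.imread(before_path, cv2.IMREAD_GRAYSCALE)
    after = cv2.imread(after_path, cv2.IMREAD_GRAYSCALE)
    if before is None or after is None:
        raise FileNotFoundError("could not read one of the input images")

    # Blur to suppress sensor noise before differencing.
    before = cv2.GaussianBlur(before, (5, 5), 0)
    after = cv2.GaussianBlur(after, (5, 5), 0)

    # Absolute per-pixel difference, thresholded to a binary change mask.
    diff = cv2.absdiff(before, after)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)

    # Morphological opening removes isolated speckle from the mask.
    kernel = np.ones((3, 3), np.uint8)
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)


if __name__ == "__main__":
    mask = detect_change("site_2018.png", "site_2019.png")
    print(f"{(mask > 0).mean():.1%} of pixels changed")
```

Production systems would typically replace the simple threshold with learned models, but the structure, comparing registered imagery over time and flagging differences for an analyst, is the same.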

Keeping human employees informed about and involved in AI-related processes can minimize the negative impact of the workforce disruptions the technology brings, noted a report published earlier this year by the Professional Services Council Foundation.

“The Defense Department is a leader within government when it comes to developing ethical principles for AI,” the report states, “but other agencies will likewise need to give careful thought to the ethical dimensions of AI within the specific contexts of their missions and operations.”

