Jun 12 2024

Government’s AI Plans Come into Focus with DHS, OMB Roadmaps

Artificial intelligence experts say there is more work to be done in the policy arena.

The Department of Homeland Security’s Artificial Intelligence Roadmap and the Office of Management and Budget’s first governmentwide artificial intelligence policy establish essential frameworks and guidelines for responsible use of the technology.

The policies prioritize ethical standards, privacy and innovation through documents that concisely lay out the government’s positions on regulating and using AI in its executive operations.

Both documents align with the White House’s 2023 AI executive order, which tasked DHS with managing the technology in critical infrastructure and cyberspace and OMB with advancing its governance and innovation while managing resulting risks. By establishing guidelines in these areas, the government removes uncertainty, says Callie Guenther, senior manager of cyberthreat research at Critical Start. 

“This allows agencies to plan their operations more effectively in support of American citizens,” Guenther says. “The documents indicate a shift in procurement, setting principles requiring AI technologies to meet standards of fairness and safety.”

DHS’ AI roadmap outlines the technology’s integration across its various missions, enhancing operations and adhering to ethical standards. Meanwhile, OMB’s policy establishes guidelines for AI development and use while respecting privacy and civil liberties and supporting coordinated agency approaches.


AI Governance and Oversight Are Federal Focuses

The DHS roadmap was developed to explain just how closely the department has coordinated its governance and oversight efforts for the responsible and trustworthy use of AI, says DHS CTO David Larrimore.

“We require rigorous testing and clear guidelines for oversight and transparency, while principles from the National Institute of Standards and Technology’s AI Risk Management Framework and the AI Bill of Rights are incorporated to reinforce responsible AI development,” Larrimore says.

The roadmap offers a comprehensive strategy for addressing risks across sectors such as cybersecurity and infrastructure, where the Cybersecurity and Infrastructure Security Agency collaborates with international partners to identify and mitigate threats from AI misuse.

In transportation and maritime domains, the Transportation Security Administration and U.S. Coast Guard conduct thorough assessments of AI risks, using nonintrusive inspection technologies and advanced analytics to enhance border security and operational efficiency.

Through partnerships with organizations such as NIST, the Department of Justice and the Department of Defense, DHS facilitates cross-government coordination to identify sector-specific risks and establish guidelines for foundation AI models that align with global AI safety standards.


“I am a firm believer in full transparency, accountability and fairness,” Larrimore says. “These concepts are foundational for fostering trust in AI technologies and ensuring responsible use.”

The roadmap has also been written to prioritize collaboration between DHS’ Privacy Office and the Office for Civil Rights and Civil Liberties.

“This allows us to ensure our department’s use of AI reflects transparency, explainability, trustworthiness, accountability and fairness,” Larrimore says. “We absolutely avoid any inappropriate biases and safeguard privacy, civil rights and civil liberties.”

Future Guidance on AI Use and Global Collaboration Is Needed

AI will allow government and industry to combat threats at machine scale, not just human scale, says Nicole Isaac, vice president of global public policy for Cisco.

“This will dramatically improve data protection, business operations and government functions,” Isaac says. “We are eager to see the federal government also propose common-sense policies and ideas to address the development, deployment and use of AI.”


OMB’s policy calls for strengthening AI governance through the appointment of a chief AI officer at every agency, a role that includes managing risk, advancing responsible AI innovation and coordinating use of the technology.

The policy is an “important step in the right direction,” placing public trust first and encouraging transparency, fair use, safety and innovation, Isaac says.

“As AI technology matures and evolves, procurement and contractual obligations with government vendors, contractors and other partners will need to keep pace,” she says.

Guenther agrees that, while these documents lay a strong foundation, they should be seen as starting points for ongoing development.

“Although the document outlines DHS’ efforts to work with other countries, it is a minor part of the document and a gap that probably should be addressed in the future,” she says.
