Aug 01 2024
Federal CAIOs Outline Early Use Cases and Challenges with the Emerging Technology

AI-focused leaders from four federal agencies share their plans for preparing and using the emerging technology.

Listen in on a conversation among federal IT leaders, and it won’t be long before artificial intelligence, with all its promise and challenges, comes up. Such discussions, in fact, are now mandatory: Last October, the White House issued an executive order on the federal government’s role in the development and use of AI.

A few months later, the Office of Management and Budget released a memorandum directing agencies in the areas of AI governance, innovation and risk management. One key directive required agencies to designate a chief AI officer. Some organizations, including the Department of Homeland Security, already had a CAIO in place; others have since named people to the position with variations on the title. To help FedTech readers understand what’s in store for AI deployment across the government space, we recently spoke with five of these leaders and asked them to describe their mission and plans. Here’s what they had to say.


What Does a CAIO Do?

  • “As CAIO, I’m responsible for operational AI implementation — using AI to improve the efficiency of our internal operations.” — Dorothy Aronson, Chief Data Officer and CAIO, National Science Foundation
  • “My job is to coordinate across our externally facing AI initiatives, including the support we provide for AI research, the application of AI in science and engineering, and workforce education and development.” — Tess deBlanc-Knowles, Special Assistant to the Director for Artificial Intelligence, National Science Foundation
  • “Everything the department is involved in is going to be touched by AI in some shape or form. I’m focused on promoting responsible use of AI to spur innovation and benefit citizens’ health and well-being.” — Micky Tripathi, Acting CAIO, National Coordinator for Health IT, and Co-Chair, AI Task Force, Department of Health and Human Services
  • “We’ll be building off of decades of work in the Intelligence Community to use and deploy machine learning to provide relevant insights to policymakers in actionable time frames.” — John Beieler, Assistant Director of National Intelligence for Science & Technology and CAIO, Office of the Director of National Intelligence
  • “My responsibilities align with those detailed in the OMB memo: coordinating internal use of AI, promoting AI innovation and, last but not least, managing the risks.” — Mangala Kuppa, CTO and CAIO, Department of Labor

RELATED: USDA’s chief AI officer has big plans.

Areas of Focus for AI-Enabled Tools

  • “There are four areas where I think NSF can make advances with AI. One is our ability to simplify and streamline our business processes. Second, we could improve our ability to search across domains, including files and databases, all at the same time. Third, we’re looking at using AI to better understand the impacts of our funding. And fourth, and most important, we want to use it to facilitate citizen access to the foundation.” — Aronson
  • “We’re not pursuing AI simply for the sake of AI integration. It’s really all about improving the efficiency and effectiveness of our operations to better serve the research community.” — deBlanc-Knowles
  • “There are so many examples, and we already have a long list of use cases: drug discovery and research, diagnostics and therapeutics development, electronic medical records, human services. We’re really looking at everything.” — Tripathi
  • “The fundamental challenge that we have in the IC is the amount of data that we collect and have to analyze and get into the hands of the right people quickly. That’s the core use case for us: focusing on the information triage problem.” — Beieler
  • “Ultimate success for us will entail using AI as a force multiplier for our services. We already have use cases for things like data categorization and natural language processing, and we’re looking at chatbots and other tools that can bring about efficiencies internally so we can do more as a workforce.” — Kuppa

Grappling with AI Ethics, Security and Risks

  • “The federal zero-trust initiative is a critical part of AI security in that it requires information be secured at the data element level. User training is also important: We’re developing standards of operation that people can follow easily so they can detect and understand AI and know how to recognize things like AI-driven social engineering.” — Aronson
  • “Given our programmatic support for science, engineering and education, we’ve been supporting AI research and development for decades. We’ve also been thinking about AI ethics and risks for a really long time, and we’re carrying this through to our operational assessment of AI tools.” — deBlanc-Knowles
  • “We’re paying very close attention to the risks and thinking through ways to address them. Quality assurance, for example: How do we discern a high-quality AI model from a low-quality model that might pose a safety risk? How do you make sure that these models aren’t perpetuating or contributing to the inequities that are already present in our healthcare system? Fortunately, I think the list of risks of AI is shorter than the list of opportunities.” — Tripathi
  • “Everything we do with AI at the IC is governed by our AI Ethics Framework and our Principles of Artificial Intelligence Ethics for the Intelligence Community. The technology is a top priority for us, and we’re moving aggressively to meet this moment, but we’re equally focused on doing so responsibly and ethically.” — Beieler
  • “AI is an extremely powerful technology, unlike anything we’ve seen in our lifetime, but it also poses risks that need to be managed very carefully. For the Labor Department, whenever we look at a use case for AI, we start with a risk assessment and analysis to categorize it as low, medium or high. Then we put the appropriate practices in place based on that level of risk.” — Kuppa

With AI Comes New Infrastructure and Support Needs

  • “We need powerful computing, but we also need data because AI doesn’t work without a lot of accurate information. And then there’s the cultural component to AI adoption, because tools are useless if you don’t embrace them. So just as important as the technologies that are involved are the technicians who create them and the people who use them.” — Aronson
  • “There will certainly be new infrastructure requirements as we adopt AI at HHS. For compute and software and other technologies that we need to access at scale, we plan to leverage the shared resources available through the National Artificial Intelligence Research Resource Pilot.” — Tripathi
  • “Because we’ve used machine learning for years, we already have significant compute infrastructure. The question is whether it’s the right infrastructure for the frontier AI models we want to leverage in the future.” — Beieler
  • “For the cloud services we use, like Amazon and Azure, everything has to be FedRAMP-certified. We’re taking a multicloud architecture approach for AI because different platforms offer different advantages.” — Kuppa

DISCOVER: Government’s AI plans are coming into focus.

What’s on CAIOs’ Minds Today?

  • “AI implementation is a team sport, and many agencies, including NSF, are relying on each other to share their areas of expertise. If other parts of the federal government are leveraging AI in ways that might be helpful to us, we want to know about those use cases so we can learn from their experience.” — Aronson
  • “We’re developing an AI strategic plan for HHS that we will have completed by early 2025. We’ve been active with AI in a lot of places, so it’s about taking a step back, looking at what we’ve done, and then developing a strategy and a set of core principles that all of our agencies can follow moving forward.” — Tripathi
  • “We’re taking AI very seriously. It’s one of our top priorities here, both at the ODNI and across the IC. But we’re also moving forward very carefully as we keep in mind the things that are foundational to how we work, like civil liberties and privacy. There’s a little bit of tension there, but we’re committed to doing both at the same time.” — Beieler
  • “I’m confident that in the coming months and years, we’re going to see that our services have improved because of AI.” — Kuppa

UP NEXT: Agencies can make AI work for their missions in six ways.

Another Perspective: CAIOs Face an ‘Uphill Battle’

“AI is all about applying algorithms to data, and the U.S. government has unique data that no one else has anywhere in the world. That means we have a lot of potential to do amazing things with AI. But on the flip side, for every good use of AI, there is a nefarious use. The same AI that can be used to detect cancer can be used to build biological weapons. The same AI that can be used to defend our computer networks can be used as an offensive tool. I hope that CAIOs can become fixtures at their agencies and make sure that we stay on track with AI, but I think it will be an uphill battle. We’ll have to wait and see how successful they will be.” — Kevin Walsh, Director of IT and Cybersecurity, Government Accountability Office
