
Jan 16 2025
Security

4 Principles to Help Federal Agencies Adopt AI Ethically and Securely

In a white paper, SolarWinds frames the challenge in terms of privacy, accountability, transparency and simplicity.

Federal civilian agencies understand that artificial intelligence will be increasingly useful in streamlining and automating complex workloads. They hope to use data-driven insights to enhance decision-making and address thorny problems. The Government Accountability Office reported that 20 of 23 agencies surveyed have “about 1,200 current and planned AI use cases — specific challenges or opportunities that AI may solve.”

Yet, there are significant barriers to developing and deploying AI solutions. Data silos can bury insights in a mountain of data that is difficult to analyze, so agencies need workflows that will standardize, clean and validate the data. Evolving regulatory and compliance issues make it imperative to anonymize and pseudonymize personal data before training AI models. Above all, agencies must ensure that personal data will not be breached, leaked or otherwise misused. All of this speaks to the need for trust.
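As a concrete illustration of that preparation step, the sketch below shows one way an agency might clean, validate and pseudonymize records before they reach a training set. It is a minimal example, not a production pipeline: the field names (`ssn`, `case_id`, `category`) are hypothetical, and the hard-coded key stands in for a properly managed secret.

```python
import hashlib
import hmac

# Hypothetical key for illustration; in practice this would come from a
# secrets manager, never from source code.
SECRET_KEY = b"replace-with-managed-secret"

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a keyed hash so records remain
    linkable for analysis without exposing the raw value."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

def prepare_record(record: dict) -> dict:
    """Standardize, validate and pseudonymize one record before training.
    Field names are illustrative, not from any real agency schema."""
    # Standardize: strip stray whitespace from string fields.
    cleaned = {k: v.strip() if isinstance(v, str) else v for k, v in record.items()}
    # Validate: reject records missing a required field.
    if not cleaned.get("case_id"):
        raise ValueError("record failed validation: missing case_id")
    # Pseudonymize the direct identifier; keep only analysis fields.
    return {
        "citizen_ref": pseudonymize(cleaned.pop("ssn")),
        "case_id": cleaned["case_id"],
        "category": cleaned.get("category", "unknown").lower(),
    }

record = {"ssn": "123-45-6789", "case_id": "C-1001", "category": " Benefits "}
safe = prepare_record(record)
```

Because the keyed hash is deterministic, the same person maps to the same `citizen_ref` across records, which preserves linkage for analysis while keeping the raw identifier out of the training data.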


Guiding Principles for Implementing AI

The issues of security, trust and ethical deployment of AI are top of mind for government agencies. An in-depth analysis by SolarWinds sheds light on how to deploy AI in ways that serve both the agency and its constituents. The company’s white paper, “Navigating the AI Revolution,” shares strategies for the ethical implementation of AI. It outlines four guiding principles, discussed here in depth, along with suggestions specifically tailored to civilian agencies.

1. Privacy and Security

AI systems protect privacy and security when steps are taken to ensure that data is appropriately collected, used and stored. This means fully safeguarding personal and mission-critical data. A 2023 executive order on AI directs agencies to address AI systems’ most pressing security risks, including those related to biotechnology, cybersecurity, critical infrastructure and other areas of national security. Among other provisions, the executive order requires that agencies create safeguards for the ethical collection and use of citizens’ personal data for AI.

Gartner recommends that organizations adopt a comprehensive AI trust, risk and security management (TRiSM) program to help them “integrate much-needed governance upfront, and proactively ensure AI systems are compliant, fair, reliable and protect data privacy.” There are many commercially available and open-source products that can help with this, but they rely on clean, standardized, validated data that has been duly anonymized or pseudonymized.

1,200

The number of current and planned AI use cases reported by federal agencies in 2023

Source: SolarWinds, “Navigating the AI Revolution,” April 2024

2. Accountability and Fairness

AI models need to be evaluated for fairness, and their decisions regulated. This means keeping a human in the technology loop. As Krishna Sai, senior vice president for technology and engineering at SolarWinds, explains, “Feedback and validation mechanisms should be built in so that any negative experiences are proactively captured and addressed. Regularly evaluating various AI models for fairness is essential, especially because many of these models are trained on existing data and have built-in biases.”

Sai points out that it is important to identify biases in the model and work to eliminate them. An effective way to gradually remove bias is to start with basic use cases where evaluation is simple, using feedback and validation mechanisms to record and address negative user experiences. Budget-constrained civilian agencies might struggle with resource issues but should not skimp when it comes to human oversight of AI decisions and ensuring model fairness and accountability. Sai recommends choosing tools that are built with security and accountability in mind. SolarWinds’ AI by Design framework provides guidance for integrating AI into IT management solutions.
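The kind of fairness evaluation Sai describes can start very simply. The sketch below, a hypothetical example rather than any vendor's method, compares a model's approval rates across groups and flags the model for human review when the gap exceeds a chosen threshold; both the group labels and the 0.2 threshold are illustrative assumptions.

```python
from collections import defaultdict

def approval_rates_by_group(decisions):
    """decisions: list of (group, approved) pairs from an AI model's output.
    Returns the approval rate per group -- a basic fairness check."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def flag_disparity(rates, threshold=0.2):
    """Flag the model for human review when approval rates between any two
    groups differ by more than `threshold` (the value is illustrative)."""
    return max(rates.values()) - min(rates.values()) > threshold

# Toy decision log: group A is approved twice as often as group B.
decisions = [("A", True), ("A", True), ("A", False),
             ("B", False), ("B", False), ("B", True)]
rates = approval_rates_by_group(decisions)
```

Run periodically against production decision logs, a check like this gives the feedback-and-validation loop a concrete trigger: any flagged disparity routes the model back to a human reviewer.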


3. Transparency

Forrester defines transparency as “the perception that an AI system is leading to decisions in an open and traceable way and is making every effort to share verifiable information on how it operates.” Transparency involves providing the individuals tasked with monitoring AI with clear, comprehensive visibility into exactly how the organization uses AI, along with explanations of AI-driven decisions.

As with anything related to IT, a single-pane-of-glass view of the system is critical, especially in a complex environment of networks, infrastructure, databases, third-party applications and workloads spread across on-premises and cloud deployments. A solution such as SolarWinds Observability provides a full-stack view, ensuring there are no visibility gaps. The offering provides real-time monitoring, alerting, logging of alerts, auditing and reporting, giving the team both a macro and a micro view, along with the crucial ability to drill down and find out why the AI component made a specific decision.
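Underpinning that drill-down capability is an audit trail that captures each AI-driven decision with its inputs and rationale. The minimal in-memory sketch below illustrates the idea; the class, field names and the `triage-v1` model label are all hypothetical, and a real deployment would write to durable, tamper-evident storage rather than a Python list.

```python
import json
import time

class DecisionAuditLog:
    """Minimal in-memory audit trail for AI-driven decisions, so a
    reviewer can later drill down into why a decision was made."""

    def __init__(self):
        self._entries = []

    def record(self, model, inputs, decision, rationale):
        """Log one decision with its inputs and a human-readable rationale."""
        entry = {
            "ts": time.time(),
            "model": model,
            "inputs": inputs,
            "decision": decision,
            "rationale": rationale,
        }
        self._entries.append(entry)
        return entry

    def drill_down(self, model):
        """Return all logged decisions for one model, newest first."""
        return sorted((e for e in self._entries if e["model"] == model),
                      key=lambda e: e["ts"], reverse=True)

    def export(self):
        """Serialize the trail for an external reporting or audit tool."""
        return json.dumps(self._entries)

log = DecisionAuditLog()
log.record("triage-v1", {"priority_score": 0.92}, "escalate",
           "score above 0.9 threshold")
```

Keeping the rationale alongside the inputs is the design choice that matters here: it turns a raw event log into something a reviewer can actually interrogate when asked why the system escalated a case.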


4. Simplicity to Build Trust

AI experiences must build on existing behaviors to make the transition organic. This means building trust in these tools gradually, not just flipping a switch and hoping for the best. Sai recommends starting with back-end systems such as HR, with an eye to ensuring a smooth, user-friendly experience that can then be carried over to public-facing systems — especially important when dealing with mission-critical applications in civilian agencies.

Gartner VP Analyst Dean Lacheca agrees with this gradual approach: “Government organizations can accelerate GenAI adoption by focusing on use cases that predominantly impact internal resources, avoid perceived risks associated with citizen-facing services and build knowledge and skill associated with the technology.”

Invest for Lasting Benefits

The guiding principles of AI development and deployment might appear to impose a burden. However, the long-term, lasting benefits far outweigh any additional work. AI-based solutions can improve decision-making and insights, and Sai has seen instances of dramatically improved response time and optimized resource management. The key is to ensure trust and mitigate risk through the four guiding principles.
