Apr 23 2024

Building the Foundation for Public Sector Use of Generative AI

If agencies want to benefit from generative artificial intelligence, they'll need to lay the groundwork for success.

Generative artificial intelligence services, from Bing’s AI-powered chatbot to Google Gemini (formerly Bard) and a growing range of software solutions, are transforming how individuals engage with technology.

Users are turning to generative AI services to create images of Pope Francis sporting a puffer coat and to pick March Madness winners.

Businesses have embraced generative AI too. Gartner predicts that about 80 percent of enterprises will have used generative AI applications or application programming interfaces by 2026.

Government agencies might be on a lower trajectory, however. Gartner anticipates that less than 25 percent of agencies will offer generative AI-enabled citizen services by 2027.

There are reasons for the slower pace: Agencies are concerned about using sensitive data in AI training and outputs. They fret about potential AI inaccuracies and a resulting erosion of public trust. They worry about running afoul of federal mandates, including last year’s AI executive order.

Even so, generative AI remains suitable for many use cases, making it highly appealing to agencies. Procurement can produce demand forecasts, create purchase requests and screen proposals for compliance. Human services can simplify policy guidance, communicate with customers, extract insights from case notes and help make case decisions. Emergency management can glean insights from public health data, combine data feeds with forecast models, rapidly predict risks and automate alerts.

To achieve these and other benefits, agencies should invest in four foundational practices and processes for safe and widespread AI adoption.



Establish an Ethical AI Framework

No matter how powerful a generative AI system is, it won't deliver on its promise if it can’t be trusted. Generative AI solutions must be accurate, fair and transparent in the data they consume and the content they generate.

Ethical AI ensures that data used in model training is secure and free of bias. It also governs user prompts after the system is deployed. Ethical AI frameworks ensure that AI outputs are equitable and that human decision-makers, not machines, control how content is used.
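One concrete check an ethical AI framework might mandate before training is a representation audit: comparing how groups appear in a training sample against reference population shares. The sketch below is illustrative only; the field names, reference shares and 10 percent threshold are assumptions, not an agency standard.

```python
# Illustrative sketch of a simple pre-training bias check: flag groups
# whose share of the training sample deviates from a reference share.
# The labels, reference shares and threshold are assumed for the example.
from collections import Counter

population_share = {"urban": 0.60, "rural": 0.40}  # assumed reference shares

# One region label per training record (toy sample).
sample = ["urban"] * 80 + ["rural"] * 20

counts = Counter(sample)
total = sum(counts.values())

def representation_gaps(counts, total, reference, threshold=0.10):
    """Return groups whose observed share deviates from the reference
    share by more than the threshold, with the signed gap."""
    gaps = {}
    for group, ref in reference.items():
        observed = counts.get(group, 0) / total
        if abs(observed - ref) > threshold:
            gaps[group] = round(observed - ref, 2)
    return gaps

print(representation_gaps(counts, total, population_share))
# rural is underrepresented (0.20 observed vs. 0.40 reference)
```

A failed check like this would trigger the impact-assessment process the ethics committee oversees, rather than silently proceeding to training.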

Agencies should create and communicate sets of ethical principles to steer the design and use of generative AI. These principles must be aligned with federal guidelines such as the AI Risk Management Framework of the National Institute of Standards and Technology.

Developing ethical AI principles calls for a multidisciplinary AI ethics committee. The committee should establish general AI ethics and oversee ongoing impact assessments to identify and respond to evolving use cases and potential concerns.


Build Advanced, Scalable Architectural Models

Developing and deploying generative AI solutions requires architectural models that match agencies' use cases. Technical teams should become familiar with autoregressive models, generative pre-trained transformer models, variational autoencoders and generative adversarial networks. All have strengths and limitations that make them suitable for different use cases.

Whether building a generative AI system internally or deploying a commercial solution, models must be trained and tuned with internal data. Because training can consume significant computing resources, organizations will want to consider techniques such as distributed computing frameworks and model parallelism, which divides a model across multiple processors.
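The idea behind model parallelism can be sketched in a few lines: partition a model's layers into contiguous stages and assign each stage to a different processor. This toy example is framework-agnostic; the `DEVICES` names and `Stage` partitioning are illustrative, not a real training API.

```python
# Toy sketch of model parallelism: a "model" (a list of layers) is split
# into contiguous stages, each assigned to a different processor.
# DEVICES and the partitioning scheme are illustrative assumptions.

DEVICES = ["gpu:0", "gpu:1"]  # hypothetical processors

def make_layer(scale):
    """A 'layer' here is just a function that scales its input."""
    return lambda x: [v * scale for v in x]

model = [make_layer(2), make_layer(3), make_layer(0.5), make_layer(10)]

def partition(layers, n_stages):
    """Divide layers into contiguous stages, one per device."""
    size = (len(layers) + n_stages - 1) // n_stages
    return [layers[i:i + size] for i in range(0, len(layers), size)]

stages = partition(model, len(DEVICES))

def forward(x):
    # In a real system each stage would execute on its own device,
    # with activations transferred between devices between stages.
    for device, stage in zip(DEVICES, stages):
        for layer in stage:
            x = layer(x)
    return x

print(forward([1.0, 2.0]))  # each value passes through all stages: x2, x3, x0.5, x10
```

Real frameworks add the hard parts this sketch omits: transferring activations between devices, overlapping communication with computation, and pipelining batches so devices stay busy.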

Generative AI is also being advanced by hardware innovations that bring more robust data security, governance and privacy to the edge. For example, new AI PCs combine a central processing unit, graphics processing unit and neural processing unit on a single chip, creating a dedicated AI engine that efficiently processes AI workloads directly on the PC.

It’s also important to consider how a generative AI system will respond to expanding usage. Many AI systems work fine during development but may fail to scale across the enterprise in a cost-effective way. Agencies should test scalability up front, before investing resources in a system that won’t grow with their needs.
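Testing scalability up front can start very simply: probe latency at increasing concurrency before committing to a system. The sketch below uses a stand-in `call_model` function in place of a real inference endpoint; the concurrency levels and simulated latency are assumptions for illustration.

```python
# Minimal sketch of an up-front scalability probe: issue concurrent
# requests against a model endpoint and record median latency.
# call_model is a hypothetical stand-in for a real inference endpoint.
import time
import statistics
from concurrent.futures import ThreadPoolExecutor

def call_model(prompt: str) -> str:
    time.sleep(0.01)  # simulate inference latency
    return f"response to {prompt!r}"

def probe(concurrency: int, requests: int) -> float:
    """Run `requests` calls at the given concurrency; return median latency."""
    latencies = []
    def timed(i):
        start = time.perf_counter()
        call_model(f"request {i}")
        latencies.append(time.perf_counter() - start)
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        list(pool.map(timed, range(requests)))
    return statistics.median(latencies)

# Compare median latency at low vs. high concurrency before investing further.
print(f"median @ 1 worker:   {probe(1, 10):.3f}s")
print(f"median @ 10 workers: {probe(10, 10):.3f}s")
```

If median latency degrades sharply as concurrency grows, that is the signal, caught cheaply in development, that the system will not scale across the enterprise in a cost-effective way.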


Establish Data Governance

To make generative AI valuable and safe, agencies need effective data governance models, starting with privacy and security. No generative AI project should be undertaken without robust solutions to ensure access control, data encryption, data anonymization and other zero-trust methodologies.
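One of the anonymization methods mentioned above, keyed pseudonymization, can be sketched with the standard library: direct identifiers are replaced with keyed hashes before records ever reach an AI pipeline. The field names and the inline secret key are illustrative assumptions; in practice the key would live in a key management service.

```python
# Minimal sketch of keyed pseudonymization: replace direct identifiers
# with HMAC-derived tokens before data enters an AI pipeline.
# The record fields and SECRET_KEY are illustrative assumptions; a real
# deployment would fetch the key from a key management service.
import hmac
import hashlib

SECRET_KEY = b"rotate-and-store-in-a-vault"  # assumption: KMS-managed in practice

def pseudonymize(value: str) -> str:
    """Replace an identifier with a keyed hash. The same input always
    maps to the same token, so records stay joinable, but the original
    value cannot be recovered without the key."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

record = {"case_id": "A-1001", "ssn": "123-45-6789", "notes": "eligibility review"}
PII_FIELDS = {"ssn"}

safe_record = {k: pseudonymize(v) if k in PII_FIELDS else v
               for k, v in record.items()}
print(safe_record["ssn"] != record["ssn"])  # True
```

Because the mapping is deterministic, analysts can still link records across data sets without ever handling the raw identifiers, one piece of the zero-trust posture the text describes.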

Standardization can ensure consistency across data feeds and normalize disparate data formats. Agencies must maintain thorough documentation and metadata for these data sets with details about sources, collection methods and usage rights. These protections must extend across the lifecycle, from data acquisition to storage, analysis, sharing, archiving and disposal.
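The documentation and metadata requirement above lends itself to a machine-readable record per data set. The schema below is an illustrative assumption, not an agency standard; it simply captures the fields the text calls for: source, collection method, usage rights and lifecycle dates.

```python
# Sketch of a minimal dataset metadata record covering the documentation
# fields discussed above. The schema and example values are illustrative
# assumptions, not an agency standard.
import json
from dataclasses import dataclass, asdict

@dataclass
class DatasetMetadata:
    name: str
    source: str
    collection_method: str
    usage_rights: str
    acquired: str          # ISO date; lifecycle tracking starts here
    retention_until: str   # when the data must be archived or disposed of

meta = DatasetMetadata(
    name="case-notes-2023",
    source="state human services case management system",
    collection_method="nightly export, de-identified",
    usage_rights="internal model training only",
    acquired="2023-06-01",
    retention_until="2026-06-01",
)

print(json.dumps(asdict(meta), indent=2))
```

Storing records like this alongside each data set gives the governance team something concrete to audit across the lifecycle, from acquisition through disposal.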

Finally, agencies should establish a governance team to define policies, roles, responsibilities and procedures to enforce AI-specific data protections.


Engage the Right Stakeholders

Developing a successful generative AI framework requires a collaborative effort from various stakeholders, including data scientists, software engineers, ethics experts, legal advisers, agency leadership and end users. It takes more than an IT department spinning up generative AI models; practical use cases must be carefully crafted with input from a diverse group of experts. By involving all of these stakeholders in the development process, organizations can ensure that their generative AI frameworks are not only practical and efficient but also ethical, fair and safe.

Most important, agencies must engage the constituents who will benefit from the generative AI system’s outputs. Use their input to build a well-functioning AI model that draws on high-quality data sources and produces transparent, explainable results. For the government to employ generative AI successfully, all stakeholders must trust the data that goes in and the content that comes out.

Generative AI is still in its infancy, but solutions are rapidly improving in performance and capability. Now is the time for agencies to explore how generative AI can benefit their operations and the people they serve. With the right foundational elements in place, your organization has a better opportunity for success with generative AI.

