Establish an Ethical AI Framework
No matter how powerful a generative AI system is, it won't deliver on its promise if it can’t be trusted. Generative AI solutions must be accurate, fair and transparent in the data they consume and the content they generate.
Ethical AI ensures that data used in model training is secure and free of bias. It also governs user prompts after the system is deployed. Ethical AI frameworks ensure that AI outputs are equitable and that human decision-makers, not machines, control how content is used.
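As a simplified illustration of governing prompts after deployment, the sketch below screens incoming prompts against a blocklist before they reach the model. The blocked terms and the screen_prompt helper are hypothetical placeholders, not part of any particular framework.

```python
# Minimal sketch of a deployment-time prompt governance check.
# The blocked terms and screen_prompt() are illustrative assumptions.
BLOCKED_TERMS = {"social security number", "classified", "home address"}

def screen_prompt(prompt: str, blocked_terms: set[str]) -> bool:
    """Return True if the prompt may be forwarded to the model."""
    lowered = prompt.lower()
    return not any(term in lowered for term in blocked_terms)

if screen_prompt("Summarize this public report", BLOCKED_TERMS):
    print("Prompt forwarded to the model for generation.")
else:
    print("Prompt held for human review.")
```

In practice, a simple keyword check would sit alongside richer policy controls, but the point stands: a human-defined policy, not the model, decides which requests proceed.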
Agencies should create and communicate sets of ethical principles to steer the design and use of generative AI. These principles must be aligned with federal guidelines such as the AI Risk Management Framework of the National Institute of Standards and Technology.
Developing ethical AI principles calls for a multidisciplinary AI ethics committee. The committee should establish agencywide AI ethics guidelines and oversee ongoing impact assessments to identify and respond to evolving use cases and potential concerns.
DISCOVER: AI is helping advance robotic process automation.
Build Advanced, Scalable Architectural Models
Developing and deploying generative AI solutions requires architectural models that match agencies' use cases. Technical teams should become familiar with autoregressive models, generative pre-trained transformer models, variational autoencoders and generative adversarial networks. All have strengths and limitations that make them suitable for different use cases.
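For teams beginning to evaluate these model families, the sketch below shows how a generative pre-trained transformer can be loaded and queried, assuming the open-source Hugging Face transformers library and the public gpt2 checkpoint as stand-ins for whatever model an agency ultimately selects.

```python
# Minimal sketch of experimenting with a generative pre-trained transformer.
# The transformers library and the gpt2 checkpoint are assumptions chosen
# for illustration, not a recommendation of any specific model.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Draft a plain-language summary of the policy:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```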
Whether building a generative AI system internally or deploying a commercial solution, models must be trained and tuned with internal data. Because training can consume significant computing resources, organizations will want to consider techniques such as distributed computing frameworks and model parallelism, which splits a single model across multiple processors so that no one device has to hold it all.
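The sketch below illustrates the basic idea behind model parallelism using PyTorch: a toy network is split across two GPUs so that neither device must hold the entire model. The two-GPU layout is an assumption for illustration; production systems typically rely on dedicated frameworks to manage the split.

```python
# Minimal sketch of model parallelism: the two halves of a toy network live
# on separate GPUs, so no single device must hold the whole model.
# Assumes at least two CUDA devices are available.
import torch
import torch.nn as nn

class SplitModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.part1 = nn.Linear(1024, 4096).to("cuda:0")  # first half on GPU 0
        self.part2 = nn.Linear(4096, 1024).to("cuda:1")  # second half on GPU 1

    def forward(self, x):
        x = torch.relu(self.part1(x.to("cuda:0")))
        return self.part2(x.to("cuda:1"))  # activations handed off between devices

model = SplitModel()
output = model(torch.randn(8, 1024))
print(output.shape)  # torch.Size([8, 1024]), computed on cuda:1
```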
Generative AI is also being advanced by hardware innovations that bring more robust data security, governance and privacy to the edge. For example, newer AI PCs combine a central processing unit, graphics processing unit and neural processing unit on a single chip. The result is a dedicated AI engine that processes AI workloads directly on the PC.
It’s also important to consider how a generative AI system will respond to expanding usage. Many AI systems work fine during development but may fail to scale across the enterprise in a cost-effective way. Agencies should test scalability up front, before investing resources in a system that won’t grow with their needs.
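One way to probe scalability up front is a simple load test that raises concurrency against an inference endpoint and records how latency degrades. The endpoint URL and payload below are hypothetical placeholders.

```python
# Minimal sketch of an up-front scalability check: fire concurrent requests
# at an inference endpoint and watch how latency changes as load grows.
# The endpoint URL and payload are hypothetical placeholders.
import time
from concurrent.futures import ThreadPoolExecutor

import requests

ENDPOINT = "https://example.agency.gov/genai/generate"  # placeholder

def timed_request(_):
    start = time.perf_counter()
    requests.post(ENDPOINT, json={"prompt": "status report"}, timeout=30)
    return time.perf_counter() - start

for concurrency in (1, 8, 32):
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = list(pool.map(timed_request, range(concurrency * 4)))
    print(f"{concurrency:>3} concurrent users: avg {sum(latencies) / len(latencies):.2f}s")
```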
LEARN MORE: Ask these five questions about AI.
Establish Data Governance
To make generative AI valuable and safe, agencies need effective data governance models, starting with privacy and security. No generative AI project should proceed without robust access controls, data encryption, data anonymization and other zero-trust safeguards.
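As one illustration of these controls, the sketch below anonymizes direct identifiers with salted hashes before records are used for model training. The field names and salt handling are illustrative assumptions, not a prescribed standard.

```python
# Minimal sketch of data anonymization: direct identifiers are replaced with
# salted hashes before records reach a training pipeline. The PII field list
# and salt value are illustrative assumptions.
import hashlib

PII_FIELDS = {"name", "email", "ssn"}

def anonymize(record: dict, salt: str) -> dict:
    cleaned = {}
    for field, value in record.items():
        if field in PII_FIELDS:
            digest = hashlib.sha256((salt + str(value)).encode()).hexdigest()
            cleaned[field] = digest[:12]  # stable pseudonym, not the raw value
        else:
            cleaned[field] = value
    return cleaned

print(anonymize({"name": "Jane Doe", "case_id": "A-1042"}, salt="rotate-me"))
```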
Standardization can ensure consistency across data feeds and normalize disparate data formats. Agencies must maintain thorough documentation and metadata for these data sets with details about sources, collection methods and usage rights. These protections must extend across the lifecycle, from data acquisition to storage, analysis, sharing, archiving and disposal.
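One lightweight approach is to keep a structured metadata record alongside each data set that captures its source, collection method, usage rights and lifecycle dates. The schema below is an illustrative assumption; agencies would tailor the fields to their own governance policies.

```python
# Minimal sketch of a dataset documentation record. The schema and example
# values are assumptions offered for illustration.
from dataclasses import asdict, dataclass
from datetime import date

@dataclass
class DatasetRecord:
    name: str
    source: str
    collection_method: str
    usage_rights: str
    acquired: date
    disposal_due: date

record = DatasetRecord(
    name="benefits-call-transcripts",
    source="agency contact center",
    collection_method="consented call recording",
    usage_rights="internal model training only",
    acquired=date(2024, 1, 15),
    disposal_due=date(2027, 1, 15),
)
print(asdict(record))
```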
Finally, agencies should establish a governance team to define policies, roles, responsibilities and procedures to enforce AI-specific data protections.
READ MORE: The NAIRR democratizes access to computing resources.
Engage the Right Stakeholders
Developing a successful generative AI framework requires a collaborative effort from various stakeholders, including data scientists, software engineers, ethics experts, legal advisers, agency leadership and end users. It takes more than an IT department spinning up generative AI models; practical use cases must be carefully crafted with input from a diverse group of experts. By involving all of these stakeholders in the development process, organizations can ensure that their generative AI frameworks are not only practical and efficient but also ethical, fair and safe.
Most important, agencies must engage the constituents who will benefit from the generative AI system’s outputs. Use their input to build a functioning AI model that draws on high-quality data sources and produces transparent, explainable results. For the government to employ generative AI successfully, all stakeholders must trust the data that goes in and the content that comes out.
Generative AI is still in its infancy, but solutions are rapidly improving in performance and capability. Now is the time for agencies to explore how generative AI can benefit their operations and the people they serve. With the right foundational elements in place, your organization has a better opportunity for success with generative AI.