Mar 31 2023

AI Bill of Rights: What the Federal Government's 'Blueprint' Is

As it deploys artificial intelligence in more use cases, the federal government offers guidelines for development and use. Will they be enough for agencies to achieve ethical AI?

As artificial intelligence services such as the ChatGPT chatbot and the Lensa AI image generator move into the mainstream, many are questioning whether their outputs reflect bias.

Lensa AI drew criticism for renderings considered biased on the basis of race and gender, and ChatGPT has likewise been accused of producing responses perceived as biased.

Facebook parent company Meta ended a public demo of its science-focused Galactica language model in November after only three days. The model was criticized for generating outputs that sounded plausible but were inaccurate and potentially biased.

The stakes are even higher for government organizations, which are deploying AI to drive decisions that affect the public. In these use cases, the concern is that baked-in bias could cause AI systems to treat different groups of people unfairly.

In response to this concern, the Biden Administration issued the Blueprint for an AI Bill of Rights in October. The National Institute of Standards and Technology followed in January with the release of the AI Risk Management Framework. Both documents aim to protect individuals and society from AI-related risks. The principles these documents describe and the actions they recommend are necessary considerations.

Still, while understanding and reducing the negative impact of bias in AI solutions is important to achieving ethical AI, it’s only one part. There are additional steps agencies should take to ensure their use of AI is truly responsible and ethical.


Understanding AI’s Impacts on Today's World

Many government agencies are already using machine learning (ML) and other forms of AI.

The U.S. Postal Service uses AI to plan delivery routes. The IRS depends on AI to detect fraudulent tax returns. The Department of Energy is looking at AI to optimize agricultural crop yields. The National Institutes of Health is investing in AI for biomedical and behavioral research.

In these use cases, it’s imperative that ML models and their outputs be as free from bias as possible.

The White House’s Blueprint for an AI Bill of Rights establishes key principles to help guide the design and use of AI. The guidelines described in the blueprint are core protections, from safeguarding individuals against algorithmic discrimination to allowing people to opt out of automated systems.

However, there are broader AI issues that agencies also should consider. The energy required to run a large language model like ChatGPT has drawn scrutiny, along with the associated greenhouse gas emissions. That impact might be justified if the service is advancing scientific research, but is it warranted if the primary purpose is not?

The potential merits of ChatGPT notwithstanding, its overall cost and its impact should be considered.

A broader consideration of AI deployments includes fairness and inclusion, sustainability and human rights. In fact, these are some of the concerns addressed by the United Nations’ Principles for the Ethical Use of Artificial Intelligence in the United Nations System, released in September.

DIVE DEEPER: End-to-end artificial intelligence helps support federal mission sets.

Six Tenets of Responsible AI

With these issues in mind, here are six tenets of responsible and ethical AI that should guide agencies in their use of automated systems:

  1. Human rights: AI should never be used to impinge on fundamental human rights, including dignity, respect, fairness, autonomy and freedom in its various forms. Consideration of human rights also should include issues like sustainability.
  2. Human oversight: Use of AI should accommodate human considerations and allow for human-in-the-loop approaches. Data inputs and outputs should be subject to regular human reviews that include diverse perspectives. Individuals should be able to opt out of automated systems when possible, with access to human alternatives.
  3. Explainable use of AI: Constituencies should be informed about when and how automated systems are being used and how outputs might affect them. Organizations that use AI shouldn’t take a “black box” approach, in which ML models and data inputs are hidden from the public. While ML algorithms can be mathematically complex, their use should be transparent and explainable.
  4. Security, safety and reliability: AI systems should operate securely and reliably. They should never cause unsafe conditions that place individuals at risk of harm. Systems should be developed with input from diverse domain experts to identify potential issues.
  5. Personal privacy: AI should include safeguards to prevent the exposure of personally identifiable information and prohibit the misuse of personal data. Individuals should have input into and control over how their data is used.
  6. Equity and inclusion: ML algorithms should be trained and regularly updated with fair and diverse data sets to ensure that bias isn’t baked into ML models. Likewise, ML models should be deployed in ways that maximize the equity of their outputs; one simple equity check is sketched after this list. Just as important, the design and deployment of AI across use cases should include the viewpoints of diverse stakeholders, not just data scientists. As broader swaths of the public are affected by AI, the involvement of diverse constituencies will be essential to the fair and inclusive use of AI.
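The equity checks described in the sixth tenet can be made concrete with even simple tooling. Below is a minimal Python sketch, using only the standard library, that computes favorable-outcome rates per demographic group and flags a demographic parity gap. The record fields, data and review threshold are hypothetical illustrations, not values drawn from the blueprint or any agency policy.

```python
from collections import defaultdict

def demographic_parity_gap(records, group_key="group", outcome_key="approved"):
    """Compute the favorable-outcome rate per group and the largest
    gap between any two groups (the demographic parity difference)."""
    totals = defaultdict(int)
    favorable = defaultdict(int)
    for rec in records:
        totals[rec[group_key]] += 1
        favorable[rec[group_key]] += int(rec[outcome_key])

    rates = {g: favorable[g] / totals[g] for g in totals}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap

# Hypothetical automated decisions; each record is one model output.
decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

rates, gap = demographic_parity_gap(decisions)
print(f"Per-group approval rates: {rates}")
print(f"Demographic parity gap: {gap:.2f}")

# Illustrative threshold only; a real cutoff would come from agency
# policy. A large gap routes decisions to human review (tenet 2).
if gap > 0.1:
    print("Gap exceeds threshold -- route for human review.")
```

Audits like this are deliberately simple; the point is that equity metrics can be monitored continuously and wired into the human-in-the-loop reviews the second tenet calls for, rather than assessed once at deployment.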

REVIEW: The intelligence community is developing new uses for AI.

Trust Is Imperative for the Future of AI

The potential downsides of less-than-responsible use of AI are many, including an erosion of public trust in government. At a time when only 20 percent of Americans trust the government, agencies can’t afford to get these AI systems wrong.

By taking steps now to follow the six tenets of responsible and ethical AI, the federal government can ensure its deployment of AI has the best chance of serving the public fairly and effectively.

Ethical AI principles within government organizations help ensure citizens’ trust, foster government legitimacy and improve AI adoption. A lack of accountability in government AI use can undermine the prospects of using AI for good.

Government agencies must follow ethical AI principles to ensure accountability and reliability in all future AI projects.

