Understanding AI’s Impacts on Today’s World
Many government agencies are already using machine learning (ML) and other forms of AI.
The U.S. Postal Service uses AI to plan delivery routes. The IRS depends on AI to detect fraudulent tax returns. The Department of Energy is looking at AI to optimize agricultural crop yields. The National Institutes of Health is investing in AI for biomedical and behavioral research.
In these use cases, it’s imperative that ML models and their outputs be as free from bias as possible.
The White House’s Blueprint for an AI Bill of Rights establishes key principles to help guide the design and use of AI. The guidelines described in the blueprint are core protections, from safeguarding individuals against algorithmic discrimination to allowing people to opt out of automated systems.
However, there are broader AI issues that agencies also should consider. The energy required to run a large language model like ChatGPT has been widely documented, along with the associated greenhouse gas emissions. That impact might be justified if the service is advancing scientific research, but is it warranted if the primary purpose is less consequential?
The potential merits of ChatGPT notwithstanding, its overall costs and impacts should be weighed.
A broader consideration of AI deployments includes fairness and inclusion, sustainability and human rights. In fact, these are some of the concerns addressed by the United Nations’ Principles for the Ethical Use of Artificial Intelligence in the United Nations System, released in September 2022.
Six Tenets of Responsible AI
With these issues in mind, here are six tenets of responsible and ethical AI that should guide agencies in their use of automated systems:
- Human rights: AI should never be used to impinge on fundamental human rights, including dignity, respect, fairness, autonomy and freedom in its various forms. Consideration of human rights also should include issues like sustainability.
- Human oversight: Use of AI should accommodate human considerations and allow for human-in-the-loop approaches, such as the triage pattern sketched in the first example after this list. Data inputs and outputs should be subject to regular human reviews that include diverse perspectives. Individuals should be able to opt out of automated systems when possible, with access to human alternatives.
- Explainable use of AI: Constituencies should be informed about when and how automated systems are being used and how outputs might affect them. Organizations that use AI shouldn’t take a “black box” approach, in which ML models and data inputs are hidden from the public. While ML algorithms can be mathematically complex, their use should be transparent and explainable; the second example after this list sketches one transparent approach.
- Security, safety and reliability: AI systems should operate securely and reliably. They should never cause unsafe conditions that place individuals at risk of harm. Systems should be developed with input from diverse domain experts to identify potential issues.
- Personal privacy: AI should include safeguards to prevent the exposure of personally identifiable information and prohibit the misuse of personal data. Individuals should have input into and control over how their data is used.
- Equity and inclusion: ML algorithms should be trained and regularly updated with fair and diverse data sets to ensure that bias isn’t baked into ML models; a simple bias check is sketched in the last example after this list. Likewise, ML models should be deployed in ways that maximize the equity of their outputs. Just as important, the design and deployment of AI across use cases should include the viewpoints of diverse stakeholders, not just data scientists. As broader swaths of the public are affected by AI, the involvement of diverse constituencies will be essential to the fair and inclusive use of AI.
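
To make the human oversight tenet concrete, here is a minimal sketch of a human-in-the-loop triage pattern, in which a model decides automatically only when it is confident and defers everything else to a person. The fraud-scoring scenario, the `ReviewQueue` class and the 0.90 threshold are illustrative assumptions, not any agency’s actual system.

```python
# Hypothetical human-in-the-loop triage: auto-decide only at high
# confidence; everything in the gray zone goes to a human reviewer.
from dataclasses import dataclass, field

@dataclass
class ReviewQueue:
    """Cases a person must decide before any action is taken."""
    pending: list = field(default_factory=list)

    def submit(self, case_id: str, score: float) -> None:
        self.pending.append((case_id, score))

def triage(case_id: str, fraud_score: float, queue: ReviewQueue,
           threshold: float = 0.90) -> str:
    """Route a scored case: auto-flag, auto-clear or human review."""
    if fraud_score >= threshold:
        return "auto-flag"
    if fraud_score <= 1 - threshold:
        return "auto-clear"
    queue.submit(case_id, fraud_score)  # a person makes the final call
    return "human-review"

queue = ReviewQueue()
print(triage("return-001", 0.97, queue))  # auto-flag
print(triage("return-002", 0.55, queue))  # human-review
```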
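
Transparency can also be built into the model itself. The sketch below uses a plain linear score whose per-feature contributions can be disclosed alongside each decision, in contrast to a black-box model. The feature names and weights are invented for illustration.

```python
# Hypothetical transparent scoring: a linear model whose per-feature
# contributions can be published with every decision.
WEIGHTS = {"income_discrepancy": 1.8, "late_filings": 0.9,
           "amended_returns": 0.4}

def score_with_explanation(features: dict):
    """Return the total risk score and each feature's contribution."""
    parts = {name: WEIGHTS[name] * value
             for name, value in features.items() if name in WEIGHTS}
    return sum(parts.values()), parts

total, parts = score_with_explanation(
    {"income_discrepancy": 0.7, "late_filings": 2.0, "amended_returns": 1.0})
print(f"risk score: {total:.2f}")
for name, contribution in sorted(parts.items(), key=lambda kv: -kv[1]):
    print(f"  {name}: +{contribution:.2f}")  # each factor's share
```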
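
Finally, the equity tenet implies measuring outcomes, not just intentions. One common first-pass check (among many) is the demographic parity gap, the difference in positive-outcome rates between groups; a large gap does not prove bias, but it flags a model for closer review. The groups and predictions below are made up.

```python
# Demographic parity gap: difference in positive-outcome rates
# across groups. A simple first-pass bias signal, not a verdict.
from collections import defaultdict

def positive_rates(predictions):
    """predictions: iterable of (group, predicted_label) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for group, label in predictions:
        counts[group][0] += int(label == 1)
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

def parity_gap(predictions) -> float:
    rates = positive_rates(predictions)
    return max(rates.values()) - min(rates.values())

sample = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
print(f"parity gap: {parity_gap(sample):.2f}")  # 0.67 - 0.33 = 0.33
```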
Trust Is Imperative for the Future of AI
The potential downsides of less-than-responsible use of AI are many, including an erosion of public trust in government. At a time when only 20 percent of Americans trust the government, agencies can’t afford to get these AI systems wrong.
By taking steps now to follow the six tenets of responsible and ethical AI, the federal government can ensure its deployment of AI has the best chance of serving the public fairly and effectively.
Ethical AI principles within government organizations help earn citizens’ trust, foster government legitimacy and improve AI adoption, while a lack of accountability can undermine the prospects of using AI for good. Agencies must hold all future AI projects to these principles to ensure accountability and reliability.