US Proposes AI Bill of Rights to Regulate Artificial Intelligence

The White House just released its Blueprint for an AI Bill of Rights. The blueprint contains an ambitious and sweeping set of guidelines for how tech companies, citizens, and the government can build more trustworthy AI systems that are in line with US approaches to fairness, privacy, and equal opportunity. The AI Bill of Rights is only a guideline for now, but it may well serve as the inspiration for future US legislation, as the name “blueprint” suggests.

What Is in the US AI Bill of Rights?

The blueprint document (PDF) proposes five principles to “guide the design, use, and deployment of automated systems to protect the American public.” Each principle is accompanied by a section called “From Principles to Practice,” which assists those tasked with implementing it at a more technical level. For each principle, this section provides explanations and elaborations in three areas:

  • Why the principle is important, with references to a wide range of real use cases
  • What should be expected of automated systems
  • How these principles can move into practice

To show how it fits into the existing legal landscape, the authors of the blueprint link their recommendations to existing US laws and regulations.

The Five Principles

We explain each of the blueprint’s five principles in more detail below.

Safe and Effective Systems

You should be protected from unsafe or ineffective systems. 

This principle focuses on the robustness and safety of the system, proposing that safety and effectiveness be considered over the entire life cycle of the AI system. Particular attention is paid to “unintended, yet foreseeable, uses or impacts” that negatively affect individuals and communities.

Algorithmic Discrimination Protections

You should not face discrimination by algorithms and systems should be used and designed in an equitable way. 

This principle links potential harm from AI systems to definitions of discrimination in US law: “race, color, ethnicity, sex (including pregnancy, childbirth, and related medical conditions, gender identity, intersex status, and sexual orientation), religion, age, national origin, disability, veteran status, [and] genetic information.” Designers of AI systems should take proactive and continuous measures to protect individuals and communities from harm, including algorithmic impact assessments.
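
One concrete measure that often appears in such impact assessments is a selection-rate disparity check. Below is a minimal sketch in Python, assuming a simple log of (group, outcome) pairs; the function names and toy data are our own illustration, not something the blueprint prescribes, and real assessments go well beyond a single metric.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Share of favorable outcomes per demographic group.

    `decisions` is an iterable of (group, favorable) pairs, where
    `favorable` is True when the automated system produced a
    positive outcome for that person.
    """
    totals = defaultdict(int)
    favorable = defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        favorable[group] += int(outcome)
    return {g: favorable[g] / totals[g] for g in totals}

def impact_ratios(rates):
    """Ratio of each group's selection rate to the highest group's rate.

    Ratios well below 1.0 (a common informal threshold is 0.8, the
    "four-fifths rule") flag potential disparate impact for review.
    """
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Toy decision log: group label plus whether the outcome was favorable.
log = [("A", True), ("A", True), ("A", False),
       ("B", True), ("B", False), ("B", False)]
rates = selection_rates(log)
print(rates)                 # {'A': 0.667, 'B': 0.333} (approx.)
print(impact_ratios(rates))  # {'A': 1.0, 'B': 0.5}
```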

Data Privacy

You should be protected from abusive data practices via built-in protections and you should have agency over how data about you is used.

Data collected and used by AI systems must be guarded against misuse, and consent for data collection must be “meaningfully given.” Continuous surveillance and monitoring should not be used “in education, work, housing, or in other contexts where the use of such surveillance technologies is likely to limit rights, opportunities, or access.”

Notice and Explanation

You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you. 

This principle recommends that AI systems include clear documentation and notice so that users can understand how the systems affect them. Systems “should provide explanations that are technically valid, meaningful and useful to you and to any operators or others who need to understand the system, and [be] calibrated to the level of risk based on the context.” The text stresses that explanations and reports must be “in plain language.”
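
To make the notice requirement concrete, here is a minimal sketch of a decision record that pairs an automated outcome with a plain-language explanation and a human point of contact. The schema and field names are hypothetical illustrations; the blueprint describes the properties of a good explanation but does not prescribe a format.

```python
from dataclasses import dataclass, field

@dataclass
class DecisionNotice:
    """Illustrative record pairing an automated decision with the
    plain-language notice the blueprint calls for. Field names are
    hypothetical; the blueprint does not define a schema."""
    system_name: str          # tells the person an automated system was used
    outcome: str              # the decision that impacts them
    explanation: str          # plain-language reason, calibrated to risk
    key_factors: list[str] = field(default_factory=list)  # main inputs behind the outcome
    human_contact: str = ""   # who can review or remedy the decision

notice = DecisionNotice(
    system_name="Tenant screening model v2",
    outcome="Application declined",
    explanation="Your reported income was below the required "
                "three times the monthly rent.",
    key_factors=["income-to-rent ratio"],
    human_contact="appeals@landlord.example",
)
print(notice.explanation)
```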

Human Alternatives, Consideration, and Fallback

You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.

The final principle focuses on empowering users to opt out of AI systems, flag errors, and escalate decisions to a human following clear procedures. This ability is stressed particularly for sensitive applications in areas such as criminal justice, employment, education, and health.

Scope and Use Cases

The blueprint explicitly lists areas where the US AI Bill of Rights “should be considered”:

  • Civil rights, civil liberties, or privacy, including but not limited to:
    • Speech-related systems such as automated content moderation tools
    • Surveillance and criminal justice system algorithms
    • Systems with a potential privacy impact
    • Any system that has the meaningful potential to lead to algorithmic discrimination
  • Equal opportunities, including but not limited to:
    • Education-related systems
    • Housing-related systems
    • Employment-related systems
  • Access to critical resources and services, including but not limited to:
    • Health and health insurance technologies
    • Financial system algorithms
    • Systems that impact the safety of communities
    • Systems related to access to benefits or services or assignment of penalties

These areas broadly overlap with the definition of “high-risk AI systems” in the EU AI Act, which is currently being finalized. While the EU AI Act introduces several risk categories, from “minimal risk” all the way to “unacceptable risk,” the US AI Bill of Rights does not delineate risk categories.

The blueprint uses the more general term “automated systems” (as opposed to “AI”) to make it clear that it covers a wide range of technologies:

“Automated systems include, but are not limited to, systems derived from machine learning, statistics, or other data processing or artificial intelligence techniques, and exclude passive computing infrastructure.”

(Source: Blueprint for an AI Bill of Rights)

The EU AI Act uses a similarly broad definition but retains the term “artificial intelligence.” Other proposed US regulations use variations of the term, such as “automated decision system.” Companies need to be aware that many of the computer-based systems they operate today will fall under these broad definitions of AI and automated systems.

Why Is the US Proposing an AI Regulation Now?

Over the last few years, the European Union has taken strong steps to regulate artificial intelligence from a consumer protection perspective. The EU AI Act is about to pass the European Parliament and is expected to begin being implemented in EU and member-state regulations in 2023. As a regulatory superpower, the EU aims to make its view of how AI should be regulated the de facto global standard, much as the GDPR has become the standard for privacy on the web even outside the EU.

The US may be concerned that it is lagging behind the EU in setting the regulatory agenda in the digital economy. The US AI Bill of Rights could be seen as an attempt to propose AI regulation from an American perspective.

What Other AI Regulations Are Being Considered in the US?

While the US federal government has not been active on AI regulation, state and local governments have taken their own initiatives.

New York City: The Automated Employment Decision Tool Law

The Automated Employment Decision Tool Law (AEDT), which comes into effect on January 1, 2023, regulates the use of automated systems in employment decisions. The AEDT places compliance obligations on employers in New York City that use AI tools, rather than on the software vendors who develop them.

As with the EU AI Act, the AEDT defines automated employment decision tools broadly, reaching beyond deep learning systems to include “any computational process … derived from machine learning, statistical modeling, data analytics, or artificial intelligence.”

The AEDT’s requirements broadly align with the US AI Bill of Rights, signaling that further state and local legislation may take similar shape. Its requirements include:

  • Conducting an independent, annual bias audit
  • Providing disclosures
  • Notifying candidates or employees
  • Providing an accommodation or alternative selection process

Penalties do not approach the EU AI Act’s envisioned maximum of 6% of global turnover, but they may add up over time because each day a non-compliant tool is used counts as a new violation.
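
A back-of-the-envelope sketch shows how per-day violations compound. The dollar amounts below are hypothetical placeholders rather than the statutory figures; an actual calculation would follow the penalty schedule in the law itself.

```python
# Hypothetical penalty amounts, for illustration only; the AEDT's
# actual schedule is set by the law and enforced per violation.
FIRST_VIOLATION = 500
SUBSEQUENT_VIOLATION = 1_500

def accumulated_penalty(days_in_use: int) -> int:
    """Total penalty for one tool used non-compliantly for `days_in_use`
    days, where each day counts as a separate violation."""
    if days_in_use <= 0:
        return 0
    return FIRST_VIOLATION + (days_in_use - 1) * SUBSEQUENT_VIOLATION

# Three months of non-compliant use already reaches six figures.
print(accumulated_penalty(90))  # 500 + 89 * 1,500 = 134,000
```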

California Fair Employment and Housing Council

Earlier in 2022, the California Fair Employment and Housing Council proposed regulations to oversee the use of AI in employment decisions. As with the AEDT, the proposal defines automated decision systems broadly, imposes recordkeeping requirements, and specifically outlaws discrimination based on characteristics protected by the Fair Employment and Housing Act (FEHA). While the regulation has not yet been adopted, it further underscores the growth of AI regulation in the US focused on fairness and discrimination.

Key Takeaways

  • The blueprint for the US AI Bill of Rights outlines how the US federal government sees AI regulation. 
  • While the US AI Bill of Rights may not carry the force of law that the EU AI Act soon will, it sets a framework for AI regulation at the state and local level in the US.
  • Given the focus of the AI Bill of Rights on discrimination tied to existing US law, many software applications impacting citizens’ lives may soon have to be revised or replaced to comply with upcoming laws.