The White House’s Office of Science and Technology Policy has identified a framework of five principles, known as the “Blueprint for an AI Bill of Rights,” intended to guide the design, use, and deployment of automated systems and artificial intelligence (AI). The Blueprint defines automated systems broadly as any system, software, or process that uses computation to determine outcomes, make or aid decisions, inform implementation, collect data, or otherwise interact with individuals or communities.
The framework is intended to apply to automated systems that have the potential to impact the public’s rights, opportunities, or access to critical resources or services. The White House specifically indicated that the framework should apply to housing, credit, employment, and financial services. While the Blueprint does not yet create mandatory new requirements for developers, designers, and deployers of automated systems, it may overlap with existing anti-discrimination laws. “Deployer,” as used in the framework, appears to mean any entity that deploys or uses an automated system, such as an AI interface or automated calling system. The Blueprint may also signal executive orders and regulations to come.
The framework consists of the following five principles:
1. Safe and Effective Systems
The first principle provides that the public should be protected from unsafe or ineffective systems. To that end:
- All automated systems should be developed in consultation with diverse communities to identify risks, concerns, and impacts.
- The systems should undergo pre-deployment testing.
- Design should focus on preventing issues such as the use of irrelevant data.
- Developers should use independent evaluation and reporting to confirm that systems are safe and effective and to mitigate potential harms.
2. Algorithmic Discrimination Protections
The second principle is that automated systems should be used and designed in an equitable way to prevent algorithmic discrimination. For instance, measures should be taken to prevent unfavorable outcomes based on an individual’s:
- race,
- color,
- ethnicity,
- sex (including pregnancy, childbirth, and related medical conditions, gender identity, intersex status, and sexual orientation),
- religion,
- age,
- national origin,
- disability,
- veteran status,
- genetic information, or
- any other classification protected by law.
Algorithmic discrimination is defined as unjustified different treatment that disfavors people based on any of these characteristics. The White House also called for ongoing assessments, such as algorithmic impact assessments, that rely on representative data and guard against proxies for demographic features. Results of these assessments should be made available to the public when possible.
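The Blueprint does not prescribe a particular methodology for these assessments, but one common building block is a comparison of favorable-outcome rates across demographic groups. The sketch below illustrates that idea using the familiar “four-fifths” heuristic from employment-selection analysis; the group labels, column names, and 80% threshold are illustrative assumptions, not requirements drawn from the Blueprint itself.

```python
# Minimal sketch of a disparate-impact check, one possible component of an
# algorithmic impact assessment. All data and thresholds are hypothetical.

from collections import defaultdict

def selection_rates(records, group_key, outcome_key):
    """Return the favorable-outcome rate for each demographic group."""
    totals = defaultdict(int)
    favorable = defaultdict(int)
    for record in records:
        group = record[group_key]
        totals[group] += 1
        if record[outcome_key]:
            favorable[group] += 1
    return {group: favorable[group] / totals[group] for group in totals}

def four_fifths_check(rates, threshold=0.8):
    """Flag groups whose rate falls below the chosen fraction of the
    highest group's rate (the 'four-fifths' heuristic)."""
    highest = max(rates.values())
    return {group: rate / highest < threshold for group, rate in rates.items()}

# Example usage with made-up application decisions.
decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

rates = selection_rates(decisions, "group", "approved")
print(rates)                      # {'A': 0.66..., 'B': 0.33...}
print(four_fifths_check(rates))   # {'A': False, 'B': True} -> group B is flagged
```

A fuller assessment would go further, for example by examining whether facially neutral inputs act as proxies for protected characteristics and by documenting results for public reporting where possible.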
3. Data Privacy
The third principle calls for protection against abusive data practices. For instance, the White House stated that the public has a reasonable expectation of privacy and that data should be used strictly in the context in which it was collected. System deployers should seek permission for the collection, use, access, transfer, and deletion of data to the greatest extent possible. All consent requests should be brief, written in plain language, and should offer choices for specific contexts of use.
4. Notice and Explanation
The fourth principle calls for regular, up-to-date notice of how automated systems are being used and how they impact the public. It remains unclear what methods of notice are necessary.
5. Human Alternatives, Consideration, and Fallback
The fifth principle provides that consumers should be able to opt out of automated systems and have access to a person who can quickly remedy issues. Reasonable expectations should be considered to determine when a human alternative must be provided, and those expectations should focus on protecting the public from harmful impacts.
Reporting that includes a description of human governance processes, accessibility, outcomes, and effectiveness should be made publicly available when possible.