Overview: What the new federal agency AI rules require

The Biden Administration’s guidance on the use of AI by federal agencies, issued last week by the Office of Management and Budget, establishes an early foundation for the regulation of AI systems. The full document can be found here.

Here’s an overview of what it contains.

Who’s in charge: Creating AI leadership and oversight, quickly

The new federal rules require every U.S. government agency (with a few exceptions) to appoint a Chief AI Officer (CAIO) and to establish an Agency AI Governance Board. The CAIO and the chair of the Governance Board (not the same person) must be senior-level government officials. Both the officer and the board must be in place by May 27, 2024.

“Agencies must ensure that AI issues receive adequate attention from the agency’s senior leadership,” the memo says. The Governance Board must be chaired by the agency’s Deputy Secretary or an official with equivalent seniority. In other words: This is not back-office work.

Non-tech representation is one of the emerging themes in AI oversight. Because of the profound impact AI is already having on personal economic, medical, employment, and housing decisions, there’s a rising call to include non-tech stakeholders in development and oversight positions.

The new federal rules enshrine that idea by requiring “appropriate representation” on AI Governance Boards, including senior officials responsible for cybersecurity, data, privacy, civil rights and civil liberties, equity, statistics, human capital, procurement, budget, legal, and customer experience. Agencies are encouraged to consult external experts to help “inject additional technical, ethics, civil rights and civil liberties, or sector-specific expertise” into the oversight process.

 

Three tasks: Coordinate, innovate, and manage AI risk

The new federal rules bundle the CAIO’s responsibilities under three major rubrics:

·  Manage and coordinate the use of AI within the agency, including sharing information, working with other agencies, and supporting standards-setting bodies such as NIST

·  Promote AI innovation by identifying and prioritizing appropriate uses of AI to advance the agency’s mission

·  Manage risks from the use of AI, with special attention paid to safety-impacting and rights-impacting AI

 

Training data: An early framework for oversight and risk management

Several sections in the new federal rules set forth general guidelines for the appropriate use of training data, without getting overly specific. Some highlights:

·  “Agencies should develop adequate infrastructure and capacity to share, curate, and govern agency data for use in training, testing, and operating AI.”

·  All data used to help develop, test, or maintain AI applications, regardless of source, “should be assessed for quality, representativeness, and bias.”

That requirement to assess the quality of training data is critically important.

Garbage in, garbage out is one of the oldest truisms in computing. Flawed, biased, or poor-quality training data will result in equally faulty decision-making and outputs from the AI system. That’s why the appropriate regulation and use of training data is a top priority of the Transparency Coalition.
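For illustration only, here is a minimal Python sketch of one way a team might check a training set’s representativeness against a reference population and flag groups that fall outside a tolerance. The group labels, the tolerance value, and the `representativeness_report` helper are assumptions invented for this example; the memo does not prescribe any particular method.

```python
from collections import Counter

def representativeness_report(training_groups, reference_shares, tolerance=0.05):
    """Compare group shares in a training set against reference population shares.

    training_groups: list of group labels, one per training record.
    reference_shares: dict mapping group label -> expected share (0 to 1).
    tolerance: maximum allowed absolute gap before a group is flagged.
    """
    counts = Counter(training_groups)
    total = len(training_groups)
    report = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total if total else 0.0
        report[group] = {
            "observed_share": round(observed, 3),
            "expected_share": expected,
            "flagged": abs(observed - expected) > tolerance,
        }
    return report

# Hypothetical example: a training set skewed toward urban records.
training_groups = ["urban"] * 800 + ["rural"] * 200
reference_shares = {"urban": 0.60, "rural": 0.40}
print(representativeness_report(training_groups, reference_shares))
```

A real assessment would go well beyond share counts, covering data provenance, labeling quality, and known sources of bias.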

 

Federal AI code and models will be open source

The new federal rules establish an expectation of openness and transparency. This is hugely important.

The guidance says: “Agencies must proactively share their custom-developed code—including models and model weights—for AI applications in active use and must release and maintain that code as open source software on a public repository.” Data used to develop and test AI models will likely constitute a data set subject to federal transparency laws.

There are a few exceptions, such as when the code is protected by patent or intellectual property law, or when sharing it would create an identifiable risk to national security, individual privacy, or other protected interests.

There’s a lot of legal wiggle room in that exception clause, and it would not be surprising to see legal battles arise over what counts as an “identifiable risk” under the rules. The legal foundation for this data transparency requirement can be found in the OPEN Government Data Act of 2018.

 

Risk management: Focus on ‘safety-impacting’ and ‘rights-impacting’ AI

The new AI guidance puts an early emphasis on scrutinizing AI systems and uses that are considered “safety-impacting AI” or “rights-impacting AI.” The two categories are described below.

Safety-Impacting AI

AI whose output produces an action or serves as a principal basis for a decision that has the potential to significantly impact the safety of:

·  Human life or well-being

·  Climate or environment

·  Critical infrastructure

·  Strategic assets or resources

Rights-Impacting AI

AI whose output serves as a principal basis for a decision concerning a specific individual or entity that has a legal, material, binding, or similarly significant effect on that individual’s or entity’s:

·  Civil rights, civil liberties, or privacy

·  Equal opportunities (to education, housing, insurance, credit, employment)

·  Access to critical government resources or services

A full list of “impacting” situations can be found on pp. 29-33 of the March 28, 2024 memo.
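To make the two categories more concrete, here is a minimal, hypothetical Python sketch of how an agency team might record which impact domains a proposed use of AI principally affects and triage it against the two categories. The domain names paraphrase the bullet lists above; the `AIUseCase` class and its triage logic are assumptions for this example, not anything the memo defines.

```python
from dataclasses import dataclass, field

# Impact domains paraphrased from the memo's two category definitions.
SAFETY_DOMAINS = {
    "human_life_or_wellbeing",
    "climate_or_environment",
    "critical_infrastructure",
    "strategic_assets_or_resources",
}
RIGHTS_DOMAINS = {
    "civil_rights_liberties_or_privacy",
    "equal_opportunities",
    "access_to_critical_government_services",
}

@dataclass
class AIUseCase:
    name: str
    impact_domains: set = field(default_factory=set)

    def triage(self):
        """Return the memo categories this use case appears to fall into."""
        categories = []
        if self.impact_domains & SAFETY_DOMAINS:
            categories.append("safety-impacting")
        if self.impact_domains & RIGHTS_DOMAINS:
            categories.append("rights-impacting")
        return categories or ["presumed lower-risk"]

# Hypothetical example: a benefits-eligibility screening tool.
use_case = AIUseCase("benefits screening", {"equal_opportunities"})
print(use_case.triage())  # ['rights-impacting']
```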

 

Requirements for managing high-risk AI systems

The federal guidelines characterize the new risk-management rules as “an initial baseline for managing risk from the use of AI,” which leaves much room for improvement.

At minimum, federal agencies must follow these practices by Dec. 1, 2024, before using new or existing rights-impacting or safety-impacting AI systems (a minimal checklist sketch follows the list):

1.    Complete an AI impact assessment (details in the following section)

2.    Provide public notice and plain-language documentation about the use of AI

3.    Assess the AI’s impact on equity, fairness, and algorithmic discrimination

4.    Consult with affected communities and the public

5.    Maintain opt-out options that are “prominent, readily available, and accessible” for people to decline the use of AI functionality in favor of a human alternative

6.    Test the AI for performance in a real-world context

7.    Independently evaluate the AI, including a documentation review by an independent reviewing authority not involved in the system’s development

8.    Ensure the existence of a fail-safe mechanism to minimize the risk of significant harm

9.    Conduct human reviews to evaluate the ongoing need and functionality of an AI system in operation
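As an illustration only, here is a minimal Python sketch of how a team might track the nine minimum practices as a pre-deployment gate. The practice labels paraphrase the list above; the `ready_to_deploy` helper and its behavior are assumptions made for this example, not anything specified in the memo.

```python
# Practice labels paraphrase the nine minimum practices listed above.
MINIMUM_PRACTICES = [
    "impact_assessment_completed",
    "public_notice_and_plain_language_docs",
    "equity_and_discrimination_assessment",
    "consultation_with_affected_communities",
    "opt_out_to_human_alternative",
    "real_world_performance_testing",
    "independent_evaluation",
    "fail_safe_mechanism",
    "ongoing_human_review",
]

def ready_to_deploy(completed):
    """Return (ready, missing) given the set of practices already satisfied."""
    missing = [p for p in MINIMUM_PRACTICES if p not in completed]
    return (not missing, missing)

# Hypothetical status for a rights-impacting system still under review.
done = {"impact_assessment_completed", "real_world_performance_testing"}
ready, missing = ready_to_deploy(done)
print(ready)    # False
print(missing)  # the seven practices still outstanding
```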

 

What does an AI impact assessment contain?

The use of impact assessments as an enforcement mechanism for AI models and systems is the subject of wide debate in policy circles. The EU and the Biden Administration have defaulted to the impact assessment model, but it’s still too early to know if this will become the global standard. Impact assessments are only as good as what they require. Documentation, transparency, and third-party verification should be minimum starting points.  

The new federal rules require an impact assessment to document:

·  The intended purpose for the AI and its expected benefit, supported by specific metrics and qualitative analysis.

·  The potential risks of using AI, including a risk-benefit analysis and identification of impacted stakeholders

·  The quality and appropriateness of the training data, including its collection and preparation, its quality, its relevance, its breadth, and whether it is publicly disclosable as an open government data asset
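For illustration, here is a minimal, hypothetical Python sketch of a record structure that captures those three documentation elements. The `AIImpactAssessment` class and its field names are assumptions made for this example; the memo describes what must be documented, not any particular format.

```python
from dataclasses import dataclass, field

@dataclass
class AIImpactAssessment:
    # Intended purpose and expected benefit, with supporting metrics/analysis.
    intended_purpose: str
    expected_benefits: list = field(default_factory=list)
    # Potential risks, risk-benefit analysis, and impacted stakeholders.
    identified_risks: list = field(default_factory=list)
    impacted_stakeholders: list = field(default_factory=list)
    # Quality and appropriateness of the training data.
    training_data_sources: list = field(default_factory=list)
    training_data_quality_notes: str = ""
    publicly_disclosable_as_open_data: bool = False

# Hypothetical example entry.
assessment = AIImpactAssessment(
    intended_purpose="Prioritize benefits applications for manual review",
    expected_benefits=["reduced average processing time, with a stated metric"],
    identified_risks=["potential disparate impact across applicant groups"],
    impacted_stakeholders=["benefits applicants", "agency caseworkers"],
    training_data_sources=["historical application records (hypothetical)"],
    training_data_quality_notes="Assessed for quality, representativeness, and bias.",
    publicly_disclosable_as_open_data=False,
)
print(assessment.intended_purpose)
```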

That’s a short list of required documentation, which is why last week’s memo is considered by most to be a first step, and not the last word, on the appropriate regulation of AI within the federal government.
