Biden’s new agency AI rules are a big deal. Here’s why

The Biden Administration last week released a guidance memo establishing the rules of the road for the use of artificial intelligence (AI)—including early steps toward the regulation of AI training data—within federal agencies. We have a full analysis of the guidance here.

You might have missed it. The release didn’t get much media coverage. A guidance memo from the head of OMB (Office of Management and Budget) usually doesn’t.

But it may prove to be a watershed document.

In this early stage of AI regulation, government policies—good or bad—can become standards simply by virtue of existing. They may set frameworks and enshrine language repeated in subsequent proposals.  

Regulation isn’t new to the tech world. The B2B software business doesn’t operate on blind trust, for instance. Developers use SOC 2 reports (audits conducted by independent third-party firms) to assure clients that their data and systems are protected by robust safeguards.

The challenge with AI right now is to craft appropriate regulations that provide adequate safeguards while promoting innovation.

Two early regulatory frameworks: EU and USA

The world’s first major AI regulatory framework, the EU’s Artificial Intelligence Act, relies upon impact assessments and creates a classification system based on the scale of the risk posed by the AI system.

The Biden Administration’s new AI guidance memo similarly requires impact assessments, but grounds its risk concerns in potential threats to individual rights (“rights-impacting”) and risks to public safety (“safety-impacting”), including critical infrastructure, the environment, and human life. Expect those terms to come into common usage over the coming years.

The new federal agency guidance grows out of the AI in Government Act of 2020 and President Biden’s Executive Order 14110 signed in October 2023.

While significant, the new guidance is only one part of the regulatory foundation now under construction. Congress is drafting and considering a number of AI-related bills that would establish national standards and safeguards. Individual state legislatures are considering more than 500 proposals to embed guardrails at the state level, and many governors have begun to create state-based versions of the federal rules issued last week.

 

What the AI guidance covers

The new federal guidance applies to nearly all U.S. government departments and agencies. While smaller in scope than the EU AI Act (which covers AI systems placed on the market or put into use across all EU member states), its impact will be significant because any company contracting with a federal agency must abide by its rules. AI developers will need to build the internal practices, documentation, and reporting methods necessary to meet the new federal standards, and that lays the foundation for all other clients to demand similar practices and transparency.

The incentive is huge. Last year the federal government awarded $765 billion to more than 37,000 vendors.

AI systems aren’t small one-off projects. The federal government expects the new technology to become a routine part of the workday. In other words: This isn’t a neat new trick like 3-D movies. This is a shift as fundamental as the coming of e-mail and smartphones.


Want to sell to the government? Use legal training data

Moving forward, all federal agencies that contract with private vendors for AI systems and services are required to “ensure transparency and adequate performance” for the procured AI. That includes obtaining documentation to assess the AI’s capabilities through the use of model, data, and system cards.
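The memo doesn’t prescribe an exact schema for those cards, so here is a minimal sketch of what a vendor’s model card might contain, written in Python for concreteness. Every field name below is an illustrative assumption, not language from the guidance, and the system described is hypothetical.

    # A minimal, illustrative model card for a hypothetical procured AI system.
    # Field names are assumptions for the sake of example; the OMB memo does
    # not mandate this schema.
    model_card = {
        "model_name": "example-comment-summarizer",  # hypothetical system
        "version": "1.0.0",
        "intended_use": "Summarizing public-comment submissions for agency staff",
        "out_of_scope_uses": ["Adjudicating benefits", "Law-enforcement decisions"],
        "training_data": {
            "sources": ["Licensed news archive", "Public-domain federal records"],
            "licensing": "Documented per source; no unlicensed copyrighted material",
        },
        "evaluation": {
            "benchmarks": ["Internal summarization accuracy suite"],
            "known_limitations": ["Quality degrades on documents over 50 pages"],
        },
        "risk_designation": "Neither rights-impacting nor safety-impacting",
    }

    # A procurement reviewer could then check that required documentation exists:
    required = ["intended_use", "training_data", "evaluation", "risk_designation"]
    missing = [field for field in required if field not in model_card]
    print("Missing documentation fields:", missing or "none")

In practice, a card like this would presumably be paired with data and system cards covering the training corpus and the deployment environment, respectively.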

The new federal rules also require that any government-procured AI system comply with all applicable laws related to privacy, confidentiality, intellectual property, cybersecurity, human and civil rights, and civil liberties.

That simple rule could trigger a seismic shift within the biggest tech companies, many of which have been accused of illegally scraping copyrighted data to train their AI models. If OpenAI, Meta, or other companies want to contract with the feds, they will have to fully document the legality of their training data.

 

Urgency: Strict timelines, a Dec. 1 hard stop

To emphasize their importance and urgency, the new federal rules require every agency to name a Chief AI Officer with clear senior authority (GS-15 or a high-level appointee) and an Agency AI Governance Board (chaired by a Deputy Secretary or equivalent) by May 27, 2024.

By September 24, every federal agency must post on the agency website a plan to implement the new AI rules. The same plan must be submitted to the Office of Management and Budget.

December 1, 2024, is a hard-stop date by which all AI systems in use by federal agencies must be brought into compliance. If any AI system hasn’t been cleared by that date, its use must stop immediately and may not resume until the system complies.

The gears of government traditionally move slowly. These are laudably tough deadlines that will have agency employees working overtime in the coming months.
