FAQ: The New Mexico Artificial Intelligence Act (HB 60) explained in plain language

Lawmakers in the New Mexico State Capitol are considering one of the most important AI bills in the nation.

The New Mexico AI Act, one of the most significant AI bills of 2025, is making its way through the legislature in Santa Fe.

This FAQ is meant to explain what the bill does and doesn’t do, and why it’s necessary.

What is HB 60, the New Mexico AI Act?

New Mexico HB 60, also known as the New Mexico Artificial Intelligence Act, is primarily a consumer protection and transparency bill.

It places specific documentation and disclosure requirements on the developers and deployers of “high-risk” AI systems, i.e. systems that make consequential decisions about consumers and the products and services they are seeking. Consequential decisions affect people in areas like employment, health care, and housing.

The full text of the bill is available here. The LESC bill analysis is available here.

What are the risks the bill addresses?

It’s widely known that artificial intelligence systems are prone to producing errors. GitHub hosts an entire resource page devoted to the ongoing compilation of these hallucinations, mistakes, and improper uses of the technology.

Examples include:

  • An AI system that claims to determine a person’s sexual orientation based on a single photograph.

  • An AI-assisted recruiting tool used by Amazon that discriminated against women. In effect, Amazon's system taught itself that male candidates were preferable.

  • Officials in the UK scrapped an exam that based its results on an algorithm after accusations that the system was biased against students from poorer backgrounds.

As AI becomes more embedded in the systems used to evaluate school applications, housing rental applications, mortgage lending decisions, medical treatment plans, insurance adjustments, and other life-affecting decisions, it’s incumbent upon government to ensure that these systems are used appropriately and fairly. Consumers should be aware of the use of AI in these processes. They should have the option to appeal to a live human when they believe AI has affected them in error or with bias.

Why is this bill needed now?

Artificial intelligence systems are in their infancy, and there are no ‘rules of the road’ for investors, entrepreneurs, technology developers, non-technology deployers, or consumers. Developers and deployers are seeking legal guidelines, standards, and norms so that they can invest their resources wisely, and so that legal and ethical companies are properly rewarded by the marketplace rather than undermined by bad actors and harmful AI products. Now is the time to establish those rules to protect New Mexico businesses and consumers and allow all to thrive.

What would be disclosed to consumers?

Under HB 60, any person or company using an AI system to make a consequential decision would need to provide the following information to consumers:

  • Notice that the AI system will be used or will be a substantial factor in making the decision;

  • Information describing the system, the purpose of the system, and the nature of the decision being made;

  • The deployer’s contact information;

  • If the decision is adverse to a consumer, additional information including the following:

    • A statement including the principal reason for the decision;

    • The degree and manner in which the system contributed to the decision;

    • The source and type of data that was processed by the system to make the decision;

    • An opportunity to correct any incorrect personal data the system processed to make its decision;

    • An opportunity to appeal the decision, provided that such appeal does not pose a risk to the life or safety of the consumer.

Would the AI Act hinder research or innovation?

No. The Act specifically (in Section 12) says HB 60 shall not be construed to restrict the ability of any person or organization to engage in public or peer-reviewed scientific or statistical research, including clinical trials.

The same section specifies that the Act shall not restrict the ability of persons or organizations to engage in pre-market testing, “including the development, research and testing of artificial intelligence systems.” (Quoting HB 60, Sec. 12, A (9).)

What’s the difference between a ‘developer’ and a ‘deployer’?

An AI system developer is the person or company that creates an artificial intelligence system and makes it publicly available for use in New Mexico.

An AI system deployer typically purchases an AI system from a developer and then hosts or otherwise commercializes the AI system. Deployers may also be developers. A deployer is also a person or public entity that deploys or uses a high-risk AI system to make a consequential decision affecting a consumer in New Mexico.

What disclosures would be required from AI developers?

These transparency requirements include information on the data sources used to train the AI system, and periodic assessments of any known or potential risks of bias or discrimination by the AI system against consumers. In addition, they call for the disclosure of the intended uses for these systems, and mandate reporting of specific incidents of discrimination to impacted consumers and the New Mexico Attorney General’s office.  

Additionally, this bill calls for clear disclosure to consumers when an AI system is used as a basis or partial basis for these consequential decisions. The bill also states that consumers have the right to seek an explanation for, and appeal, such decisions.  

What is required of deployers under HB 60?

A company or person who deploys a high-risk AI system would be required to:

  • Use reasonable care to protect consumers from known or foreseeable risks of algorithmic discrimination;

  • Implement and regularly review a risk management policy and program to govern the deployment (use) of a high-risk AI system, including steps to identify, document, and mitigate algorithmic discrimination;

  • Publish and regularly update a website listing a summary of the types of high-risk AI systems used and a detailed explanation of the nature, source, and extent of the information collected and used;

  • Conduct an impact assessment for high-risk AI systems annually and within 90 days of an intentional and substantial system modification.

What would that impact assessment include?

The required impact assessment includes:

  • The intended uses, contexts, and benefits of the AI system;

  • Analysis of risks of algorithmic discrimination and steps taken to mitigate discrimination;

  • A description of the categories of data the system processes as inputs and outputs;

  • A summary of categories of any data used to customize a system;

  • The metrics used to evaluate the performance and known limitations of the system, including whether the system was tested; the locations where test data were collected; the demographic groups represented in the test data in terms of age, ethnic group, gender, or race; and any independent studies carried out to evaluate the system for algorithmic discrimination;

  • A disclosure of whether a system was used in a manner consistent with the AI system developer’s intent.

Who would be exempt from the impact assessment requirement?

Impact assessments would not be required when all of the following conditions are met:

  • The high-risk AI system impacts fewer than 50 consumers;

  • The deployer does not use the deployer’s own data to train the system;

  • The deployer uses the system solely for its intended uses as disclosed by the system’s developer;

  • The deployer makes any impact assessment provided by the developer available to consumers;

  • The system continues learning based on data derived from sources other than the deployer’s data.

Does HB 60 require the disclosure of trade secrets?

No. HB 60 specifies that nothing in the Act shall require a developer or deployer to disclose a trade secret or other information protected from disclosure by state or federal law.

When such information is withheld from an otherwise required disclosure, deployers and developers would need to notify consumers and provide a basis for the withholding.

Who would enforce the AI Act?

The New Mexico Department of Justice would promulgate rules to enforce the New Mexico AI Act. This rulemaking would be done in consultation with AI experts, academic researchers, civil rights organizations, deployers, developers, labor unions, and organizations representing the interests of consumers.

If a New Mexico resident is harmed by an AI system not in compliance with the AI Act, HB 60 would give them the right to hold companies accountable in court through a private right of action.

Does the Act contain an opportunity to cure?

Yes. An ‘opportunity to cure’ is essentially a limited time period during which a developer or deployer who is out of compliance with the AI Act is allowed to fix the problem.

HB 60 provides deployers and developers 90 days to correct violations of the Artificial Intelligence Act before facing action from the New Mexico Department of Justice (NMDOJ).

This ‘opportunity to cure’ provision will expire when NMDOJ promulgates rules to enforce the Artificial Intelligence Act.

For a period of one year after NMDOJ promulgates rules, defendants suspected to have violated the Artificial Intelligence Act may make an affirmative defense if all of the following are true:

1. The developer or deployer discovers the violation as a result of adversarial testing, red teaming, or an internal review process;
2. The developer or deployer cures the violation within 7 days of the violation;
3. The developer or deployer is in compliance with the risk management provisions of the Artificial Intelligence Act;
4. The developer or deployer requires documentation from a developer to cure a violation; and
5. The developer demonstrates the violation was inadvertent, affected fewer than 100 consumers, and could not have been discovered through reasonable diligence.

At the end of the one-year period, violators of the Artificial Intelligence Act may no longer make an affirmative defense.
