The complete guide to TRAIGA, the Texas Responsible AI Governance Act
Texas’s 43-page AI governance proposal is the most-watched piece of AI legislation to emerge so far in 2025.
Texas Rep. Giovanni Capriglione officially filed his Texas Responsible AI Governance Act (TRAIGA, HB 1709) on Dec. 23, 2024. In the seven weeks since, the suburban Dallas Republican’s proposal has become the most-watched artificial intelligence bill in America—and it hasn’t even been scheduled for its first hearing yet.
Is the hype warranted?
In a word, yes.
There are generally two kinds of state AI bills. Comprehensive bills engage with many aspects of AI and often establish a framework for state regulation. Narrow bills focus on one specific aspect of AI, like transparency, disclosure, or deepfakes.
Last year saw the passage of one comprehensive AI bill, the Colorado Artificial Intelligence Act, while California’s comprehensive and controversial SB 1047 was passed by the legislature before being vetoed by Gov. Gavin Newsom.
This year there are two significant comprehensive AI proposals: Texas’s TRAIGA, and the New Mexico Artificial Intelligence Act. (We have coverage of the New Mexico bill here.)
This complete guide to TRAIGA is meant to be an objective overview of the bill, written in plain English. We will have further FAQs and more nuanced and opinionated coverage of the bill as it evolves in its journey through the Capitol in Austin.
Why the Texas bill matters
Texas is garnering attention because of its sizeable population and its significant tech sector—and, notably, because Republicans control both houses in Austin.
HB 1709 is a Republican proposal, and Rep. Capriglione spent considerable time and effort getting input from stakeholders around the state over the past two years. (The Texas legislature meets only in odd-numbered years.)
Rep. Giovanni Capriglione:
“By balancing innovation with public interest, we aim to create a blueprint for responsible AI use that other states and nations can follow. Texas has always been at the forefront of technological progress, and with this bill, we are ensuring that progress is ethical and beneficial to all Texans.”
In other words, Texas matters because it offers a chance to craft a comprehensive red-state model for regulating AI. If Rep. Capriglione can move a balanced Republican-written and Democrat-acceptable proposal through the Capitol, it could have wide national effect. In Congress, Sen. Ted Cruz (R-Texas), incoming chair of the Senate Commerce Committee, is keenly interested in artificial intelligence and will likely be watching TRAIGA with an eye toward future federal legislation.
What is covered by the bill: high-risk AI
The 43-page bill covers only “high-risk artificial intelligence systems.” A high-risk AI system is one that plays a substantive role in a consequential decision.
Consequential decisions involve a material effect on a consumer’s access to aspects of a criminal case; education enrollment or opportunity; a financial service; an essential government service; residential utility services; health care services; housing; insurance; legal services; transportation; elections; or other constitutionally protected services or products.
For purposes of this guide to TRAIGA, all AI systems referred to hereafter should be assumed to be high-risk AI systems.
Who is covered by the bill: the three D’s
The bill identifies three categories of covered parties: AI developers, distributors, and deployers.
Developers like OpenAI or Meta create the original AI model.
Distributors are intermediary sellers that, for instance, package an OpenAI system for sale to and use by a commercial customer.
Deployers are people or companies that interact with end users (consumers) using an AI system.
If a deployer substantially modifies an AI system, they may then be legally considered the developer.
The bill contains exemptions for small businesses as defined by the federal Small Business Administration, and for testing and trialing innovative AI systems through a new state AI sandbox program (see details on that below).
Effective date of the Act
If passed by the legislature and signed by Gov. Greg Abbott this session, the Texas Responsible AI Governance Act would take effect on Sept. 1, 2025.
AI developer duties
Under the bill, AI developers would need to:
“Use reasonable care to protect consumers” from known or reasonably foreseeable risks of algorithmic discrimination.
Provide a High-Risk Report to deployers that includes information about how the AI system should or should not be used; known limitations of the system; and a high-level summary of the data used to train the model.
Alert deployers if an AI system is substantially modified, and keep detailed records of synthetic data used to develop the AI system.
AI distributor and deployer duties
Distributors and deployers would need to:
Use reasonable care to protect consumers from known or foreseeable risks of algorithmic discrimination.
Immediately withdraw, disable, or recall an AI system not in compliance with the developer requirements outlined above, and alert both deployers and developers.
Annual impact assessments
HB 1709 would require AI deployers to complete an impact assessment for every AI system deployed, both annually and within 90 days after any substantial modification of the system.
The assessment may be conducted by the deployer or by a third-party contractor and must include:
A statement disclosing the purpose, intended use, and benefits afforded by the system.
An analysis of whether the system poses known or foreseeable risks of algorithmic discrimination, the nature of the risk, and steps taken to mitigate the risk.
A description of the categories of data the system processes as inputs, and the expected outputs.
An overview of any data used to customize the AI system.
Disclosure of AI use to consumers
AI developers and deployers would be required to disclose, clearly and conspicuously, to the consumer:
The fact that the consumer is interacting with an AI system.
The purpose of the AI system.
The fact that the AI system may or will make a consequential decision affecting the consumer, and the nature of that decision.
The factors used in making a consequential decision.
Contact information for the deployer.
A declaration of the consumer’s rights under TRAIGA.
Consumers would also have the right to appeal a consequential decision made by an AI system, if that decision has an adverse impact on their health, safety, or fundamental rights. The deployer would be required to provide a clear and meaningful explanation of the role of the AI system in the decision-making procedure.
Risk mitigation policy required
HB 1709 requires developers or deployers to implement a risk management policy to govern the development or deployment of the AI system.
Duties of social media and digital platforms
Under HB 1709, social media platforms and digital service providers (online stores like iTunes, or streaming platforms like Spotify) must post terms of service that prohibit the deployment of an AI system that violates the rules set forth in the Act.
Reporting requirements
HB 1709 requires AI deployers to notify the appropriate state agency, in writing, if the deployer discovers that its AI system has caused algorithmic discrimination against an individual or group.
The deployer must cease operation of the offending system as soon as technically feasible, and has 10 days to provide the written notice to the appropriate state agency.
Subliminal or deceptive techniques not allowed
Section 551.051 of the bill, quoted in full:
“An artificial intelligence system shall not be developed or deployed that uses subliminal techniques beyond a person's consciousness, or purposefully manipulative or deceptive techniques, with the objective or the effect of materially distorting the behavior of a person or a group of persons by appreciably impairing their ability to make an informed decision, thereby causing a person to make a decision that the person would not have otherwise made, in a manner that causes or is likely to cause significant harm to that person or another person or group of persons.”
‘Social scoring’ is prohibited
HB 1709 bans the development and deployment of AI “for the evaluation or classification of natural persons…based on their social behavior” or personal characteristics.
This is a pre-emptive prohibition on social scoring, which a 2022 MIT Technology Review article describes as “a practice in which authoritarian governments—specifically China—rank people’s trustworthiness and punish them for undesirable behaviors, such as stealing or not paying back loans. Essentially, it’s seen as a dystopian superscore assigned to each citizen.”
No use of biometric identifiers, either
The bill’s Section 551.053 prohibits the deployment of an AI system that was developed with biometric identifiers of individuals “and the targeted or untargeted gathering of images or other media from the internet,” for the purpose of identifying a specific individual.
For example, it would be illegal to deploy an AI system trained on publicly available Facebook or Instagram posts that contain photos of individuals, if that AI system would be used to identify an individual.
Prohibiting explicit AI deepfakes and CSAM
The bill prohibits the development or deployment of an AI system that “produces, assists, or aids in producing unlawful visual material” in violation of Texas law (Section 43.26) regarding the depiction of sexual imagery involving a minor, or an unlawful deepfake video (under Texas Penal Code, Section 21.165).
Enforcement provisions of TRAIGA
The Office of the Texas Attorney General would investigate potential violations of HB 1709 and enforce the Act. The bill does not create a private right of action.
The AG’s office would be required to create and post a public online mechanism for complaint submissions.
The bill does contain a right to cure: The Attorney General must notify a developer, distributor, or deployer 30 days prior to bringing action. The respondent then has 30 days to cure the violation, or face an injunction and/or civil penalties.
Uncured violations not related to an unacceptable use are subject to a fine of $50,000 to $100,000 per violation. Violations related to unacceptable uses are subject to a fine of $80,000 to $200,000 per violation. A developer or deployer that continues to operate after being found in violation is subject to a fine of $12,000 to $40,000 for each day it continues to operate.
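To give a rough sense of how those ranges could stack, here is a hypothetical back-of-the-envelope calculation in Python. The scenario below (two uncured violations, one unacceptable-use violation, 10 days of continued operation) is invented purely for illustration; only the dollar ranges come from the bill.

```python
# Hypothetical penalty-exposure sketch under HB 1709's stated ranges.
# The violation and day counts below are invented for illustration;
# only the dollar ranges come from the bill.

UNCURED_RANGE = (50_000, 100_000)        # per uncured violation (not an unacceptable use)
UNACCEPTABLE_RANGE = (80_000, 200_000)   # per violation tied to an unacceptable use
CONTINUED_RANGE = (12_000, 40_000)       # per day of continued operation after a finding

def exposure(uncured: int, unacceptable: int, days_operating: int) -> tuple[int, int]:
    """Return the (minimum, maximum) total fines for a hypothetical respondent."""
    low = (uncured * UNCURED_RANGE[0]
           + unacceptable * UNACCEPTABLE_RANGE[0]
           + days_operating * CONTINUED_RANGE[0])
    high = (uncured * UNCURED_RANGE[1]
            + unacceptable * UNACCEPTABLE_RANGE[1]
            + days_operating * CONTINUED_RANGE[1])
    return low, high

low, high = exposure(uncured=2, unacceptable=1, days_operating=10)
print(f"${low:,} to ${high:,}")  # $300,000 to $800,000
```

Even in that modest invented scenario, the daily fines for continued operation quickly dwarf the one-time penalties, which is presumably the point: the bill is built to make ignoring an adverse finding the most expensive option.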
Creation of a state AI council
HB 1709 would create an Artificial Intelligence Council, with rulemaking authority, administratively attached to the Governor’s Office. The Council’s purpose is to:
Ensure AI systems are ethical and in the public’s best interest and do not harm public safety or individual freedom.
Identify existing laws and regulations that impede innovation in AI development, and recommend appropriate reforms.
Analyze opportunities to improve state government efficiency through the use of AI systems.
Investigate potential instances of regulatory capture by tech companies and the censoring of competitors or smaller innovators.
Creation of an ‘AI sandbox program’
In software development, a ‘sandbox’ is a controlled environment separated from production systems. It allows developers to test applications without affecting anything outside the sandbox.
HB 1709 would task the Texas Department of Information Resources with creating an AI Regulatory Sandbox Program. The program would provide clear guidelines for AI developers to test systems while temporarily exempt from certain regulatory requirements. The idea is to promote innovation by giving developers room to experiment in a safe space before launching an AI system that would be subject to the rules of TRAIGA.
Participants would be allowed to work within the sandbox program for up to 36 months.
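For readers unfamiliar with the software concept the program borrows its name from, here is a minimal, hypothetical sketch of how a sandboxed environment isolates experimental code from production. Every name in it is invented for illustration; nothing here comes from the bill or from the Department of Information Resources.

```python
# Minimal, hypothetical sketch of the software "sandbox" concept:
# experimental code runs against isolated resources and synthetic data,
# so mistakes cannot reach real users or production systems.
# All names here are invented for illustration.

from dataclasses import dataclass

@dataclass(frozen=True)
class Environment:
    name: str
    database_url: str
    can_reach_real_users: bool

PRODUCTION = Environment(
    name="production",
    database_url="postgres://prod-db/records",
    can_reach_real_users=True,
)

SANDBOX = Environment(
    name="sandbox",
    database_url="postgres://sandbox-db/records",  # isolated copy, not the real database
    can_reach_real_users=False,  # decisions never leave the test environment
)

def run_model(env: Environment, applicant: dict) -> str:
    """Score an applicant; in the sandbox, the result stays in the test environment."""
    decision = "approve" if applicant.get("score", 0) >= 600 else "deny"
    if env.can_reach_real_users:
        print(f"[{env.name}] decision sent to applicant: {decision}")
    else:
        print(f"[{env.name}] decision logged for review only: {decision}")
    return decision

run_model(SANDBOX, {"score": 580})  # safe to experiment: no consumer is affected
```

The regulatory version works the same way in spirit: participants operate inside a bounded program, with relaxed rules and a 36-month time limit, before their systems face the full weight of TRAIGA.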
adding AI to existing law on the personal data rights of consumers
HB 1709 would slightly tweak the Texas Business & Commerce Code (Sec. 451) to add “artificial intelligence systems” into existing law regarding a consumer’s right to know if personal data is or will be used in any AI system, and for what purpose.
Consumers would have the right to opt out of the sale of personal data for use in AI systems prior to that data being collected.
Creation of a ‘Texas AI workforce development’ grant program
HB 1709 would create an AI Workforce Development program that would offer grants to high schools and community colleges to implement technical education programs focused on AI skill development and job readiness.
The program would also partner with the AI industry to offer workforce development programs for workers looking to enter the industry.