The revised guide to TRAIGA 2.0, the Texas Responsible AI Governance Act
Texas’s 31-page AI governance proposal is the most-watched piece of AI legislation to emerge so far in 2025.
Editor’s note: Rep. Capriglione filed a substantially revised version of TRAIGA, the Texas Responsible AI Governance Act, on Friday, March 14. This overview of the bill has been updated to reflect the revised March 14 version of the proposal.
The full text of the revised TRAIGA (now HB 149) is available here.
Texas Rep. Giovanni Capriglione officially filed his Texas Responsible AI Governance Act (TRAIGA) on Dec. 23, 2024. In the months since, the suburban Dallas Republican’s proposal has become the most-watched artificial intelligence bill in America.
Is the hype warranted?
In a word, yes.
There are generally two kinds of state AI bills. Comprehensive bills engage with many aspects of AI and often establish a framework for state regulation. Narrow bills focus on one specific aspect of AI, like transparency, disclosure, or deepfakes.
Last year saw the passage of one comprehensive AI bill, the Colorado Artificial Intelligence Act, while California’s comprehensive and controversial SB 1047 was passed by the legislature before being vetoed by Gov. Gavin Newsom.
This year there are two significant comprehensive AI proposals: Texas’s TRAIGA and the New Mexico Artificial Intelligence Act. (We have coverage of the New Mexico bill here.)
This complete guide to TRAIGA is meant to be an objective overview of the bill, written in plain English.
Why the Texas bill matters
Texas is garnering attention because of its sizeable population and its significant tech sector—and, notably, because Republicans control both houses in Austin.
TRAIGA is a Republican proposal, and Rep. Capriglione spent considerable time and effort getting input from stakeholders around the state over the past two years. (The Texas legislature meets only in odd-numbered years.)
Rep. Giovanni Capriglione:
“By balancing innovation with public interest, we aim to create a blueprint for responsible AI use that other states and nations can follow. Texas has always been at the forefront of technological progress, and with this bill, we are ensuring that progress is ethical and beneficial to all Texans.”
In other words, Texas matters because it offers a chance to craft a comprehensive red-state model for regulating AI. If Rep. Capriglione can move a balanced Republican-written and Democrat-acceptable proposal through the capitol, it could have wide national effect. In Congress, Sen. Ted Cruz (R-Texas), incoming chair of the Senate Commerce Committee, is keenly interested in artificial intelligence and will likely be watching TRAIGA with an eye toward future federal legislation.
Tech lobby pushback results in revised bill
Rep. Capriglione’s original bill, filed in late 2024 as HB 1709, was crafted in the pre-Trump era. That environment was marked by bipartisan concern over the fast-moving, no-holds-barred nature of AI development and deployment.
That changed on Jan. 20, 2025, when President Trump and Elon Musk swept into power. Trump’s newfound alliance with corporate tech giants like Meta, Google, Amazon, and X has resulted in a new determination on the part of the tech lobby to quash any and all rules proposed for AI.
That blowback hit TRAIGA in February as Capriglione’s fellow Republicans felt the weather change. The bill’s author adjusted accordingly, removing many of TRAIGA’s most substantial requirements. The bill was re-filed on March 14 as a slimmed-down HB 149.
What the bill covers: Mostly government-deployed AI
The original 43-page bill covered “high-risk artificial intelligence systems.” A high-risk AI system is one that plays a substantive role in a consequential life-affecting decision.
The new 31-page bill does not cover or mention high-risk AI systems.
Instead, TRAIGA 2.0 focuses mainly on AI systems developed and/or deployed by government agencies.
Disclosure of AI use (described more fully below) is required only of government-deployed AI systems that interact with consumers, not of commercial AI systems.
The “prohibited uses” section of the new bill does cover both commercial and government AI systems.
The original bill contained an exemption for small businesses. That exemption is not included in the revised version.
Effective date of the Act
If passed by the legislature and signed by Gov. Greg Abbott this session, the Texas Responsible AI Governance Act would take effect on Sept. 1, 2025.
AI developer & deployer prohibitions
The original bill’s duty of care provisions, requiring AI developers to exercise reasonable care to protect consumers from foreseeable risks of algorithmic discrimination, have been scrapped.
The new bill replaces those substantial protections with a brief section (551.058) that prohibits the development or deployment of an AI system “with the intent to unlawfully discriminate against a protected class in violation of the laws of this state or federal law. Disparate impact alone is not sufficient to show intent to discriminate.”
New prohibitions on manipulation
The new version of TRAIGA includes prohibitions against outright “manipulation of human behavior to incite harm or criminality.”
The new bill states: “An artificial intelligence system shall not be intentionally developed or deployed to encourage a person to commit physical self-harm, including suicide; harm another person; or engage in criminal activity.”
It’s important to note the phrase “intentionally developed.” In the case of AI, this means that a developer or deployer would be in violation of the Act only if it can be shown that they intended the AI system to cause harm. This would be very hard to prove.
Also prohibited in the new bill: Deceptive trade practices intended to manipulate human behavior to circumvent informed decision-making. Again, this section only prohibits the “intentional” use of deceptive trade practices.
‘Social scoring’ by government AI is prohibited
The original version of the bill banned the development and deployment of AI “for the evaluation or classification of natural persons…based on their social behavior” or personal characteristics—for all AI systems.
The new version of the bill prohibits only the use of government AI for social scoring. It does not apply to AI systems developed for commercial purposes.
A 2022 MIT Technology Review article describes social scoring as “a practice in which authoritarian governments—specifically China—rank people’s trustworthiness and punish them for undesirable behaviors, such as stealing or not paying back loans. Essentially, it’s seen as a dystopian superscore assigned to each citizen.”
No use of biometric identifiers, either
The original bill’s Section 551.053 prohibited the deployment of an AI system that was developed with biometric identifiers of individuals “and the targeted or untargeted gathering of images or other media from the internet,” for the purpose of identifying a specific individual.
For example, it would be illegal to deploy an AI system trained on publicly available Facebook or Instagram posts that contain photos of individuals, if that AI system would be used to identify an individual.
In the revised bill, the use of AI systems for biometric identification is prohibited for government agencies only. The prohibition applies only to systems “designed for government entities to constrain civil liberties, not any artificial intelligence system developed or deployed for commercial purposes or any other government entity purpose.”
New prohibition on political viewpoint discrimination
The 2.0 version of TRAIGA includes a new section prohibiting the development or deployment of an AI system “that intentionally results in political viewpoint discrimination or otherwise intentionally infringes upon a person’s freedom of association or ability to freely express the person’s beliefs or opinions.”
An interactive computer service may not, through the use of an AI system, “block, ban, remove, de-platform, demonetize, debank, de-boost, restrict, or otherwise discriminate against a user based on the user’s political speech; or modify or manipulate a user’s content or posting for the purpose of censoring the user’s political speech.”
The new section on political speech does not apply to speech that is illegal under federal or state law, or “constitutes a credible threat of violence or incitement to imminent lawless action; contains obscene material,” contains “unlawful deep fake video or images,” or “violates intellectual property rights under applicable law.”
No annual impact assessments
The original version of the bill required AI deployers to complete an impact assessment for every AI system deployed, both annually and within 90 days after any substantial modification of the system.
The new version of the bill does not require impact assessments.
Disclosure of AI use by government to consumers
Government AI developers and deployers would be required to disclose, clearly and conspicuously, specific information to the consumer. Commercial (non-governmental) AI developers and deployers would not be subject to this requirement.
Under the new bill, that information is limited to a single item:
The fact that the consumer is interacting with an AI system.
These provisions no longer apply
The previous version of TRAIGA included a requirement to inform consumers of this information as well:
The purpose of the AI system.
The fact that the AI system may or will make a consequential decision affecting the consumer, and the nature of that decision.
The factors used in making a consequential decision.
Contact information for the deployer.
A declaration of the consumer’s rights under TRAIGA.
All of those original provisions have been deleted from the revised version of TRAIGA. In the new bill, government agencies deploying AI systems need only disclose that fact to consumers. Once that has been done, the disclosure duty under TRAIGA has been met.
The right to appeal an AI decision
The new version of TRAIGA gives consumers the right to appeal a decision made by an artificial intelligence system that has an adverse impact on their health, welfare, safety, or fundamental rights.
Consumers also have the right to obtain from the deployer clear and meaningful explanations of the role of the AI system in the decision-making procedure, and the main elements of the decision taken.
Surprisingly, this section on consumer rights (551.058) in the new bill applies to both governmental and non-governmental (commercial) AI developers and deployers.
No risk management policy required
The original version of TRAIGA required developers or deployers to implement a risk management policy to govern the development or deployment of the AI system.
The new version of the bill does not require a risk management policy.
Duties of social media and digital platforms
In the original bill, social media platforms and digital service providers (online stores like iTunes, or streaming platforms like Spotify) would have been required to post terms of service that prohibit the deployment of an AI system that violates the rules set forth in the Act.
There are no requirements for social media and digital platforms in the new version of the bill.
No reporting requirements
The original version of TRAIGA required AI deployers to notify, in writing, the appropriate state agency if the deployer discovers that its AI system has caused algorithmic discrimination of an individual or group.
There are no such requirements in the new version of the bill.
Prohibition of AI ‘solely intended’ to produce explicit deepfakes and CSAM
The bill prohibits the development or deployment of an AI system “with the sole intent of producing, assisting or aiding in producing, or distributing unlawful visual material” in violation of Texas law (Section 43.26) regarding the depiction of sexual imagery involving a minor, or an unlawful deepfake video (under Texas Penal Code, Section 21.165).
The original bill banned AI systems that produced or distributed this material. The new version of the bill prohibits only AI systems that have been developed or deployed “with the sole intent” of producing or distributing this material.
Enforcement provisions of TRAIGA
Potential violations of TRAIGA would be investigated and enforced by the Office of the Texas Attorney General. There is no private right of action created by the bill.
The AG’s office would be required to create and post a public online mechanism for complaint submissions.
The bill does contain a right to cure: The Attorney General must notify a developer, distributor, or deployer 60 days prior to bringing action. The respondent then has 60 days to cure the violation, or face an injunction and/or civil penalties.
Uncured violations not related to an unacceptable use are subject to a fine of $10,000 to $12,000 per violation. Violations related to unacceptable uses are subject to a fine of $80,000 to $200,000. A developer or deployer that continues to operate after being found in violation is subject to a fine of $2,000 to $40,000 per day of continued operation.
Creation of a state AI council
TRAIGA would create the Texas Artificial Intelligence Council, with rulemaking authority, administratively attached to the Texas Department of Information Resources. The Council’s purpose is to:
Ensure AI systems are ethical and in the public’s best interest and do not harm public safety or individual freedom.
Identify existing laws and regulations that impede innovation in AI development, and recommend appropriate reforms.
Analyze opportunities to improve state government efficiency through the use of AI systems.
Investigate potential instances of regulatory capture by tech companies and the censoring of competitors or smaller innovators.
Creation of an ‘AI sandbox program’
In tech development, a ‘sandbox’ is a controlled environment separated from an existing system. It allows developers to test applications without affecting the rest of the system.
TRAIGA would task the Texas Department of Information Resources with creating an AI Regulatory Sandbox Program. The program would provide clear guidelines for AI developers to test systems while temporarily exempt from certain regulatory requirements. The idea is to promote innovation by giving developers room to experiment in a safe space before launching an AI system that would be subject to the rules of TRAIGA.
Participants would be allowed to work within the sandbox program for up to 36 months.
New bill version: No change to existing law on the personal data rights of consumers
In its original form, TRAIGA would have slightly tweaked the Texas Business & Commerce Code (Sec. 451) to add “artificial intelligence systems” into existing law regarding a consumer’s right to know if personal data is or will be used in any AI system, and for what purpose.
Consumers would have had the right to opt out of the sale of personal data for use in AI systems before that data was collected.
That provision is no longer in the revised version of the bill.