AI Insights
Texas Takes a Shot at AI Regulation With ‘Responsible Artificial Intelligence Governance Act’
Quick Hits
- The Texas Responsible Artificial Intelligence Governance Act establishes a broad framework for the acceptable development, deployment, and oversight of AI systems in Texas, effective January 1, 2026.
- The act identifies certain acceptable and unacceptable uses of AI systems, creates the Texas Artificial Intelligence Council to oversee AI governance, and introduces a regulatory sandbox program for testing AI innovations.
- Enforcement authority is vested exclusively in the Texas Office of the Attorney General, with significant civil penalties for violations and structured opportunities to cure noncompliance.
Overview
The Texas Responsible Artificial Intelligence Governance Act marks a meaningful move by Texas to lead on AI regulation at the state level, aiming to balance corporate desires for AI innovation with consumer protection, anti-discrimination, and other ethical considerations. Its reach is broad: the act applies to any person or entity conducting business in Texas, producing products or services used by Texas residents, or developing or deploying AI systems within the state, though certain governmental and healthcare entities are exempted. And while the set of prohibited AI practices is narrow, the act applies to just about everyone doing business in the state, so it warrants broad awareness.
The act defines an “artificial intelligence system” as any machine-based system that infers from inputs to generate outputs—such as content, decisions, predictions, or recommendations—that can influence physical or virtual environments. Accordingly, the act should be understood as applying to AI systems that involve machine learning, natural language processing, perception, speech, and content generation. Unlike certain other state AI laws (including the since-vetoed Virginia H.B. 2094), which broadly address the risks associated with using such systems, the Texas law targets a narrow, explicitly delineated set of harmful uses of AI tools, with particular attention to those involving biometric information.
Prohibited Practices
The legislation outlines several prohibited AI practices that businesses operating in Texas must avoid. These prohibitions are largely uncontroversial and, although handled differently than in some other statutes, reflect concerns common to AI laws elsewhere, such as the fear that AI tools will be used to further discriminatory practices. In particular, the act prohibits the development or deployment of AI systems that are intended to:
- manipulate human behavior, particularly to incite or encourage self-harm, harm to others, or criminal activity;
- infringe upon constitutional rights or unlawfully discriminate against protected classes, such as race, color, national origin, sex, age, religion, or disability. Unique to the Texas law, however, disparate impact associated with the use of an AI system is not, on its own, considered sufficient evidence to establish an intent to discriminate; or
- create illegal content, including by producing or distributing AI-generated child sexual abuse material or deepfake content in violation of the Texas Penal Code.
Additional restrictions apply to certain uses of AI technologies by governmental entities. For example, governmental entities are prohibited from developing or using AI tools to uniquely identify individuals through biometric data or capture images without consent if doing so infringes on constitutional rights or violates other laws. Likewise, governmental agencies that use AI technologies to interact with consumers must generally provide a disclosure to each individual that they are interacting with AI.
Similar obligations apply to healthcare providers, which must provide a clear, written disclosure (such as through a hyperlink) when patients interact with AI systems used in their care or treatment.
Promoting Innovation: The Texas Artificial Intelligence Council and the Regulatory Sandbox Program
The act also establishes the Texas Artificial Intelligence Council, a seven-member body with varied expertise appointed by state leadership. The Council has a broad mandate that appears, in part, to be focused on expanding the perception of Texas as a hotbed for AI development. In particular, it is tasked with responsibilities such as:
- identifying legislative improvements and providing guidance and legislative recommendations on the use of AI systems;
- identifying laws and regulations that impede AI system innovation and proposing reforms; and
- evaluating potential regulatory capture risks, such as undue influence by technology companies or burdens disproportionately impacting smaller innovators.
Another unique component of the act is the establishment of a regulatory sandbox, which will be administered by the Texas Department of Information Resources in consultation with the Texas Artificial Intelligence Council. Approved participants can test AI systems for up to thirty-six months, giving them an avenue to pursue research, training, testing, and other pre-deployment activities needed to develop novel AI systems without the risk of regulatory enforcement in Texas. In this way, the state hopes to foster innovation while maintaining oversight. Notably, however, the administrative burden of participating is meaningful: participants must submit detailed applications and provide quarterly reports detailing their systems’ performance, risks, benefits, mitigation activities, and the stakeholder feedback they have received in relation to the AI system. Moreover, participation in the sandbox is not a “get out of jail free” card; participants remain subject to the act’s core consumer protection provisions.
The act does not include a private right of action; however, the Texas Office of the Attorney General has enforcement authority, including the power to investigate complaints, issue civil investigative demands, and seek civil penalties and injunctive relief. Penalties range from $10,000 to $12,000 per curable violation, $80,000 to $200,000 per incurable violation, and $2,000 to $40,000 per day for continuing violations. A sixty-day cure period is provided before enforcement action, and compliance with recognized AI risk management frameworks (such as the National Institute of Standards and Technology (NIST) AI Risk Management Framework) may establish a rebuttable presumption of reasonable care. In some circumstances, state agencies may impose additional sanctions, including license suspension or monetary penalties, upon recommendation by the attorney general.
Looking Forward
The Texas Responsible Artificial Intelligence Governance Act positions Texas as a leader in state-level AI regulation. It also represents a new approach to AI regulation in the United States, one that seeks to balance technological progress with consumer protections and common-sense restrictions. It remains to be seen whether the act has any teeth, given the very limited scope of its prohibitions and the potential difficulty of proving discriminatory intent under its AI antidiscrimination provisions. Businesses that operate in Texas will nevertheless want to remain mindful of the new law and consider whether to revise their current practices to align with the act, or to take advantage of some of the new opportunities it creates, such as participation in the AI regulatory sandbox.
Ogletree Deakins’s Cybersecurity and Privacy Practice Group will continue to monitor developments and will provide updates on the Cybersecurity and Privacy, Technology, and Texas blogs as additional information becomes available.
Benjamin W. Perry is a shareholder in Ogletree Deakins’s Nashville office, and he is co-chair of the firm’s Cybersecurity and Privacy Practice Group.
Lauren N. Watson is an associate in the Raleigh office of Ogletree Deakins and a member of the firm’s Cybersecurity and Privacy Practice Group.
James M. Childs, a law student currently participating in the summer associate program in the Raleigh office of Ogletree Deakins, contributed to this article.
AI Insights
5 Ways CFOs Can Upskill Their Staff in AI to Stay Competitive
Chief financial officers are recognizing the need to upskill their workforce to ensure their teams can effectively harness artificial intelligence (AI).
AI Insights
Real or AI: Band confirms use of artificial intelligence for its music on Spotify
The Velvet Sundown, a four-person band, or so it seems, has garnered a lot of attention on Spotify. It started posting music on the platform in early June and has since released two full albums, with a few more singles and another album coming soon. Naturally, listeners started to accuse the band of being an AI-generated project, which, as it now turns out, is true.
The band, or music project, called The Velvet Sundown has over a million monthly listeners on Spotify. That’s an impressive debut considering its first album, “Floating on Echoes,” hit the music streaming platform on June 4. Then, on June 19, the second album, “Dust and Silence,” was added to the library. Next week, on July 14, the third album, “Paper Sun Rebellion,” will be released. Since their debut, listeners have accused the band of being an AI-generated project, and now the owners of the project have updated the Spotify bio and called it a “synthetic music project guided by human creative direction, and composed, voiced, and visualized with the support of artificial intelligence.”
It goes on to state that this project challenges the boundaries of “authorship, identity, and the future of music itself in the age of AI.” The owners claim that the characters, stories, music, voices, and lyrics are “original creations generated with the assistance of artificial intelligence tools,” but it is unclear to what extent AI was involved in the development process.
The band art shows four individuals, suggesting they are the members of the project, but the images are likely AI-generated as well. Interestingly, Andrew Frelon (a pseudonym) initially claimed to be the owner of the AI band, but later confirmed that was untrue and that he pretended to run its Twitter account because he wanted to insert an “extra layer of weird into this story.”
As it stands now, The Velvet Sundown’s music is available on Spotify, with the new album releasing next week. Whether this unveiling causes a spike or a decline in monthly listeners remains to be seen.
I have always been passionate about gaming and technology, which drove me towards pursuing a career in the tech writing industry. I have spent over 7 years in the tech space and about a decade in content writing. I hope to continue to use this passion and generate informative, entertaining, and accurate content for readers.
AI Insights
How to Choose Between Deploying an AI Chatbot or Agent
In artificial intelligence, the trend du jour is AI agents, or algorithmic bots that can autonomously retrieve data and act on it.