
AI Insights

Texas Takes a Shot at AI Regulation With ‘Responsible Artificial Intelligence Governance Act’



Quick Hits

  • The Texas Responsible Artificial Intelligence Governance Act establishes a broad framework for the acceptable development, deployment, and oversight of AI systems in Texas, effective January 1, 2026.
  • The act identifies certain acceptable and unacceptable uses of AI systems, creates the Texas Artificial Intelligence Council to oversee AI governance, and introduces a regulatory sandbox program for testing AI innovations.
  • Enforcement authority is vested exclusively in the Texas Office of the Attorney General, with significant civil penalties for violations and structured opportunities to cure noncompliance.

Overview

The Texas Responsible Artificial Intelligence Governance Act marks a meaningful move by Texas to lead on AI regulation at the state level, aiming to balance corporate desires for AI innovation with consumer protection, anti-discrimination, and other ethical considerations. Its reach is broad: the act applies to any person or entity conducting business in Texas, producing products or services used by Texas residents, or developing or deploying AI systems within the state, though certain governmental and healthcare entities are exempted. And while the set of prohibited AI practices is narrow, the act's coverage is sweeping enough that nearly every business touching Texas should take note.

The act defines an “artificial intelligence system” as any machine-based system that infers from inputs to generate outputs—such as content, decisions, predictions, or recommendations—that can influence physical or virtual environments. Accordingly, the act should be understood as applying to AI systems that involve machine learning, natural language processing, perception, speech, and content generation. Unlike certain other state AI laws (including the since-vetoed Virginia H.B. 2094), which focus broadly on the risks of using such systems, the Texas law focuses on a narrow, explicitly delineated set of harmful uses of AI tools, with a special focus on those that involve biometric information.

Prohibited Practices

The legislation outlines several prohibited AI practices that businesses operating in Texas are required to avoid. These prohibited practices are largely uncontroversial and, although framed differently from some other laws, reflect concerns common to other AI legislation, such as the frequently voiced fear that AI tools will be used to further discriminatory practices. In particular, the act prohibits the development or deployment of AI systems that are intended to:

  • manipulate human behavior, particularly to incite or encourage self-harm, harm to others, or criminal activity;
  • infringe upon constitutional rights or unlawfully discriminate against protected classes, such as race, color, national origin, sex, age, religion, or disability. Unique to the Texas law, however, disparate impact associated with the use of an AI system alone is not considered sufficient evidence to establish an intent to discriminate; or
  • create illegal content, including by producing or distributing AI-generated child sexual abuse material or deepfake content in violation of the Texas Penal Code.

Additional restrictions apply to certain uses of AI technologies by governmental entities. For example, governmental entities are prohibited from developing or using AI tools to uniquely identify individuals through biometric data or capture images without consent if doing so infringes on constitutional rights or violates other laws. Likewise, governmental agencies that use AI technologies to interact with consumers must generally provide a disclosure to each individual that they are interacting with AI. 

Similar obligations apply to healthcare providers, which must provide a clear, written disclosure (such as through a hyperlink) when patients interact with AI systems used in their care or treatment.

Promoting Innovation: The Texas Artificial Intelligence Council and the Regulatory Sandbox Program

The act also establishes the Texas Artificial Intelligence Council, a seven-member body with varied expertise appointed by state leadership. The Council has a broad mandate that appears in part to be focused on expanding the perception of Texas as a hotbed for AI development. In particular, it is tasked with doing things such as:

  • identifying legislative improvements and providing guidance and legislative recommendations on the use of AI systems;
  • identifying laws and regulations that impede AI system innovation and proposing reforms; and
  • evaluating potential regulatory capture risks, such as undue influence by technology companies or burdens disproportionately impacting smaller innovators.

Another unique component of the act is a regulatory sandbox, administered by the Texas Department of Information Resources in consultation with the Texas Artificial Intelligence Council. Approved participants can test AI systems for up to thirty-six months, giving them an avenue to pursue research, training, testing, or other pre-deployment activities required to develop novel AI systems without the risk of regulatory enforcement in Texas. In this way, the state hopes to foster innovation while maintaining oversight. The administrative burden of participating is meaningful, however: participants must submit detailed applications and provide quarterly reports detailing their systems’ performance, risks, benefits, mitigation activities, and stakeholder feedback they have received in relation to the AI system. Moreover, participation in the sandbox is not a “get out of jail free” card; participants remain subject to the act’s core consumer protection provisions.

Enforcement and Penalties

The act does not include a private right of action; however, the Texas Office of the Attorney General has enforcement authority, including the power to investigate complaints, issue civil investigative demands, and seek civil penalties and injunctive relief. Penalties range from $10,000 to $12,000 per curable violation; $80,000 to $200,000 per uncurable violation; and $2,000 to $40,000 per day for continuing violations. A sixty-day cure period is provided before enforcement action, and compliance with recognized AI risk management frameworks (such as the National Institute of Standards and Technology (NIST) AI Risk Management Framework) may establish a rebuttable presumption of reasonable care. And, in some circumstances, state agencies may impose additional sanctions, including license suspension or monetary penalties, upon recommendation by the attorney general.

Looking Forward

The Texas Responsible Artificial Intelligence Governance Act positions Texas as a leader in state-level AI regulation. It also represents a new approach to AI regulation in the United States that seeks to balance technological progress with consumer protections and common-sense restrictions. While it remains to be seen whether the act has any teeth, given the very limited scope of its prohibitions and potential difficulties in proving discriminatory intent under its AI antidiscrimination provisions, businesses that operate in Texas will nevertheless want to remain mindful of this new law and consider whether to revise their current practices to align with the act or to take advantage of some of the new opportunities arising under it, such as participation in the AI regulatory sandbox.

Ogletree Deakins’s Cybersecurity and Privacy Practice Group will continue to monitor developments and will provide updates on the Cybersecurity and Privacy, Technology, and Texas blogs as additional information becomes available.

Benjamin W. Perry is a shareholder in Ogletree Deakins’s Nashville office, and he is co-chair of the firm’s Cybersecurity and Privacy Practice Group.

Lauren N. Watson is an associate in the Raleigh office of Ogletree Deakins and a member of the firm’s Cybersecurity and Privacy Practice Group.

James M. Childs, a law student currently participating in the summer associate program in the Raleigh office of Ogletree Deakins, contributed to this article.



5 Ways CFOs Can Upskill Their Staff in AI to Stay Competitive



Chief financial officers are recognizing the need to upskill their workforce to ensure their teams can effectively harness artificial intelligence (AI).

According to a June 2025 PYMNTS Intelligence report, “The Agentic Trust Gap: Enterprise CFOs Push Pause on Agentic AI,” all the CFOs surveyed said generative AI has increased the need for more analytically skilled workers. That’s up from 60% in March 2024.

“The shift in the past year reflects growing hands-on use and a rising urgency to close capability gaps,” according to the report.

The CFOs also said the overall mix of skills required across the business has changed. They need people who have AI-ready skills: “CFOs increasingly need talent that can evaluate, interpret and act on machine-generated output,” the report said.

The CFO role itself is changing. According to The CFO, 27% of job listings for chief financial officers now call for AI expertise.

Notably, the upskill challenge is not limited to IT. The need for upskilling in AI affects all departments, including finance, operations and compliance. By taking a proactive approach to skill development, CFOs can position their teams to work alongside AI rather than compete with it.

The goal is to cultivate professionals who can critically assess AI output, manage risks, and use the tools to generate business value.

Among CEOs, the impact is just as pronounced. According to a Cisco study, 74% fear that gaps in knowledge will hinder decisions in the boardroom, and 58% fear they will stifle growth.

Moreover, 73% of CEOs fear losing ground to rivals because of IT knowledge or infrastructure gaps. Among the barriers holding CEOs back are skills shortages.

Their game plan: investing in knowledge and skills, upgrading infrastructure and enhancing security.

Here are some ways companies can upskill their workforce for AI:

Ensure Buy-in by the C-Suite

  • With leadership from the top, AI learning initiatives will be prioritized instead of falling by the wayside.
  • Allay any employee concerns about artificial intelligence replacing them so they will embrace the use and management of AI.

Build AI Literacy Across the Company

  • Invest in AI training programs: Offer structured training tailored to finance to help staff understand both the capabilities and limitations of AI models, according to CFO.university.
  • Promote AI fluency: Focus on both technical skills, such as how to use AI tools, and conceptual fluency of AI, such as understanding where AI can add value and its ethical implications, according to the CFO’s AI Survival Guide.
  • Create AI champions: Identify and develop ‘AI champions’ within the team who can bridge the gap between finance and technology, driving adoption and supporting peers, according to Upflow.

Integrate AI Into Everyday Workflows

  • Start with small, focused projects such as expense management to demonstrate value and build confidence.
  • Foster a culture where staff can explore AI tools, automate repetitive tasks, and share learnings openly.

Encourage Continuous Learning

  • Make learning about AI a continuous process, not a one-time event, and encourage staff to stay updated on AI trends and tools relevant to finance.
  • Promote collaboration between finance, IT, and other departments to maximize AI’s impact and share best practices.

Tap External Resources

  • Partner with universities and providers: Tap into external courses, certifications, and workshops to supplement internal training.
  • Consider tapping free or low-cost resources, such as online courses and AI literacy programs offered by tech companies (such as Grow with Google). These tools can provide foundational understanding and help employees build confidence in using AI responsibly.


Real or AI: Band confirms use of artificial intelligence for its music on Spotify



The Velvet Sundown, a four-person band, or so it seems, has garnered a lot of attention on Spotify. It started posting music on the platform in early June and has since released two full albums, with a few more singles and another album coming soon. Naturally, listeners began to accuse the band of being an AI-generated project, which, as it now turns out, is true.

The band, or music project, called The Velvet Sundown has over a million monthly listeners on Spotify. That’s an impressive debut considering its first album, “Floating on Echoes,” hit the music streaming platform on June 4. Then, on June 19, the second album, “Dust and Silence,” was added to the library. Next week, July 14, will mark the release of the third album, “Paper Sun Rebellion.” Since the debut, listeners have accused the band of being an AI-generated project, and the owners of the project have now updated the Spotify bio to call it a “synthetic music project guided by human creative direction, and composed, voiced, and visualized with the support of artificial intelligence.”

It goes on to state that this project challenges the boundaries of “authorship, identity, and the future of music itself in the age of AI.” The owners claim that the characters, stories, music, voices, and lyrics are “original creations generated with the assistance of artificial intelligence tools,” but it is unclear to what extent AI was involved in the development process.

The band art shows four individuals, suggesting they are the members of the project, but the images are likely AI-generated as well. Interestingly, Andrew Frelon (a pseudonym) initially claimed to be the owner of the AI band, but later confirmed that was untrue and that he had pretended to run its Twitter account because he wanted to insert an “extra layer of weird” into the story.

As it stands now, The Velvet Sundown’s music is available on Spotify, with the new album releasing next week. Whether this unveiling causes a spike or a decline in monthly listeners remains to be seen.




How to Choose Between Deploying an AI Chatbot or Agent



In artificial intelligence, the trend du jour is AI agents, or algorithmic bots that can autonomously retrieve data and act on it.

But how are AI agents different from AI chatbots, and why should businesses care?

Understanding how they differ can help businesses choose the right solution for the right job and avoid underusing or overcomplicating their AI investments.

An AI chatbot or assistant is a program that uses natural language processing to interact with users in a conversational way. Think of ChatGPT. It can answer questions, guide users and simulate dialogue.

Chatbots only react to prompts. They don’t act on their own or carry out multistep goals. They are helpful and conversational but ultimately limited to what they’re asked.

An AI agent goes a step further. Like a chatbot, it can understand natural language and interact conversationally. But it also has autonomy and can complete tasks. It is proactive.

Instead of just replying, an AI agent can make decisions, take actions across systems, plan and carry out multistep processes, and learn from past interactions or external data.

For example, imagine a travel platform. An AI chatbot might help a user plan their travel itinerary. An AI agent, on the other hand, could do more, such as:

  • Understand the request, such as booking a flight to Los Angeles.
  • Search multiple airline sites.
  • Compare flight options based on user preferences.
  • Book the flight.
  • Send a confirmation email.

All of this could happen without the user needing to click through a series of links or speak to a human agent. AI agents can be embedded in customer service, HR systems, sales platforms and the like.
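The travel-booking workflow above can be sketched in a few lines of code. This is a minimal, illustrative toy, not a real booking system: the flight data, helper functions, and preference logic are all hypothetical stand-ins for what would, in practice, be calls to airline APIs, a booking service, and an email service.

```python
from dataclasses import dataclass

@dataclass
class Flight:
    airline: str
    price: float
    nonstop: bool

def search_flights(destination: str) -> list[Flight]:
    """Stand-in for querying multiple airline sites (hardcoded here)."""
    return [
        Flight("AirA", 320.0, nonstop=False),
        Flight("AirB", 410.0, nonstop=True),
        Flight("AirC", 275.0, nonstop=False),
    ]

def choose_flight(options: list[Flight], prefer_nonstop: bool) -> Flight:
    """Compare options against user preferences: nonstop first, then price."""
    if prefer_nonstop:
        nonstop = [f for f in options if f.nonstop]
        if nonstop:
            options = nonstop
    return min(options, key=lambda f: f.price)

def book_and_confirm(destination: str, prefer_nonstop: bool = True) -> str:
    """Run the full agent loop: search, compare, book, confirm."""
    options = search_flights(destination)
    best = choose_flight(options, prefer_nonstop)
    # A real agent would call booking and email services at this step.
    return f"Booked {best.airline} to {destination} for ${best.price:.0f}"

print(book_and_confirm("Los Angeles"))
# prints: Booked AirB to Los Angeles for $410
```

The point of the sketch is the shape of the loop: an agent chains search, comparison, action, and confirmation steps on its own, whereas a chatbot would stop after describing the options to the user.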


Why Businesses Should Care

Knowing the difference can help a business plan more strategically. AI chatbots require less inference compute than AI agents and are therefore more cost-effective to run. Moreover, businesses can use AI chatbots and AI agents for very different outcomes.

AI chatbot use cases include the following:

  • Customer service
  • Data retrieval
  • Planning and analysis
  • Basic IT support
  • Conversation
  • Writing documents
  • Code generation

AI agent use cases include the following:

  • Automated checkout
  • Automated content curation
  • Travel and reservation execution tasks
  • Shopping and payment processing

AI chatbots and AI agents both use natural language and large language models, but their functions are different. Chatbots are answer machines while agents are action bots.

For businesses looking to improve how they serve customers, streamline operations or support employees, AI agents offer a new level of power and flexibility. Knowing when and how to use each tool can help companies make smarter AI investments.

To choose between deploying an AI chatbot or AI agent, consider the following:

  • Budgets: AI chatbots are cheaper to run since they use less inference.
  • Complexity of use case: For straightforward tasks, use a chatbot. For tasks that need multistep coordination, use an AI agent.
  • Skilled talent: Assess the IT team’s ability to handle chatbots versus agents. Chatbots are easier to deploy and update. AI agents require more advanced machine learning, natural language processing and other skills.
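The three criteria above can be condensed into a simple decision rule. The function below is a toy illustration of that rule; the thresholds, argument names, and recommendations are assumptions for the sketch, not part of any vendor framework.

```python
def recommend_deployment(multistep: bool,
                         budget_constrained: bool,
                         has_ml_expertise: bool) -> str:
    """Toy rule of thumb: chatbot vs. agent, per the three criteria above."""
    if not multistep:
        return "chatbot"  # simple, single-step tasks don't need agent autonomy
    if budget_constrained or not has_ml_expertise:
        return "chatbot"  # agents cost more to run and require deeper skills
    return "agent"        # multistep work plus budget and talent to support it

print(recommend_deployment(multistep=True,
                           budget_constrained=False,
                           has_ml_expertise=True))
# prints: agent
```

In practice the inputs would come from a real assessment of the use case, budget, and team, but the ordering of the checks mirrors the guidance: rule out the agent option first on complexity, then on cost and staffing.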


