Senate pulls AI regulatory ban from GOP bill after complaints from states
WASHINGTON (AP) — A proposal to deter states from regulating artificial intelligence for a decade was soundly defeated in the U.S. Senate on Tuesday, thwarting attempts to insert the measure into President Donald Trump’s big bill of tax breaks and spending cuts.
The Senate voted 99-1 to strike the AI provision from the legislation after weeks of criticism from both Republican and Democratic governors and state officials.
The provision was originally proposed as a 10-year ban on states doing anything to regulate AI. Lawmakers later tied it to federal funding, so that only states that backed off on AI regulation could receive subsidies for broadband internet or AI infrastructure.
A last-ditch Republican effort to save the provision would have reduced the time frame to five years and sought to exempt some favored AI laws, such as those protecting children or country music performers from harmful AI tools.
But that effort was abandoned when Sen. Marsha Blackburn, a Tennessee Republican, teamed up with Democratic Sen. Maria Cantwell of Washington on Monday night to introduce an amendment to strike the entire proposal.
Blackburn said on the floor that “it is frustrating” that Congress has been unable to legislate on emerging technology, including online privacy and AI-generated “deepfakes” that impersonate an artist’s voice or visual likeness. “But you know who has passed it? It is our states,” Blackburn said. “They’re the ones that are protecting children in the virtual space. They’re the ones that are out there protecting our entertainers — name, image, likeness — broadcasters, podcasters, authors.”
Voting on the amendment happened after 4 a.m. Tuesday as part of an overnight session as Republican leaders sought to secure support for the tax cut bill while fending off other proposed amendments, mostly from Democrats trying to defeat the package.
Proponents of an AI moratorium had argued that a patchwork of state and local AI laws is hindering progress in the AI industry and the ability of U.S. firms to compete with China.
Some prominent tech leaders welcomed the idea after Republican Sen. Ted Cruz of Texas, who leads the Senate Commerce Committee, floated it at a hearing in May. OpenAI CEO Sam Altman told Cruz that “it is very difficult to imagine us figuring out how to comply with 50 different sets of regulation.”
But state and local lawmakers and AI safety advocates argued that the rule is a gift to an industry that wants to avoid accountability for its products. Led by Arkansas Gov. Sarah Huckabee Sanders, a majority of GOP governors sent a letter to Congress opposing it.
Sanders, who was White House press secretary in Trump’s first term, credited Blackburn for “leading the charge” to defend states’ rights to regulate AI.
“This is a monumental win for Republican Governors, President Trump’s one, big beautiful bill, and the American people,” Sanders wrote on X on Tuesday.
Also appealing to lawmakers to strike the provision was a group of parents of children who have died as a result of online harms.
“In the absence of federal action, the moratorium gives AI companies exactly what they want: a license to develop and market dangerous products with impunity — with no rules and no accountability,” Florida mother Megan Garcia wrote in a letter last week. Garcia has sued the maker of an AI chatbot she says pushed her 14-year-old son to kill himself. “A moratorium gives companies free rein to create and launch products that sexually groom children and encourage suicide, as in the case of my dear boy.”
Cruz over the weekend tried to broker a last-ditch compromise with Blackburn to save the provision. Changes included language designed to protect child safety as well as Tennessee’s so-called ELVIS Act, championed by Nashville’s country music industry to restrict AI tools from replicating an artist’s voice without their consent. Cruz said it could have “passed easily” had Blackburn not backed out.
“When I spoke to President Trump last night, he said it was a terrific agreement,” Cruz said. “The agreement protected kids and protected the rights of creative artists. But outside interests opposed that deal.”
Blackburn said Tuesday there were “problems with the language” of the amendment.
Cruz withdrew the compromise amendment and blamed a number of people and entities he said “hated the moratorium,” including China, Democratic California Gov. Gavin Newsom, a teachers union leader and “transgender groups and radical left-wing groups who want to use blue state regulations to mandate woke AI.”
He didn’t mention the broad group of Republican state legislators, attorneys general and governors who also opposed it. Critics say Cruz’s proposal, while carving out some exemptions, would have affected states’ enforcement of any AI rules if they were found to create an “undue or disproportionate burden” on AI systems.
Even Cruz ultimately joined the early Tuesday vote to strip the proposal. Only Sen. Thom Tillis, a North Carolina Republican who opposed Trump’s broader budget bill, voted against eliminating the AI provision.
“The proposed ban that has now been removed would have stopped states from protecting their residents while offering nothing in return at the federal level,” Jim Steyer, founder and CEO of children’s advocacy group Common Sense Media, wrote in a statement. “In the end, 99 senators voted to strip the language out when just hours earlier it looked like the moratorium might have survived.”
O’Brien reported from Providence, Rhode Island.
Tools & Platforms
This is what happened when I asked journalism students to keep an ‘AI diary’
Last month I wrote about my decision to use an AI diary as part of assessment for a module I teach on the journalism degrees at Birmingham City University. The results are in — and they are revealing.
What if we just asked students to keep a record of all their interactions with AI? That was the thinking behind the AI diary, a form of assessment that I introduced this year for two key reasons: to increase transparency about the use of AI, and to increase critical thinking.
The diary was a replacement for the more formal ‘critical evaluation’ that students typically completed alongside their journalism and, in a nutshell, it worked. Students were more transparent about the use of AI, and showed more critical thinking in their submissions.
But there was more:
- Performance was noticeably higher, not only in terms of engagement with wider reading, but also in terms of better journalism.
- There was a much wider variety of applications of generative AI.
- Perceptions of AI changed during the module, both for those who declared themselves pro-AI and those who said they were anti-AI at the beginning.
- And students developed new cross-industry skills in prompt design.
It’s not just that marks were higher — but why
The AI diary itself contributed most to the higher marks, but the journalism also improved. Why?
Part of the reason was that inserting AI into the production process, and having to record and annotate that in a diary, provided a space for students to reflect on that process.
This was most visible in pre-production stages such as idea generation and development, sourcing and planning. What might otherwise take place entirely internally or informally was externalised and formalised in the form of genAI prompts.
This was a revelation: the very act of prompting — regardless of the response — encouraged reflection.
In the terms of Nobel prize-winning psychologist Daniel Kahneman, what appeared to be happening was a switch from System 1 thinking (“fast, automatic, and intuitive”) to System 2 thinking (“slow, deliberate, and conscious, requiring intentional effort”).
For example, instead of pursuing their first idea for a story, students devoted more thought to the idea development process. The result was the development of (and the opportunity to choose from) much stronger story ideas.
Similarly, more and better sources were identified for interview, and the planning of interview approaches and questions became more strategic and professional.
These were all principles that had been taught multiple times across the course as a whole — but the discipline to stop and think, reflect and plan, outside of workshop activities was enforced by the systematic use of AI.
Applying the literature, not just quoting it
When it came to the AI diaries themselves, students referenced more literature than they had in previous years’ traditional critical evaluations. The diaries made more connections to that literature, and showed a deeper understanding of and engagement with it.
In other words, students put their reading into practice more often throughout the process, instead of merely quoting it at the end.
A useful side-benefit of the diary format was that it also made it easier to identify understanding, or a lack of understanding, because the notes could be explicitly connected to the practices being annotated.
It is possible that the AI diary format made it clearer what the purpose of reading is on a journalism degree — not to merely pass an assignment, but to be a better journalist.
The obvious employability benefits of developing prompt design skills may have also motivated more independent reading — there was certainly more focus on this area than any other aspect of journalism practice, while the least-explored areas of literature tended to be less practical considerations such as ethics.
Students’ opinions on AI were very mixed — and converged
This critical thinking also showed itself in how opinions on generative AI technology developed in the group.
Surveys taken at the start and end of the module found that students’ feelings about AI became more sophisticated: those with anti- or pro-genAI positions at the start expressed a more nuanced understanding at the end. Crucially, there was a reduction in trust in AI, which has been found to be important for critical thinking.
An AI diary allows you to see how people really use technology
One of the unexpected benefits of the AI diary format was providing a window into how people actually used generative AI tools. By getting students to complete diary-based activities in classes, and reviewing the diaries throughout the module (both inside and outside class), it was possible to identify and address themes early on, both individually and as a group. These included:
- Trusting technology too much, especially in areas of low confidence such as data analysis
- Assuming that ChatGPT etc. understood a concept or framework without it being explained
- Assuming that ChatGPT etc. was able to understand by providing a link instead of a summary
- A need to make the implicit (e.g. genre, audience) explicit
- Trying to instruct AI in a concept or framework before they had fully understood it themselves
These themes suggest potential areas for future teaching such as identifying areas of low confidence, or less-documented concepts, as ‘high risk’ for the use of AI, and the need for checklists to ensure contexts such as genre, audience, etc. are embedded into prompt design.
There were also some novel experiments which suggested new ways to test generative AI, such as the student who invented a footballer to check ChatGPT’s lack of criticality (it failed to challenge the misinformation).
Barriers to transparency still remain
Although the AI diary did succeed in getting students to identify where they had used tools to generate content or improve their own writing, it was clear that barriers remained for some students.
I have a feeling that part of the barrier lies in the challenge genAI presents to our sense of creativity. This is an internal barrier as much as an external one: in pedagogical terms, we might be looking at a challenge for transformative learning — specifically a “disorienting dilemma”, where assumptions are questioned and beliefs are changed.
It is not just in the AI sphere where seeking or obtaining help is often accompanied by a sense of shame: we want to be able to say “I made that”, even when we only part-authored something (and there are plenty of examples of journalists wishing to take sole credit for stories that others initiated, researched, or edited).
Giving permission will not be enough on its own in these situations.
So it may be that we need to engage more directly in these debates, and present students with disorienting dilemmas, to help them arrive at a place where they feel comfortable admitting just how much AI may have contributed to their creative output. Part of this lies in acknowledging the creativity involved in effective prompting, ‘stewardship’, and response editing.
Another option would be to require particular activities to be completed: for example, a requirement that work is reviewed by AI and there be some reflection on that (and a decision about which recommendations to follow).
Reducing barriers to declaration could also be achieved by reducing the effort required, by providing an explicit, structured ‘checklist’ of how AI was used in each story, rather than relying solely on the AI diary to do this.
Each story might be accompanied by a table, for example, where the student ticks a series of boxes indicating where AI was used: generating the idea itself, background research, identifying sources, planning, generating content, and editing. Literature on how news organisations approach transparency in the use of AI should be incorporated into teaching.
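As a sketch of the kind of structured declaration described above: the stages mirror those named in the text, while the `AIDeclaration` class and its method names are purely illustrative, not part of the module.

```python
from dataclasses import dataclass, field

# Stages of story production where AI use might be declared,
# as listed in the text above.
STAGES = [
    "generating the idea",
    "background research",
    "identifying sources",
    "planning",
    "generating content",
    "editing",
]


@dataclass
class AIDeclaration:
    """Hypothetical per-story AI-use declaration (a tick-box table)."""

    story_title: str
    used: dict = field(default_factory=lambda: {s: False for s in STAGES})

    def tick(self, stage: str) -> None:
        """Mark a stage as having involved AI."""
        if stage not in self.used:
            raise ValueError(f"Unknown stage: {stage}")
        self.used[stage] = True

    def summary(self) -> list:
        """Return the stages where AI was declared."""
        return [s for s, ticked in self.used.items() if ticked]


decl = AIDeclaration("Local housing data story")
decl.tick("background research")
decl.tick("editing")
```

A structure like this keeps the declaration effort low for the student while giving markers an at-a-glance view of where AI entered the process.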
AI generation raises new challenges around editing and transparency
I held back from getting students to generate drafts of stories themselves using AI, and this was perhaps a mistake. Those who did experiment with this application of genAI generally did so badly because they were ill-equipped to recognise the flaws in AI-generated material, or to edit effectively. And they failed to engage with debates around transparency.
Those skills are going to be increasingly important in AI-augmented roles, so the next challenge is how (and if) to build those.
The obvious problem? Those skills also make it easier for any AI plagiarism to go undetected.
There are two obvious strategies to adopt here: the first is to require stories to be based on an initial AI-generated draft (so there is no doubt about authorship); the second is to create controlled conditions (i.e. exams) for any writing assessment where you want to assess the person’s own writing skills rather than their editing skills.
Either way, any introduction of these skills needs to be considered beyond the individual module, as students may also apply these skills in other modules.
A module is not enough
In fact, it is clear that one module isn’t enough to address all of the challenges that AI presents.
At the most basic level, a critical understanding of how generative AI works (it’s not a search engine!), where it is most useful (not for text generation!), and what professional use looks like (e.g. risk assessment) should be foundational knowledge on any journalism degree. Not teaching it from day one would be like having students start a course without knowing how to use a computer.
Designing prompts — specifically role prompting — provides a great method for encouraging students to explore and articulate qualities and practices of professionalism. Take this example:
"You are an editor who checks every fact in a story, is sceptical about every claim, corrects spelling and grammar for clarity, and is ruthless in cutting out unnecessary detail. In addition to all the above, you check that the structure of the story follows newswriting conventions, and that the angle of the story is relevant to the target audience of people working in the health sector. Part of your job involves applying guidelines on best practice in reporting particular subjects (such as disability, mental health, ethnicity, etc). Provide feedback on this story draft..."
Here the process of prompt design doubles as a research task, with a practical application, and results that the student can compare and review.
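As a minimal sketch of how a role prompt like the one above could be packaged for a chat-based model: the system/user message format is the common convention for chat APIs, and the `build_editor_prompt` helper and its abbreviated wording are illustrative, not from the module.

```python
# Abbreviated version of the editor role prompt from the text above;
# the exact wording here is a hypothetical condensation.
EDITOR_ROLE = (
    "You are an editor who checks every fact in a story, is sceptical about "
    "every claim, corrects spelling and grammar for clarity, and is ruthless "
    "in cutting out unnecessary detail. Check that the structure follows "
    "newswriting conventions and that the angle is relevant to readers in "
    "the health sector. Apply best-practice reporting guidelines on subjects "
    "such as disability, mental health, and ethnicity. "
    "Provide feedback on this story draft."
)


def build_editor_prompt(draft: str) -> list:
    """Package the role prompt and a story draft as chat messages."""
    return [
        {"role": "system", "content": EDITOR_ROLE},
        {"role": "user", "content": draft},
    ]


messages = build_editor_prompt("Health trust opens new walk-in clinic...")
```

The messages list can then be sent to whichever chat model the course uses; the pedagogical value is in the students researching and drafting the role description itself.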
Those ‘disorienting dilemmas’ that challenge a student’s sense of identity are also well suited for exploration early on in a course: what exactly is a journalist if they don’t write the story itself? Where do we contribute value? What is creativity? How do we know what to believe? These are fundamental questions that AI forces us to confront.
And the answers can be liberating: we can shift the focus from quantity to quality; from content to original newsgathering; from authority to trust.
Now I’ve just got to decide which bits I can fit into the module next year.
AI Insights
This Magnificent Artificial Intelligence (AI) Stock Is Down 26%. Buy the Dip, Or Run for the Hills?
Duolingo (DUOL) operates the world’s most popular digital language education platform, and the company continues to deliver stellar financial results. Duolingo is elevating the learning experience with artificial intelligence (AI), which is also unlocking new revenue streams that could fuel its next phase of growth.
Duolingo stock set a new record high in May, but it has since declined by 26%. It’s trading at a sky-high valuation, so investors might be wondering whether the company’s rapid growth warrants paying a premium. With that in mind, is the dip a buying opportunity, or should investors completely avoid the stock?
AI is creating new opportunities for Duolingo
Duolingo’s mobile-first, gamified approach to language education is attracting hordes of eager learners. During the first quarter of 2025 (ended March 31), the platform had 130.2 million monthly active users, which was a 33% jump from the year-ago period. However, the number of users paying a monthly subscription grew at an even faster pace, thanks partly to AI.
Duolingo makes money in two ways. It sells advertising slots to businesses and then shows those ads to its free users, and it also offers a monthly subscription option for users who want access to additional features to accelerate their learning experience. The number of users paying a subscription soared by 40% to a record 10.3 million during the first quarter.
Duolingo’s Max subscription plan continues to be a big driver of new paying users. It includes three AI-powered features: Roleplay, Explain My Answer, and Videocall. Roleplay uses an AI chatbot interface to help users practice their conversational skills, whereas Explain My Answer offers personalized feedback to users based on their mistakes in each lesson. Videocall, which is the newest addition to the Max plan, features a digital avatar named Lily, which helps users practice their speaking skills.
Duolingo Max was launched just two years ago in 2023, and it’s the company’s most expensive plan, yet it already accounts for 7% of the platform’s total subscriber base. It brings Duolingo a step closer to achieving its long-term goal of delivering a digital learning experience that rivals that of a human tutor.
Duolingo’s revenue and earnings are soaring
Duolingo delivered $230.7 million in revenue during the first quarter of 2025, which represented 38% growth from the year-ago period. It was above the high end of the company’s forecast ($223.5 million), which drove management to increase its full-year guidance for 2025. Duolingo is now expected to deliver as much as $996 million in revenue, compared to $978.5 million as of the last forecast. But there is another positive story unfolding at the bottom line.
Duolingo generated $35.1 million in GAAP (generally accepted accounting principles) net income during the first quarter, which was a 30% increase year over year. However, the company’s adjusted earnings before interest, tax, depreciation, and amortization (EBITDA) soared by 43% to $62.8 million. This is management’s preferred measure of profitability because it excludes one-off and non-cash expenses, so it’s a better indicator of how much actual money the business is generating.
A combination of Duolingo’s rapid revenue growth and prudent expense management is driving the company’s surging profits, and this trend might be key to further upside in its stock from here.
Duolingo stock is trading at a sky-high valuation
Based on Duolingo’s trailing 12-month earnings per share (EPS), its stock is trading at a price-to-earnings (P/E) ratio of 193.1. That is an eye-popping valuation considering the S&P 500 is sitting at a P/E ratio of 24.1 as of this writing. In other words, Duolingo stock is a whopping eight times more expensive than the benchmark index.
The stock looks more attractive if we value it based on the company’s future potential earnings, though. If we look ahead to 2026, the stock is trading at a forward P/E ratio of 48.8 based on Wall Street’s consensus EPS estimate (provided by Yahoo! Finance) for that year. It’s still expensive, but slightly more reasonable.
Even if we set Duolingo’s earnings aside and value its stock based on its revenue, it still looks quite expensive. It’s trading at a price-to-sales (P/S) ratio of 22.9, which is a 40% premium to its average of 16.3 dating back to when it went public in 2021.
With all of that in mind, Duolingo stock probably isn’t a great buy for investors who are looking for positive returns in the next 12 months or so. However, the company will grow into its valuation over time if its revenue and earnings continue to increase at around the current pace, so the stock could be a solid buy for investors who are willing to hold onto it for the long term. A time horizon of five years (or more) will maximize the chances of earning a positive return.
Funding & Business
Taiwan’s Record Exports Fuel US Trade Tensions, Currency Risks
Taiwan’s exports are on a tear, powered by global demand for artificial intelligence — but the boom is becoming a flashpoint in trade relations with Washington and a growing risk for the economy.