US Senator Ted Cruz Proposes SANDBOX Act to Waive Federal Regulations for AI Developers

US Senator Ted Cruz (R-TX), chairman of the Senate Commerce Committee, at a hearing titled “AI’ve Got a Plan: America’s AI Action Plan” on Wednesday, September 10, 2025.

On Wednesday, United States Senator Ted Cruz (R-TX) unveiled the “Strengthening Artificial Intelligence Normalization and Diffusion By Oversight and eXperimentation Act,” or the SANDBOX Act. The 41-page bill would direct the director of the White House Office of Science and Technology Policy (OSTP) to establish a federal “regulatory sandbox” in which AI developers could apply for waivers of, or modifications to, federal regulatory requirements in order to test, experiment with, or temporarily offer AI products and services.

In a statement, Cruz said the legislation is consistent with the goals of the Trump administration’s AI Action Plan, which was released in July, and is the first step toward a “new AI framework” that can “turbocharge economic activity, cut through bureaucratic red tape, and empower American AI developers while protecting human flourishing.”

The bill would create a mechanism for companies to apply to the OSTP director for a waiver or modification to rules or regulations under any federal agency “that has jurisdiction over the enforcement or implementation of a covered provision for which an applicant is seeking a waiver or modification” under the sandbox program. Waivers or modifications would be granted for a two-year period, with up to four renewals, for a total of up to a decade.

Applicants under the program must demonstrate “how potential benefits of the product or service or development method outweigh the risks, taking into account any mitigation measures,” including descriptions of “foreseeable risks” such as “health and safety,” “economic damage,” and “unfair or deceptive trade practices.” Applicants that receive a waiver are not immune to civil or criminal liability arising from the deployment of their AI product or service. The bill also requires incident reporting under a public disclosure mechanism.

Federal agencies are given 90 days to review applications. If an agency does not submit a decision or seek an extension by the deadline, the OSTP director is permitted to presume that the agency does not object. If an application is denied, it can be appealed.

The bill also includes a provision for Congressional review of rules and regulations that “should be amended or repealed as a result of persons being able to operate safely without those covered provisions” under the sandbox program. The OSTP director is tapped to identify any such provisions in a “special message” to Congress submitted each year.

The bill also contemplates coordination with “State programs that are similar or comparable to the Program,” including to “accept joint applications for projects benefitting from both Federal and State regulatory relief” and to harmonize other aspects of the program.

The Senate Commerce Committee’s announcement said the bill is backed by groups including the Abundance Institute, the US Chamber of Commerce, and the Information Technology Industry Council (ITI). Public Citizen, a watchdog group, said in a statement that the bill puts public safety on the “chopping block” in favor of “corporate immunity.”

The announcement of the bill was timed to coincide with a Senate Commerce hearing titled “AI’ve Got a Plan: America’s AI Action Plan,” which featured testimony from OSTP director Michael Kratsios. During the hearing, Cruz laid out a legislative agenda on AI, including reducing the regulatory burden on AI developers. But, he said, AI developers should still face consequences if they cause harm.

“A regulatory sandbox is not a free pass,” said Cruz. “People creating or using AI still have to follow the same laws as everyone else. Our laws are adapting to this new technology.”

In response to a question from Cruz, Kratsios said he would support the approach described by the SANDBOX Act.

The new legislation follows a failed effort by Cruz and other Republicans to impose a sweeping moratorium on the enforcement of state laws regulating artificial intelligence. Earlier this year, the House passed the moratorium as part of the so-called “One Big Beautiful Bill,” or HR 1. After efforts by Cruz to move the measure through the Senate by tying it to the allocation of $42 billion in funding for the Broadband Equity, Access, and Deployment (BEAD) program, the chamber voted 99-1 to strip it out of the budget bill prior to passage. Still, some experts remain concerned that the administration may try to use other federal levers to restrict state AI laws.

Westwood joins 40 other municipalities using artificial intelligence to examine roads

The borough of Westwood has started using artificial intelligence to determine whether its roads need to be repaired or repaved.

Elected officials see the effort as a way to save money on manpower and to ensure that paving decisions are objective.

Instead of relying on his own two eyes, the superintendent of Public Works now lets an app on his phone record images of Westwood’s roads as he drives them.

The app collects data on every pothole, faded stripe, and 13 other types of road defects.

The road management app is from a New Jersey company called Vialytics.

Westwood is one of 40 municipalities in the state using the software, which also rates road quality and provides easy-to-use data.

“Now you’re relying on the facts here, not just my opinion of the street. It’s helped me a lot already. A lot of times you’ll have residents who just want their street paved. Now I can go back to people and say there’s nothing wrong with your street that it needs to be repaved,” said Rick Woods, superintendent of Public Works.

Superintendent Woods says he can even create work orders from the road as soon as a defect is detected.

Borough officials believe the Vialytics app will pay for itself in manpower savings and give elected officials objective data when deciding how to spend taxpayer dollars on roads.

How AI Simulations Match Up to Real Students—and Why It Matters

AI-simulated students consistently outperform real students—and make different kinds of mistakes—in math and reading comprehension, according to a new study.

That could cause problems for teachers, who increasingly use general prompt-based artificial intelligence platforms to save time on daily instructional tasks. Sixty percent of K-12 teachers report using AI in the classroom, according to a June Gallup study, with more than 1 in 4 regularly using the tools to generate quizzes and more than 1 in 5 using AI for tutoring programs. The findings suggest that even when prompted to cater to students of a particular grade or ability level, the underlying large language models may create inaccurate portrayals of how real students think and learn.

“We were interested in finding out whether we can actually trust the models when we try to simulate any specific types of students. What we are showing is that the answer is, in many cases, no,” said Ekaterina Kochmar, co-author of the study and an assistant professor of natural-language processing at the Mohamed bin Zayed University of Artificial Intelligence in the United Arab Emirates, the first university dedicated entirely to AI research.

How the study tested AI “students”

Kochmar and her colleagues prompted 11 large language models (LLMs), including those underlying generative AI platforms like ChatGPT, Qwen, and SocraticLM, to answer 249 mathematics questions and 240 reading questions from the National Assessment of Educational Progress (NAEP) while adopting the persona of typical students in grades 4, 8, and 12. The researchers then compared the models’ answers to NAEP’s database of real student answers to the same questions to measure how closely the AI-simulated students mirrored actual student performance.
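
As a rough illustration of what persona prompting of this kind looks like in practice (this is not the study’s code; the client library, model name, prompt wording, and test item below are all assumptions for the sketch):

```python
# Minimal sketch of persona-based prompting, assuming the OpenAI Python client
# (openai>=1.0). The model name, persona wording, and test item are illustrative;
# the study itself tested 11 different LLMs against NAEP items.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask_as_student(question: str, grade: int) -> str:
    """Ask the model to answer a test item in the persona of a typical student."""
    persona = (
        f"You are a typical grade-{grade} student in the United States. "
        "Answer the question the way such a student actually would, "
        "including any mistakes a real student at this level might make."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[
            {"role": "system", "content": persona},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content


# A study of this kind would then score many such answers and compare the
# simulated accuracy and error patterns against real students' NAEP results.
print(ask_as_student("What is 3 + 4 x 2?", grade=4))
```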

The LLMs that underlie AI tools do not think but generate the most likely next word in a given context based on massive pools of training data, which might include real test items, state standards, and transcripts of lessons. By and large, Kochmar said, the models are trained to favor correct answers.

“In any context, for any task, [LLMs] are actually much more strongly primed to answer it correctly,” Kochmar said. “That’s why it’s very difficult to force them to answer anything incorrectly. And we’re asking them to not only answer incorrectly but fall in a particular pattern—and then it becomes even harder.”

For example, while a student might miss a math problem because he misunderstood the order of operations, an LLM would have to be specifically prompted to misuse the order of operations.

None of the tested LLMs created simulated students that aligned with real students’ math and reading performance in 4th, 8th, or 12th grade. Without specific grade-level prompts, the simulated students performed significantly better than real students in both math and reading; in reading, for example, they scored 33 to 40 percentile points higher than the average real student.

Kochmar also found that simulated students “fail in different ways than humans.” While specifying grade levels in prompts did bring the simulated students’ overall accuracy closer to real students’, their wrong answers did not necessarily follow patterns tied to particular human misconceptions, such as misapplying the order of operations in math.

The researchers found no prompt that fully aligned simulated and real student answers across different grades and models.

What this means for teachers

For educators, the findings highlight both the potential and the pitfalls of relying on AI-simulated students, underscoring the need for careful use and professional judgment.

“When you think about what a model knows, these models have probably read every book about pedagogy, but that doesn’t mean that they know how to make choices about how to teach,” said Robbie Torney, the senior director of AI programs at Common Sense Media, which studies children and technology.

Torney was not connected to the current study, but last month released a study of AI-based teaching assistants that similarly found alignment problems. AI models produce answers based on their training data, not professional expertise, he said. “That might not be bad per se, but it might also not be a good fit for your learners, for your curriculum, and it might not be a good fit for the type of conceptual knowledge that you’re trying to develop.”

This doesn’t mean teachers shouldn’t use general prompt-based AI to develop tools or tests for their classes, the researchers said, but educators need to prompt AI carefully and use their own professional judgment when deciding whether AI outputs match their students’ needs.

“The great advantage of the current technologies is that [they are] relatively easy to use, so anyone can access [them],” Kochmar said. “It’s just at this point, I would not trust the models out of the box to mimic students’ actual ability to solve tasks at a specific level.”

Torney said educators need more training to understand not just the basics of how to use AI tools but their underlying infrastructure. “To be able to optimize use of these tools, it’s really important for educators to recognize what they don’t have, so that they can provide some of those things to the models and use their professional judgment.”

We’re Entering a New Phase of AI in Schools. How Are States Responding?

Artificial intelligence topped the list of state technology officials’ priorities for the first time, according to an annual survey released Wednesday by the State Educational Technology Directors Association (SETDA).

More than a quarter of respondents—26%—listed AI as their most pressing issue, compared to 18% in a similar survey conducted by SETDA last year. AI supplanted cybersecurity, which state leaders previously identified as their No. 1 concern.

About 1 in 5 state technology officials—21%—named cybersecurity as their highest priority, and 18% identified professional development and technology support for instruction as their top issues.

Forty percent of respondents reported that their state had issued guidance on AI. That’s a considerable increase from just two years ago, when only 2% of respondents to the same survey reported their state had released AI guidance.

State officials’ heightened attention to AI suggests that even though many more states have released some sort of AI guidance in the past year or two, officials still see plenty left on their to-do lists: supporting districts in improving students’ AI literacy, offering professional development about AI for educators, and crafting policies around cheating and proper AI use.

“A lot of guidance has come out, but now the rubber’s hitting the road in terms of implementation and integration,” said Julia Fallon, SETDA’s executive director, in an interview.

SETDA, along with Whiteboard Advisors, surveyed state education leaders—including ed-tech directors, chief information officers, and state chiefs—receiving more than 75 responses across 47 states. It conducted interviews with state ed-tech teams in Alabama, Delaware, Nebraska, and Utah, and held group interviews with ed-tech leaders from 14 states.

AI professional development is a rising priority

States are taking a variety of approaches to the AI challenge, the report noted.

Some states—such as North Carolina and Utah—designated an AI point person to help support districts in puzzling through the technology. For instance, Matt Winters, who leads Utah’s work, has helped negotiate statewide pricing for AI-powered ed-tech tools and worked with an outside organization to train 4,500 teachers on AI, according to the report.

Wyoming, meanwhile, has developed an “innovator” network that pays teachers to offer AI professional development to colleagues across the state. Washington hosted two statewide AI summits to help district and school leaders explore the technology.

And North Carolina and Virginia have used state-level competitive grant programs to support activities such as AI-specific professional development or AI-infused teaching and learning initiatives.

“As AI continues to evolve, developing connections with those in tech, in industry, and in commerce, as well as with other educators, will become more important than ever,” wrote Sydnee Dickson, formerly Utah’s state superintendent of public instruction, in an introduction to the report. “The technology is advancing too quickly for any one person or state to have all the answers.”
