
AI Research

Draft bill from Ted Cruz would establish a federal AI sandbox in OSTP


Upcoming legislation from Sen. Ted Cruz, R-Texas, would establish a federal artificial intelligence sandbox program, overseen by the White House Office of Science and Technology Policy, to evaluate the safety and efficacy of AI technologies.

In documents reviewed by Nextgov/FCW, the proposed regulatory sandbox program created within OSTP would allow participating AI companies to receive temporary exemption from external AI regulations.

These temporary waivers would be available for “one or more covered provisions of an applicable agency in order to test, experiment, or temporarily provide to consumers artificial intelligence products or services or artificial intelligence development methods on a limited basis without being subject to the enforcement, licensing, or authorization requirements of such covered provisions.”

Part of Cruz’s proposed program, which was first reported by Bloomberg, would create a review process to determine whether AI software applying for an exemption waiver presents “a health and safety risk, a risk of economic damage, or a risk of unfair and deceptive trade practices.”

The bill further calls for the specific assessment and evaluation criteria to be published in the Federal Register with an open comment period. 

Cruz first announced his intent to propose an AI sandbox bill in May, with the goal of removing barriers to AI adoption and preventing overregulation at the state level.

The first draft of the sandbox bill addresses the failed 10-year moratorium, which was originally part of the recent budget reconciliation package and would have prohibited new state-level AI regulation for a decade. In its place, the bill offers waivers on existing regulations to companies testing their AI in the sandbox program.

Federal agencies and private-sector companies have participated in sandbox efforts before, such as a collaboration between NVIDIA and the nonprofit MITRE to improve and implement AI tools tailored for government workloads.






AI Research

Congress ramps up push to arm consumer product regulators with AI tools


A move to empower federal consumer product regulators with artificial intelligence tools picked up steam this week with the introduction of a bipartisan Senate bill whose companion has already passed the House.

The Consumer Safety Technology Act from Sens. John Curtis, R-Utah, and Lisa Blunt Rochester, D-Del., calls on the Consumer Product Safety Commission to create a pilot program that uses AI to track product injury trends, identify hazards, monitor recalls and pinpoint which products fall short of critical standards.

The legislation also directs the Federal Trade Commission and the Commerce secretary to deliver a report on blockchain technology and tokens. 

“The world is changing fast, and consumer protection must keep pace,” Curtis said in a press release Thursday. “This bill puts the right tools in the hands of experts — employing AI to catch dangerous products before they hurt families, exploring blockchain to strengthen supply chains, and making sure digital tokens don’t become a new avenue for fraud. This is about keeping people safe while helping American innovation thrive.”

The House version of the bill, introduced in March by Rep. Darren Soto, cleared the lower chamber in July. The Florida Democrat said at the time that the legislation would “help make the CPSC more efficient.”

“The reality is, the crooks are already using AI,” Soto said. “The cops on the beat need to be able to use this, too.”

The Senate bill directs the CPSC to seek out a variety of stakeholders to consult on the agency’s AI pilot, including cybersecurity experts, technologists, data scientists, machine-learning specialists, retailers, consumer product safety groups and manufacturers.

Within a year of the pilot’s conclusion, the CPSC would be charged with submitting a report to Congress detailing its findings and data, “including the extent to which the use of artificial intelligence improved the ability of the Commission to advance the consumer product safety mission,” the bill states.

The blockchain section of the bill orders the FTC and Commerce Department to study how the technology can be leveraged to protect consumers by guarding against fraud attempts and other unfair and deceptive practices. There would also be an examination of what federal regulations could be modified to spur blockchain adoption.

A separate report would look into unfair or deceptive acts and practices tied to transactions via digital tokens. A fact sheet from Curtis said that provision is aimed at “ensuring consumers are protected without stifling responsible innovation.”

Blunt Rochester said in a statement that the government “must be able to keep up with new and emerging technologies, especially when it comes to consumer safety.”

“The Consumer Safety Technology Act would allow the Consumer Product Safety Commission to explore using artificial intelligence to further its critical goals,” she continued. “I am grateful to work alongside Senator Curtis on this legislation and look forward to getting it over the finish line.”


Written by Matt Bracken

Matt Bracken is the managing editor of FedScoop and CyberScoop, overseeing coverage of federal government technology policy and cybersecurity.

Before joining Scoop News Group in 2023, Matt was a senior editor at Morning Consult, leading data-driven coverage of tech, finance, health and energy. He previously worked in various editorial roles at The Baltimore Sun and the Arizona Daily Star.

You can reach him at matt.bracken@scoopnewsgroup.com.





AI Research

3 Arguments Against AI in the Classroom


Generative artificial intelligence is here to stay, and K-12 schools need to find ways to use the technology for the benefit of teaching and learning. That’s what many educators, technology companies, and AI advocates say.

In response, more states and districts are releasing guidance and policies around AI use in the classroom. Educators are increasingly experimenting with the technology, with some saying that it has been a big time saver and has made the job more manageable.

But not everyone agrees. There are educators who are concerned that districts are buying into the AI hype too quickly and without enough skepticism.

A nationally representative EdWeek Research Center survey of 559 K-12 educators conducted during the summer found that they are split on whether AI platforms will have a negative or positive impact on teaching and learning in the next five years: 47% say AI’s impact will be negative, while 43% say it will be positive.

Education Week talked to three veteran teachers who are not using generative AI regularly in their work and are concerned about the potential negative effects the technology will have on teaching and learning.

Here’s what they think about using generative AI in K-12.

AI provides ‘shortcuts’ that are not conducive to learning

Dylan Kane, a middle school math teacher at Lake County High School in Leadville, Colo., isn’t “categorically against AI,” he said.

He has experimented with the technology personally, using it to help him improve his Spanish-language skills. AI is a “half decent” Spanish tutor if you understand its limitations, he said. For his teaching job, Kane, like many other teachers, has tried AI tools to generate student materials, but he said it takes too many iterations of prompting to produce something he would actually put in front of his classes.

“I will do a better job just doing it myself and probably take less time to do so,” said Kane, who is in his 14th year of teaching. Creating student materials himself means he can be “more intentional” about the questions he asks, how they’re sequenced, how they fit together, how they build on each other, and what students already know.

His biggest concern is how generative AI will affect educators’ and students’ critical-thinking skills. Too often, people use these tools to take “shortcuts,” he said.

“If I want students to learn something, I need them to be thinking about it and not finding shortcuts to avoid thinking,” Kane said.

The best way to prepare students for an AI-powered future is to “give them a broad and deep collection of knowledge about the world and skills in literacy, math, history and civics, and science,” so they’ll have the knowledge they need to understand if an AI tool is providing them with a helpful answer, he said.

That’s true for teachers, too, Kane said. The reason he can evaluate whether AI-generated material is accurate and helpful is because of his years of experience in education.

“One of my hesitations about using large language models is that I won’t be developing skills as a teacher and thinking really hard about what things I put in front of students and what I want them to be learning,” Kane said. “I worry that if I start leaning heavily on large language models, that it will stunt my growth as a teacher.”

And the fact that teachers have to use generative AI tools to create student materials “points to larger issues in the teaching profession” around the curricula and classroom resources teachers are given, Kane said. AI is not “an ideal solution. That’s a Band-Aid for a larger problem.”

Kane’s open to using AI tools. For instance, he said he finds generative AI technology helpful for writing word problems. But educators should “approach these things with a ton of skepticism and really ask ourselves: ‘Is this better than what we should be doing?’”

Experts and leaders haven’t provided good justifications for AI use in K-12

Jed Williams, a high school math and science teacher in Belmont, Mass., said he hasn’t heard any good justifications for why generative AI should be implemented in schools.

The way AI is being presented to teachers tends to be “largely uncritical,” said Williams, who teaches computer science, physics, and robotics at Belmont High School. Often, professional development opportunities about AI don’t provide a “critical analysis” of the technology and just “check the box” by mentioning that AI tools have downsides, he said.

For instance, one professional development session he attended spent only “a few seconds” on the downsides of AI tools, Williams said. The session covered the issue of overreliance on AI, but he criticized it for not addressing “labor exploitation, overuse of resources, sacrificing the privacy of students and faculty.”

“We have a responsibility to be skeptical about technologies that we bring into the classroom,” Williams said, especially because there’s a long history of ed-tech adoption failures.

Williams, who has been teaching since 2006, is also concerned that AI tools could decrease students’ cognitive abilities.

“So much of learning is being put into a situation that is cognitively challenging,” he said. “These tools, fundamentally, are built on relieving the burden of cognitive challenge.

“Especially in introductory courses, where students aren’t familiar with programming and you want them to try new things and experiment and explore, why would you give them this tool that completely removes those aspects that are fundamental to learning?” Williams said.

Williams is also worried that a rushed implementation of AI tools would sacrifice students’ and teachers’ privacy and use them as “experimental subjects in developing technologies for tech companies.”

Education leaders “have a tough job,” Williams said. He understands the pressure they feel around implementing AI, but he hopes they give it “critical thought.”

Decisionmakers need to be clear about what technology is being proposed, how they anticipate teachers and students using it, what the goal of its use is, and why they think it’s a good technology to teach students how to use, Williams said.

“If somebody has a good answer for that, I’m very happy to hear proposals on how to incorporate these things in a healthy, safe way,” he said.

Educators shouldn’t fall for the ‘fallacy’ that AI is inevitable

Elizabeth Bacon, a middle school computer science teacher in California, hasn’t found any use cases for generative AI tools that she feels would be beneficial to her work.

“I would rather do my own lesson plan,” said Bacon, who has been teaching for more than 20 years. “I have an idea of what I want the students to learn, of what’s interesting to them, and where they are and the entry points for them to engage in it.”

Teachers are under a lot of pressure to do more with less. That’s why Bacon said she doesn’t judge other teachers who want to use AI to get the job done. It’s “a systemic problem,” she said, but teaching and learning shouldn’t be replaced by machines.

Bacon believes it’s “particularly dangerous” for middle school students to be using “a machine emulating a person.” Students are still developing their character, their empathy, their ability to socialize with peers and work collectively toward a goal, she said, and a chatbot would undermine that.

She can foresee using generative AI tools to explain to her students what large language models are. It’s important for them to learn that generative AI is a statistical model predicting the next likely word based on the data it’s been trained on, and that there’s no meaning [or feelings] behind it, Bacon said.

Last school year, she asked her high school students what they wanted to know about AI. Their answers: the technology’s social and environmental impacts.

Bacon doesn’t think educators should fall for the “fallacy” that AI is the inevitable future; the technology companies promoting that idea have an incentive to say it, she said.

“Educators have basically been told, in a lot of ways, ‘don’t trust your own instincts about what’s right for your students, because [technology companies are] going to come in and tell you what’s going to be good for your students,’” she said.

It’s discouraging to see that a lot of the AI-related professional development events she’s attended have “essentially been AI evangelism” and “product marketing,” she said. There should be more thought about why this technology is necessary in K-12, she said.

Technology experts have talked up AI’s potential to increase productivity and efficiency. But as an educator, “efficiency is not one of my values,” Bacon said.

“My value is supporting students, meeting them where they are, taking the time it takes to connect with these students, taking the time that it takes to understand their needs,” she said. “As a society, we have to take a hard look: Do we value education? Do we value doing our own thinking?”







AI Research

University Of Utah Teams With HPE, NVIDIA To Boost AI Research


The University of Utah (the U) is planning to join forces with two powerhouse tech firms to accelerate research and discovery using artificial intelligence (AI). The agreement with Hewlett Packard Enterprise (HPE) and AI chipmaker NVIDIA will amplify the U’s capacity for understanding cancer, Alzheimer’s disease, mental health, and genetics. The initiative is projected to enable medical breakthroughs and drive innovation and scientific discovery across disciplines.

“The U has a proud legacy of pioneering technological breakthroughs,” said Taylor Randall, president of the University of Utah. “Our goal is to make the state awash in computing power by building a robust AI ecosystem benefiting our entire system of higher education, driving research to find new cures, and igniting Utah’s entrepreneurial spirit.”


The partnership, which includes a $50 million investment of funds from both public and philanthropic sources, is projected to increase the U’s computing capacity 3.5-fold. The flagship school’s Board of Trustees gave preliminary approval to the proposed arrangement on September 9.

The structure paves a path for substantial advances in the computing, storage, and infrastructure required for Utah-based projects in AI and innovation. The goal is to lay the foundation for a scalable AI ecosystem available to researchers, learners, and entrepreneurs across Utah. The multi-year initiative would build upon existing AI capabilities, giving the U access to substantially more computing power.

Brynn and Peter Huntsman, along with the Huntsman Family Foundation, will provide a lead philanthropic gift to the U intended to launch the project and encourage other supporters to make the investments required to move the work forward with AI “supercomputer” systems designed to handle enormous processing and storage needs. The university will seek the remaining funds from the state of Utah and other sources.

“This AI initiative will accelerate world class cancer research that enhances capabilities in ways we hardly imagined just a few years ago,” said Peter Huntsman, CEO and chairman, Huntsman Cancer Foundation. “Huntsman Cancer Foundation recently announced our commitment to support the expansion of the educational, research, and clinical care capacity of the world renown Huntsman Cancer Institute in Vineyard, Utah, which will serve as a hub for cancer AI research. These investments will speed discoveries and enhance the state of Utah’s leadership in AI education and economic opportunity.”

Mental health will be a major focus of the AI research endeavor. 

“As the Huntsman Mental Health Institute opens its new 185,000-square-foot Translational Research Building this coming year, we’re looking forward to increasing momentum around mental health research, including the impact of this technology,” said Christena Huntsman Durham, Huntsman Mental Health Foundation CEO and co-chair. “We know so many people are struggling with mental health challenges; we’re thrilled we will be able to move even faster to get help to those who need it most.”





