AI Research
Senator Cruz Unveils AI Framework and Regulatory Sandbox Bill

On September 10, Senate Commerce, Science, and Transportation Committee Chair Ted Cruz (R-TX) released what he called a “light-touch” regulatory framework for federal AI legislation, outlining five pillars for advancing American AI leadership. In parallel, Senator Cruz introduced the Strengthening AI Normalization and Diffusion by Oversight and eXperimentation (“SANDBOX”) Act (S. 2750), which would establish a federal AI regulatory sandbox program that would waive or modify federal agency regulations and guidance for AI developers and deployers. Collectively, the AI framework and the SANDBOX Act mark the first congressional effort to implement the recommendations of the AI Action Plan the Trump Administration released on July 23.
- Light-Touch AI Regulatory Framework
Senator Cruz’s AI framework, titled “A Legislative Framework for American Leadership in Artificial Intelligence,” calls for the United States to “embrace its history of entrepreneurial freedom and technological innovation” by adopting AI legislation that promotes innovation while preventing “nefarious uses” of AI technology. Echoing President Trump’s January 23 Executive Order on “Removing Barriers to American Leadership in Artificial Intelligence” and recommendations in the AI Action Plan, the AI framework sets out five pillars as a “starting point for discussion”:
- Unleashing American Innovation and Long-Term Growth. The AI framework recommends that Congress establish a federal AI regulatory sandbox program, provide access to federal datasets for AI training, and streamline AI infrastructure permitting. This pillar mirrors the priorities of the AI Action Plan and President Trump’s July 23 Executive Order on “Accelerating Federal Permitting of Data Center Infrastructure.”
- Protecting Free Speech in the Age of AI. Consistent with President Trump’s July 23 Executive Order on “Preventing Woke AI in the Federal Government,” Senator Cruz called on Congress to “stop government censorship” of AI (“jawboning”) and address foreign censorship of Americans on AI platforms. Additionally, while the AI Action Plan recommended revising the National Institute of Standards & Technology (“NIST”)’s AI Risk Management Framework to “eliminate references to misinformation, Diversity, Equity, and Inclusion, and climate change,” this pillar calls for reforming NIST’s “AI priorities and goals.”
- Prevent a Patchwork of Burdensome AI Regulation. Following a failed attempt by Congressional Republicans to enact a moratorium on the enforcement of state and local AI regulations in July, the AI Action Plan called on federal agencies to limit federal AI-related funding to states with burdensome AI regulatory regimes and on the FCC to review state AI laws that may be preempted under the Communications Act. Similarly, the AI framework calls on Congress to enact federal standards to prevent burdensome state AI regulation, while also countering “excessive foreign regulation” of Americans.
- Stop Nefarious Uses of AI Against Americans. In a nod to bipartisan support for state digital replica protections – which ultimately doomed Congress’s state AI moratorium this summer – this pillar calls on Congress to protect Americans against digital impersonation scams and fraud. Additionally, this pillar calls on Congress to expand the principles of the federal TAKE IT DOWN Act, signed into law in May, to safeguard American schoolchildren from nonconsensual intimate visual depictions.
- Defend Human Value and Dignity. This pillar appears to expand on the policy of U.S. “global AI dominance in order to promote human flourishing” established by President Trump’s January 23 Executive Order by calling on Congress to reinvigorate “bioethical considerations” in federal policy and to “oppose AI-driven eugenics and other threats.”
- SANDBOX Act
Consistent with recommendations in the AI Action Plan and AI Framework, the SANDBOX Act would direct the White House Office of Science & Technology Policy (“OSTP”) to establish and operate an “AI regulatory sandbox program” with the purpose of incentivizing AI innovation, the development of AI products and services, and the expansion of AI-related economic opportunities and jobs. According to Senator Cruz’s press release, the SANDBOX Act marks a “first step” in implementing the AI Action Plan, which called for “regulatory sandboxes or AI Centers of Excellence around the country where researchers, startups, and established enterprises can rapidly deploy and test AI tools.”
Program Applications. The AI regulatory sandbox program would allow U.S. companies and individuals, or the OSTP Director, to apply for a “waiver or modification” of one or more federal agency regulations in order to “test, experiment, or temporarily provide” AI products, AI services, or AI development methods. Applications must include various categories of information, including:
- Contact and business information,
- A description of the AI product, service, or development method,
- Specific regulation(s) that the applicant seeks to have waived or modified and why such waiver or modification is needed,
- Consumer benefits, business operational efficiencies, economic opportunities, jobs, and innovation benefits of the AI product, service, or development method,
- Reasonably foreseeable risks to health and safety, the economy, and consumers associated with the waiver or modification, and planned risk mitigations,
- The requested time period for the waiver or modification, and
- Each agency with jurisdiction over the AI product, service, or development method.
Agency Reviews and Approvals. The bill would require OSTP to submit applications to federal agencies with jurisdiction over the AI product, service, or development method within 14 days. In reviewing AI sandbox program applications, federal agencies would be required to solicit input from the private sector and technical experts on whether the applicant’s plan would benefit consumers, businesses, the economy, or AI innovation, and whether potential benefits outweigh health and safety, economic, or consumer risks. Agencies would be required to approve or deny applications within 90 days, with a record documenting reasonably foreseeable risks, the mitigations and consumer protections that justify agency approval, or the reasons for agency denial. Denied applicants would be authorized to appeal to OSTP for reconsideration. Approved waivers or modifications would be granted for a term of two years, with up to four additional two-year terms if requested by the applicant and approved by OSTP.
Participant Terms and Requirements. Participants with approved waivers or modifications would be immune from federal criminal, civil, or agency enforcement of the waived or modified regulations, but would remain subject to private consumer rights of action. Additionally, participants would be required to report incidents of harm to health and safety, economic damage, or unfair or deceptive trade practices to OSTP and federal agencies within 72 hours after the incident occurs, and to make various disclosures to consumers. Participants would also be required to submit recurring reports to OSTP throughout the term of the waiver or modification, which must include the number of consumers affected, likely risks and mitigations, any unanticipated risks that arise during deployment, adverse incidents, and the benefits of the waiver or modification.
Congressional Review. Finally, the SANDBOX Act would require the OSTP Director to submit to Congress any regulations that the Director recommends for amendment or repeal “as a result of persons being able to operate safely” without those regulations under the sandbox program. The bill would establish a fast-track procedure for joint resolutions approving such recommendations, which, if enacted, would immediately repeal the regulations or adopt the amendments recommended by OSTP.
The SANDBOX Act’s regulatory sandbox program would sunset in 12 years unless renewed. The introduction of the SANDBOX Act comes as states have pursued their own AI regulatory sandbox programs – including a sandbox program established under the Texas Responsible AI Governance Act (“TRAIGA”), enacted in June, and an “AI Learning Laboratory Program” established under Utah’s 2024 AI Policy Act. The SANDBOX Act would require OSTP to share information with these state AI sandbox programs if they are “similar or comparable” to the SANDBOX Act, in addition to coordinating reviews and accepting “joint applications” for participants with AI projects that would benefit from “both Federal and State regulatory relief.”
AI Research
Researchers ‘polarised’ over use of AI in peer review

Researchers appear to be becoming more divided over whether generative artificial intelligence should be used in peer review, with a survey showing entrenched views on either side.
A poll by IOP Publishing found that there has been a big increase in the number of scholars who are positive about the potential impact of new technologies on the process, which is often criticised for being slow and overly burdensome for those involved.
A total of 41 per cent of respondents now see the benefits of AI, up from 12 per cent in a similar survey carried out last year. But this is almost equal to the proportion with negative opinions, which stands at 37 per cent after a 2 per cent year-on-year increase.
This leaves only 22 per cent of researchers neutral or unsure about the issue, down from 36 per cent, which IOP said indicates a “growing polarisation in views” as AI use becomes more commonplace.
Women tended to have more negative views about the impact of AI compared with men while junior researchers tended to have a more positive view than their more senior colleagues.
Nearly a third (32 per cent) of those surveyed say they already use AI tools to support them with peer reviews in some form.
Half of these say they apply it in more than one way, with the most common use being to assist with editing grammar and improving the flow of text.
A minority used it in more questionable ways such as the 13 per cent who asked the AI to summarise an article they were reviewing – despite confidentiality and data privacy concerns – and the 2 per cent who admitted to uploading an entire manuscript into a chatbot so it could generate a review on their behalf.
IOP – which currently does not allow AI use in peer reviews – said the survey showed a growing recognition that the technology has the potential to “support, rather than replace, the peer review process”.
But publishers must find ways to “reconcile” the two opposing viewpoints, the publisher added.
A solution could be developing tools that operate within peer review software, it said, which could support reviewers without posing security or integrity risks.
Publishers should also be more explicit and transparent about why chatbots “are not suitable tools for fully authoring peer review reports”, IOP said.
“These findings highlight the need for clearer community standards and transparency around the use of generative AI in scholarly publishing. As the technology continues to evolve, so too must the frameworks that support ethical and trustworthy peer review,” Laura Feetham-Walker, reviewer engagement manager at IOP and lead author of the study, said.
AI Research
Amazon Employing AI to Help Shoppers Comb Reviews

Amazon earlier this year began rolling out artificial intelligence-voiced product descriptions for select customers and products.
AI Research
Nubank To Continue Leveraging AI To Enhance Digital Financial Services In Latin America

Nubank (NYSE: NU) reportedly serves millions of customers across Latin America. Recently, the company’s Chief Technology Officer, Eric Young, shared his vision for leveraging artificial intelligence to fuel Nubank’s global expansion and improve financial services.
During a recent discussion, Young outlined how AI is not just a tool but a cornerstone for operational efficiency, customer-centric growth, and democratizing access to personalized finance.
With a career that includes work at Amazon in the early 2000s, Young brings a philosophy of prioritizing customer experience.
At Amazon, he witnessed firsthand how technology could transform user experiences, a mindset he now applies to Nubank’s mission. “If not us, then who?” Young posed rhetorically during the videocast, underscoring Nubank’s unique position to disrupt traditional banking.
Founded in Brazil in 2013, Nubank has positively impacted the financial sector by prioritizing financial inclusion and superior customer service, challenging legacy banks with its digital-first approach.
Under Young’s leadership, Nubank’s priorities are clear: enhance agility, expand internationally, and harness AI to serve customers better.
He emphasized the need for cross-functional collaboration, particularly with the product and design teams.
This includes partnering with Nubank’s recently appointed Chief Design Officer (CDO), Ethan Eismann, to iterate quickly on new features.
By fostering a culture of testing and learning, Young aims to deliver products that not only meet but exceed user expectations, ultimately capturing a larger market share.
This involves deepening engagement with existing users, attracting new ones, and venturing into underserved markets where financial services remain inaccessible.
Central to Young’s strategy is AI’s transformative potential.
Nubank’s 2024 acquisition of Hyperplane, an AI-focused startup, marks a pivotal step in this direction.
Young highlighted how advanced language models—such as those powering ChatGPT and Google Gemini—can bridge the gap between everyday users and elite financial advisory services.
These models excel at processing vast amounts of data, including transaction histories, to offer hyper-personalized recommendations.
Imagine an AI that automates budgeting, predicts spending patterns, and suggests investment opportunities tailored to an individual’s financial profile, all without the hefty fees of traditional private banking.
Young drew a parallel to the exclusivity of high-end services.
Historically, AI-driven private banking was reserved for the ultra-wealthy, but Nubank’s vision is to make it ubiquitous.
“We’re democratizing access to hyper-personalized financial experiences,” Young said.
By analyzing user data ethically and securely, AI can empower customers from all segments—whether a small business owner in Mexico or a young professional in Colombia—to manage their finances with the precision once afforded only to elites.
This aligns with Nubank’s core ethos of inclusion, ensuring that technology serves as an equalizer rather than a divider.
Looking ahead, Young sees AI as the engine for Nubank’s platformization efforts, enabling scalable solutions that support international growth.
As Nubank eyes further expansion beyond Brazil, Mexico, and Colombia, AI will streamline operations, from fraud detection to customer support chatbots, reducing costs while enhancing reliability.
Yet, Young cautioned that success hinges on responsible implementation—prioritizing privacy, transparency, and human oversight to build trust.
In an era where fintechs aggressively compete for market share, Eric Young’s insights position Nubank not just as a bank, but as a key player in AI-powered financial services.
By blending technological prowess with a focus on the customer, Nubank is set to transform money management, making various services more accessible to consumers.
As Young put it, the question isn’t whether AI will change finance – it’s how Nubank will make a positive impact as it does.