AI Insights
New York Seeks to RAISE the Bar on AI Regulation – Tech & Sourcing @ Morgan Lewis
On June 12, 2025, New York state lawmakers passed the Responsible AI Safety and Education Act (the RAISE Act), which aims to safeguard against artificial intelligence (AI)-driven disaster scenarios by focusing on the largest AI model developers; the bill now heads to the governor’s desk for final approval. The RAISE Act is the latest state-level legislative effort to regulate AI, a movement that may continue to gain momentum now that a proposed 10-year moratorium on state AI regulation was removed from the recently passed One Big Beautiful Bill.
Background and Core Provisions
Inspired by California’s SB 1047, which Governor Gavin Newsom vetoed in September 2024 over concerns that it could stifle innovation, the RAISE Act aims to prevent so-called “frontier AI models” from contributing to “critical harm.” For purposes of the RAISE Act, “critical harm” means an event in which AI causes the death or injury of more than 100 people, or more than $1 billion in damages to rights in money or property, caused or materially enabled by a large developer’s creation, use, storage, or release of a frontier model through either (1) the creation or use of a chemical, biological, radiological, or nuclear weapon, or (2) an AI model engaging in conduct that (a) occurs with limited human intervention and (b) would, if committed by a human, constitute a crime specified in the penal law requiring intent, recklessness, or gross negligence, or the solicitation or aiding and abetting of such a crime.
Unlike SB 1047, which faced criticism for casting too wide a net over general AI systems, the RAISE Act targets only “frontier” models developed by companies that meet both of the following criteria: (1) the model was trained using more than $100 million in computing resources, or more than $5 million in computing resources where a smaller AI model was trained on a larger AI model and has capabilities similar to that larger model; and (2) the model is made available to New York residents. To the extent the RAISE Act aligns with similar state-level regulations and restrictions, this approach would theoretically leave room for innovation by entities less likely to cause such critical harm, such as startup companies and research organizations.
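The two-pronged applicability test can be sketched as a simple predicate. This is purely illustrative and not legal guidance: the dollar thresholds come from the bill text above, but all class, field, and function names here are hypothetical.

```python
# Illustrative sketch (not legal guidance) of the RAISE Act's two-pronged
# applicability test. Thresholds are taken from the bill text; all names
# in this sketch are hypothetical.
from dataclasses import dataclass

@dataclass
class ModelProfile:
    training_compute_cost_usd: float  # spend on computing resources used for training
    distilled_from_frontier: bool     # trained on a larger frontier model, with similar capabilities
    available_in_new_york: bool       # model is made available to New York residents

def is_covered_frontier_model(m: ModelProfile) -> bool:
    """Return True if the model would meet both RAISE Act criteria."""
    meets_cost_threshold = (
        m.training_compute_cost_usd > 100_000_000
        or (m.distilled_from_frontier and m.training_compute_cost_usd > 5_000_000)
    )
    return meets_cost_threshold and m.available_in_new_york

# A $150M-training-cost model offered to NY residents would be covered:
print(is_covered_frontier_model(ModelProfile(150e6, False, True)))   # True
# The same model kept out of New York would not be:
print(is_covered_frontier_model(ModelProfile(150e6, False, False)))  # False
```

Note how the lower $5 million threshold only applies to distilled models, which is how the bill reaches smaller models that inherit frontier-level capabilities.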
If a company meets both criteria and is therefore subject to the jurisdiction of the RAISE Act, it will need to comply with all the following before deploying any frontier AI model:
- Implement a written safety and security protocol
- Retain an unredacted version of such safety and security protocol for as long as the frontier model is deployed, plus five years
- Conspicuously publish a copy of the safety and security protocol and transmit such protocol to the division of homeland security and emergency services
- Record information on specific tests and test results used in any assessment of the frontier AI model
From a practical perspective, requirements such as recording information on the testing of any frontier AI model may push smaller startups and research organizations out of the market, to the extent that maintaining such records imposes additional and costly overhead.
Enforcement and Exceptions
The RAISE Act empowers the New York attorney general to levy civil penalties of up to $10 million for initial violations and up to $30 million for subsequent violations by noncompliant covered companies. This includes penalties for violations of a developer’s transparency obligations as specified above or as required elsewhere in the RAISE Act, such as the requirement that covered companies retain an independent auditor annually to review compliance with the law. However, covered companies may make “appropriate redactions” to their safety protocols when necessary to protect public safety, safeguard trade secrets, maintain confidential information as required by law, or protect employee or customer privacy.
Looking Ahead
The bill’s fate remains uncertain. Our team is monitoring developments closely, including potential impacts on commercial contracting, compliance obligations, and technology adoption.
Govt. AI Assessment Ranks States’ Readiness, Adoption Levels
An AI readiness assessment released Wednesday by Code for America explores how U.S. state governments are preparing for the AI-powered public-sector transformation and identifies emerging trends within that shift.
Trends highlighted in the analysis include the rise of chief AI officers, investment in training programs, an evolving cybersecurity threat landscape, state-level policymaking, and secure sandbox environments for experimentation.
The Government AI Landscape Assessment explores AI readiness in three areas: leadership and governance, capacity building, and technical infrastructure and capabilities. The resource classifies states’ readiness levels in each of these areas under one of four categories: early, developing, established, or advanced. The early classification includes states that have taken the initial steps in AI adoption, while the advanced classification recognizes states with sophisticated capabilities, frameworks, and approaches.
States leading in readiness, according to this assessment, are Pennsylvania, New Jersey, and Utah, each of which received two “advanced” classifications and one “established” classification.
Each of these states has prioritized AI readiness. Pennsylvania has been testing and measuring AI for impact, and New Jersey is taking an economy-focused approach to AI and has been an early implementer of AI training. Utah has been an early AI adopter and even recently created an AI policy office that aims to answer societal AI questions.
Overall, in the category of leadership and governance, only three states were classified as advanced; 25, roughly half, as established; 16 as developing; and seven as early. Washington, D.C., was counted alongside the states in this assessment. Utah and North Carolina were highlighted for their work in this area.

In AI capacity building, four states were classified as advanced, 10 as established, 23 as developing, and 14 as early. New Jersey and Pennsylvania were highlighted for their work here.
In technical infrastructure and capabilities, three states were classified as advanced, 16 as established, 23 as developing, and nine as early. Colorado and Minnesota were highlighted for their work in this area.
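Since the assessment covers the 50 states plus Washington, D.C., each category's counts should total 51. A quick tally of the figures reported above confirms this (the dictionary keys are my own shorthand, not the assessment's labels):

```python
# Readiness counts per category, as reported in the assessment.
# Category keys are shorthand; the assessment's own labels differ slightly.
counts = {
    "leadership_and_governance": {"advanced": 3, "established": 25, "developing": 16, "early": 7},
    "capacity_building":         {"advanced": 4, "established": 10, "developing": 23, "early": 14},
    "infrastructure":            {"advanced": 3, "established": 16, "developing": 23, "early": 9},
}
for area, tally in counts.items():
    total = sum(tally.values())
    assert total == 51  # 50 states + Washington, D.C.
    print(f"{area}: {total} jurisdictions")
```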
“This analysis demonstrates what many of us know to be true: states are leading the way when it comes to adopting AI to make government more efficient and effective,” Jenn Thom, Code for America’s senior director of data science, said in a statement.
The assessment was created by reviewing public materials, AI-focused legislation and policy, guidance and reports, news coverage, and direct input.
Debate has arisen recently about whether AI policymaking should occur at the state or federal level, with the consensus largely being that both should have a role in regulation. With the removal of a provision to enact a moratorium on state-level AI regulation from the federal budget bill, states retain the authority to enact policy to guide responsible AI use.
Microsoft launches $4B artificial intelligence reskilling institute
Microsoft unveiled a new initiative Wednesday that’s intended to bring artificial intelligence skills to millions of people around the world.
Microsoft Elevate will spend $4 billion in cash and technology donations to philanthropic, educational, and labor organizations over the next four years, as it seeks to accelerate the proliferation of AI technology.
Microsoft makes the AI tool Copilot and is a key partner of OpenAI, the maker of ChatGPT. The company is investing aggressively in the infrastructure needed to power its AI push, pledging to spend $80 billion on data centers this year.
The investments come as Microsoft lays off thousands of employees in its home state of Washington and globally.
RELATED: Latest Microsoft layoffs could hit 9,000 employees
“One of the things that has changed the most dramatically about Microsoft is we’ve moved as a company — as our industry has moved as an industry — from one that spent almost every dollar it earned on employing people to what is in fact the greatest capital and infrastructure investment in the history of global infrastructure,” Microsoft President and Vice Chair Brad Smith said at a launch event in Seattle.
In an interview with KUOW, Smith said that restructuring is “frankly something that should always be hard, but it is something that needs to be done for a company to be successful for many decades and not just a few years.”
Smith said Microsoft Elevate will employ about 300 people, and partner with organizations around the world on a variety of initiatives aimed at increasing AI literacy. The Microsoft Elevate Academy plans to help 20 million people earn AI skilling credentials to be more competitive in an uncertain job market.
“I think in many ways it gives us the opportunity to reach everybody,” Smith said, “and that includes people who will be using and designing AI in the future, say the future of what computer science education becomes, people who are designing AI systems for businesses, but consumers as well, students and teachers who can use AI to better reach and prepare for helping students.”
The initiative also includes the creation of Microsoft’s AI Economy Institute, a think tank of academics that will study the societal impacts of AI.
The effect generative AI will have on education remains a source of much speculation and debate.
RELATED: Learning tool or BS machine? How AI is shaking up higher ed
While some educators are embracing the technology, others are struggling to rein in cheating and question whether the technology could undermine the very premise of education as we know it.
Regardless of the ongoing debate, Microsoft has long been at the forefront of bringing technology into the classroom, first with PCs and now with AI. The company is betting that the resources it is devoting to Microsoft Elevate will help shape a path forward in which AI proves more useful than disruptive, in education and across the economy.
RELATED: AI should be used in class, not feared. That’s the message of these Seattle area teachers
“There are many different skills that we’re all going to need to work together to pursue, but I think there’s also a North Star that should guide us,” Smith said. “It’s a North Star that might sound unusual coming from a tech company, but I think it’s a North Star that matters most. We need to use AI to help us think more, not less.”
Artificial Intelligence and Criminal Exploitation: A New Era of Risk
WASHINGTON, D.C. – The House Judiciary Subcommittee on Crime and Federal Government Surveillance will hold a hearing on Wednesday, July 16, 2025, at 10:00 a.m. ET. The hearing, “Artificial Intelligence and Criminal Exploitation: A New Era of Risk,” will examine the growing threat of Artificial Intelligence (AI)-enabled crime, including how criminals are leveraging AI to conduct fraud, identity theft, child exploitation, and other illicit activities. It will also explore the capabilities and limitations of law enforcement in addressing these evolving threats, as well as potential legislative and policy responses to ensure public safety in the age of AI.
WITNESSES:
- LTC Andrew Bowne, Former Counsel, Department of the Air Force Artificial Intelligence Accelerator at the Massachusetts Institute of Technology
- Ari Redbord, Global Head of Policy, TRM Labs; former Assistant United States Attorney
- Zara Perumal, Co-Founder, Overwatch Data; former member, Threat Analysis Department, Google