AI Insights
New York Passes RAISE Act—Artificial Intelligence Safety Rules
The New York legislature recently passed the Responsible AI Safety and Education Act (SB6953B) (“RAISE Act”). The bill awaits signature by New York Governor Kathy Hochul.
Applicability and Relevant Definitions
The RAISE Act applies to “large developers,” defined as any person who has trained at least one frontier model and has spent over $100 million in aggregate compute costs training frontier models. (A minimal code sketch of these thresholds follows the definitions below.)
- “Frontier model” means either (1) an artificial intelligence (AI) model trained using greater than 10^26 computational operations (e.g., integer or floating-point operations), the compute cost of which exceeds $100 million; or (2) an AI model produced by applying knowledge distillation to a frontier model, provided that the compute cost of the distilled model exceeds $5 million.
- “Knowledge distillation” is defined as any supervised learning technique that uses a larger AI model or the output of a larger AI model to train a smaller AI model with similar or equivalent capabilities as the larger AI model.
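Read together, these definitions reduce to a pair of numeric threshold tests. The following is a minimal sketch in Python of how the two prongs combine; all names (`Model`, `is_frontier_model`, `is_large_developer`) and field choices are hypothetical illustrations, not anything drawn from the bill's text.

```python
from dataclasses import dataclass

# Thresholds from the RAISE Act's definitions (SB6953B).
FRONTIER_OPS_THRESHOLD = 1e26            # training computational operations
FRONTIER_COST_THRESHOLD = 100_000_000    # USD compute cost, frontier prong
DISTILLED_COST_THRESHOLD = 5_000_000     # USD compute cost, distillation prong

@dataclass
class Model:
    training_ops: float            # integer or floating-point operations used in training
    compute_cost_usd: float        # compute cost of training this model
    distilled_from_frontier: bool  # produced by knowledge distillation of a frontier model

def is_frontier_model(m: Model) -> bool:
    """True if the model meets either prong of the 'frontier model' definition."""
    trained_at_scale = (m.training_ops > FRONTIER_OPS_THRESHOLD
                        and m.compute_cost_usd > FRONTIER_COST_THRESHOLD)
    distilled_at_scale = (m.distilled_from_frontier
                          and m.compute_cost_usd > DISTILLED_COST_THRESHOLD)
    return trained_at_scale or distilled_at_scale

def is_large_developer(models: list[Model], aggregate_spend_usd: float) -> bool:
    """'Large developer': has trained at least one frontier model and spent
    over $100 million in aggregate compute costs training frontier models."""
    return (any(is_frontier_model(m) for m in models)
            and aggregate_spend_usd > FRONTIER_COST_THRESHOLD)
```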
The RAISE Act imposes the following obligations and restrictions on large developers:
- Prohibition on Frontier Models that Create Unreasonable Risk of Critical Harm: The RAISE Act prohibits large developers from deploying a frontier model if doing so would create an unreasonable risk of “critical harm.”
- “Critical harm” is defined as the death or serious injury of 100 or more people, or at least $1 billion in damage to rights in money or property, caused or materially enabled by a large developer’s use, storage, or release of a frontier model through (1) the creation or use of a chemical, biological, radiological or nuclear weapon; or (2) an AI model engaging in conduct that (i) acts with no meaningful human intervention and (ii) would, if committed by a human, constitute a crime under the New York Penal Law that requires intent, recklessness, or gross negligence, or the solicitation or aiding and abetting of such a crime.
- Pre-Deployment Documentation and Disclosures: Before deploying a frontier model, large developers must:
- (1) implement a written safety and security protocol;
- (2) retain an unredacted copy of the safety and security protocol, including records and dates of any updates or revisions, for as long as the frontier model is deployed plus five years;
- (3) conspicuously publish a redacted copy of the safety and security protocol and provide a copy of such redacted protocol to the New York Attorney General (“AG”) and the Division of Homeland Security and Emergency Services (“DHS”) (as well as grant the AG access to the unredacted protocol upon request);
- (4) record and retain for as long as the frontier model is deployed plus five years information on the specific tests and test results used in any assessment of the frontier model that provides sufficient detail for third parties to replicate the testing procedure; and
- (5) implement appropriate safeguards to prevent unreasonable risk of critical harm posed by the frontier model.
- Safety and Security Protocol Annual Review: A large developer must review its safety and security protocol annually to account for changes to the capabilities of its frontier models and to industry best practices, and must make any necessary modifications to the protocol. For material modifications, the large developer must conspicuously publish a copy of the updated protocol with appropriate redactions (as described above).
- Reporting Safety Incidents: A large developer must disclose each safety incident affecting a frontier model to the AG and DHS within 72 hours of learning of the safety incident, or of facts sufficient to establish a reasonable belief that a safety incident occurred (a sketch of this timeline follows the list below).
- “Safety incident” is defined as a known incidence of critical harm or one of the following incidents that provides demonstrable evidence of an increased risk of critical harm: (1) a frontier model autonomously engaging in behavior other than at the request of a user; (2) theft, misappropriation, malicious use, inadvertent release, unauthorized access, or escape of the model weights of a frontier model; (3) the critical failure of any technical or administrative controls, including controls limiting the ability to modify a frontier model; or (4) unauthorized use of a frontier model. The disclosure must include (1) the date of the safety incident; (2) the reasons the incident qualifies as a safety incident; and (3) a short and plain statement describing the safety incident.
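To make the reporting timeline concrete, here is a small sketch, again in Python with hypothetical names, of the 72-hour clock and the three required disclosure elements; it is an illustration of the rule as summarized above, not a compliance tool.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

DISCLOSURE_WINDOW = timedelta(hours=72)  # clock starts when the developer learns of the incident

@dataclass
class SafetyIncidentDisclosure:
    incident_date: datetime   # (1) date of the safety incident
    qualifying_reason: str    # (2) why the incident qualifies as a safety incident
    description: str          # (3) short and plain statement describing the incident

def disclosure_deadline(learned_at: datetime) -> datetime:
    """Latest time to notify the AG and DHS, measured from when the developer
    learned of the incident (or of facts establishing a reasonable belief
    that one occurred)."""
    return learned_at + DISCLOSURE_WINDOW
```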
If enacted, the RAISE Act would take effect 90 days after being signed into law.
AI Insights
Chip Firms in Malaysia Pause Investment Plans on Tariff Angst
Chip firms in Malaysia are holding back on investment and expansion as they await clarity on tariffs from the US, according to Malaysia Semiconductor Industry Association President Wong Siew Hai.
AI Insights
Witcher Game Maker Among Europe’s Priciest Stocks as Hype Grows
Optimism over a distant video-game launch has turned a Polish studio developing the title into one of Europe’s most richly valued companies, topping even hot sectors such as defense and electrification by one measure.
AI Insights
Tampa General Hospital, USF developing artificial intelligence to monitor NICU baby’s pain in real-time
TAMPA, Fla. – Researchers are looking to use artificial intelligence to detect when a baby is in pain.
The backstory:
A baby’s cry is enough to alert anyone that something’s wrong. But some of the most critical babies in hospital care can’t cry when they are hurting.
“As a bedside nurse, it is very hard. You are trying to read from the signals from the baby,” said Marcia Kneusel, a clinical research nurse with TGH and USF Muma NICU.
Kneusel, who has worked in the neonatal intensive care unit for more than 20 years, said nurses read vital signs and rely on their experience to care for the infants.
“However, it really, it’s not as clearly defined as if you had a machine that could do that for you,” she said.
Big picture view:
That’s where a study by the University of South Florida comes in. USF is working with TGH to develop artificial intelligence to detect a baby’s pain in real-time.
“We’re going to have a camera system basically facing the infant. And the camera system will be able to look at the facial expression, body motion, and hear the crying sound, and also getting the vital signal,” said Yu Sun, a robotics and AI professor at USF.
Sun heads up the research on USF’s AI study, which he said is part of a two-year, $1.2 million National Institutes of Health grant.
He said the study will capture baseline data by recording video of the babies before a procedure, then record them for 72 hours afterward. That footage will be used to build the AI model, teaching the computer to read the same basic signals a nurse watches for to pinpoint pain.
“Then an alarm will be sent to the nurse. The nurse will come and check the situation, decide how to treat the pain,” said Sun.
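The article does not detail how the system combines these signals, but the pipeline Sun describes (camera-based facial expression and body motion, cry audio, and vital signs feeding an alarm to the nurse) can be sketched roughly as below. Every name, the equal weighting, and the alarm threshold are assumptions for illustration only, not details of the USF system.

```python
from dataclasses import dataclass

@dataclass
class ModalityScores:
    """Hypothetical per-modality pain scores in [0, 1], one per signal the article mentions."""
    facial_expression: float
    body_motion: float
    cry_audio: float
    vital_signs: float

ALARM_THRESHOLD = 0.7  # assumed cutoff for paging the nurse

def fused_pain_score(s: ModalityScores) -> float:
    """Equal-weight average of the four signals; the real fusion method is not described."""
    return (s.facial_expression + s.body_motion + s.cry_audio + s.vital_signs) / 4

def should_alert_nurse(s: ModalityScores) -> bool:
    """Trigger the alarm Sun describes when the fused score crosses the threshold."""
    return fused_pain_score(s) >= ALARM_THRESHOLD
```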
What they’re saying:
Kneusel said there’s been a lot of change over the years in the NICU world with how medical professionals handle infant pain.
“There was a time period we just gave lots of meds, and then we realized that that wasn’t a good thing. And so we switched to as many non-pharmacological agents as we could, but then, you know, our baby’s in pain. So, I’ve seen a lot of change,” said Kneusel.
Why you should care:
Nurses like Kneusel said the study could change their care for the better.
“I’ve been in this world for a long time, and these babies are dear to me. You really don’t want to see them in pain, and you don’t want to do anything that isn’t in their best interest,” said Kneusel.
USF said there are 120 babies participating in the study, not just at TGH but also at Stanford University Hospital in California and Inova Hospital in Virginia.
What’s next:
Sun said the study is in its first phase, gathering the technological data and developing the AI model. The next phase will be clinical trials for real-world testing in hospital settings, funded through a $4 million NIH grant, he said.
The Source: The information used in this story was gathered by FOX13’s Briona Arradondo from the University of South Florida and Tampa General Hospital.