
AI Research

Artificial intelligence bill clears first Senate special session committee


Senate Democrats’ proposed fix to the first-in-the-nation artificial intelligence regulation dealing with “algorithmic discrimination” won approval from its first committee on the first day of the 2025 special session.

Senate Bill 4 — sponsored by Senate Majority Leader Robert Rodriguez, D-Denver, who is also the author of the 2024 law that Gov. Jared Polis and the attorney general want to see amended — won a 4-3 party-line vote from the Senate Business Affairs and Labor Committee on Thursday. 

Backers have described SB 24-205 as seeking to establish guardrails around the use of artificial intelligence, primarily in employment, health care, education, and government practices, where, they said, the risk of bias or discrimination exists.

While Gov. Jared Polis signed the AI law last year, he pushed for adjustments to address lingering worries. In his May 17 signing statement, he asked lawmakers to keep working on the law before its 2026 implementation date.

“I am concerned about the impact this law may have on an industry fueling critical technological advancements,” Polis said. State-level government regulation, he added, can “tamper with innovation and deter competition.”

Rodriguez pushed a measure earlier this year, but the bill, which was introduced just days before the 2025 regular session ended, failed to win support. Lawmakers also tried to delay the 2024 law’s implementation date of Feb. 1, 2026, but a filibuster by Rep. Brianna Titone, D-Arvada, a co-sponsor of the 2024 law and Rodriguez’s 2025 bill, ended that effort.

The day for SB 4, the special session bill, began with a big problem — its cost to the Colorado state government. In a session when lawmakers are faced with an $800 million general fund shortfall, anything that comes with a price could be in trouble.

The bill’s fiscal analysis said the judicial department and the Office of Information Technology would need an additional $4.5 million in general funds in the current fiscal year, and $7 million in outyears, to implement it. The analysis did not provide an estimate for the costs to other public entities, a worry others have raised before.

In a letter to the governor last month, a coalition of schools, colleges, tech, and medical organizations said the 2024 law created “unexpected and costly problems for organizations simply using everyday AI-enabled software, from K-12 schools and universities to hospitals, banks, and local governments.”  

Rodriguez offered four amendments, including one that deals with the bill’s requirement that a deployer — anyone doing business in Colorado that deploys an algorithmic decision system — provide disclosures to anyone affected by a decision made by that AI system that has a significant effect on education enrollment, employment, financial decisions, government services, health care, housing, insurance or legal services.

That disclosure would also require the deployer to provide a list of the types of personal characteristics of the individual that were collected by the AI system.

Rodriguez’s amendment on that section replaced the requirement for the disclosure for public entities with a provision allowing that information to be obtained through an open records request.

None of his amendments, all of which passed, appears to address the bill’s cost.

Supporters called the bill a “compromise.”

Hillary Jorgensen of the Colorado Cross-Disability Coalition said there must be transparency for people who interact with AI. The disability community has a very high unemployment rate, and increased use of AI tools in employment could lead to people with disabilities not getting hired, she said.

SB 4 is a reasonable alternative, she added.

Charles Brennan of the Colorado Center on Law and Policy told the committee the public wants these protections — people want to know what AI data is being used to make decisions and want those errors fixed when they happen. In other states, people have lost health care or food stamps assistance because of AI errors, Brennan said.

In Colorado, 90% of letters sent out by the Colorado Benefit Management System, the computer system that determines eligibility for Medicaid, had at least one error, and that puts people’s health and livelihoods at risk, he said.

The tech community, which raised objections to the 2024 law, pleaded for a delay.

This bill needs substantial amendments to be workable, said Andrew Wood of TechNet.

The definition of an algorithmic decision system — anything that uses data to create an outcome — is overly broad, as is the definition of “deployer,” which should be limited to entities that actually design or materially modify an algorithmic system, not just those who use or host one, he said.

The liability standard is also problematic, Wood said.

It holds developers and deployers liable for actions outside of their control, he said, adding that the bill also needs to be better aligned with the state’s privacy act.

Jennifer Prusack of Aurora Public Schools has seen how students use AI for spelling or for driving tests. Those examples show what’s possible, she said, but they also highlight the need for careful safeguards. She said she is concerned the bill will curtail student innovation. Regulations that go so far as to demand technically impossible disclosures risk driving out tools for families and students, she added.

Michelle Bourgeois asked for amendments on behalf of the Colorado Association of School Executives. Under the bill, students and school districts could be held jointly liable, which could discourage students from innovating. Any change should not stifle innovation, she urged.

The Colorado Chamber of Commerce’s Rachel Beck told the committee AI has great potential, noting 58% of small businesses now use some form of the technology. As a result, it’s difficult to overstate the implications of the legislation, she said. The scope of the bill — to prevent discrimination — rests largely on defining key terms so that businesses have a clear path to compliance, she added.

One of the concerns they’re hearing from developers and deployers is that they want to be responsible for the parts they have control over — developers control systems and don’t control customization, and deployers don’t set up the framework, she said.

The scope of the bill is still too broad and doesn’t focus only on high risk for consumers, she said. 

Requiring disclosures will drive up costs and drive down innovation, Beck added.

The Virginia-based Chamber of Progress, which bills itself as a “left-leaning tech policy coalition,” sent letters to lawmakers and the governor this week, also asking for a delay in the implementation of the 2024 law. 

The 2024 law imposes “layers of red tape, legal review, and compliance costs that smaller companies simply cannot afford,” the coalition said.

Rodriguez’s bill throws that structure out, but it does not solve the underlying problem, the group said, adding, “Instead, it shifts to a transparency-only model, where companies are forced to provide endless disclosures to consumers and deployers.”

That approach offers little in the way of real protection and instead seeks mountains of paperwork, the group said.

Both laws are “compliance theater: one drowning in risk frameworks, the other drowning in disclosure requirements. Neither creates a workable or effective path forward,” the group said.

Senate Bill 4 now heads to the Senate Appropriations Committee, along with its hefty fiscal cost. 

A second AI bill in the Senate, offered by Sen. Mark Baisley, R-Woodland Park, died in the Senate State, Veterans and Military Affairs Committee on Thursday afternoon, also on a party-line vote.




AI Research

If I Could Only Buy 1 Artificial Intelligence (AI) Chip Stock Over The Next 10 Years, This Would Be It (Hint: It’s Not Nvidia)



While Nvidia continues to capture headlines, a critical enabler of the artificial intelligence (AI) infrastructure boom may be better positioned for long-term gains.

When investors debate the future of the artificial intelligence (AI) trade, the conversation generally finds its way back to the usual suspects: Nvidia, Advanced Micro Devices, and cloud hyperscalers like Microsoft, Amazon, and Alphabet.

Each of these companies is racing to design GPUs or develop custom accelerators in-house. But behind this hardware, there’s a company that benefits no matter which chip brand comes out ahead: Taiwan Semiconductor Manufacturing (TSM -3.05%).

Let’s unpack why Taiwan Semi is my top AI chip stock over the next 10 years, and assess whether now is an opportune time to scoop up some shares.

Agnostic to the winner, leveraged to the trend

As the world’s leading semiconductor foundry, TSMC manufactures chips for nearly every major AI developer — from Nvidia and AMD to Amazon’s custom silicon initiatives, dubbed Trainium and Inferentia.

Unlike many of its peers in the chip space that rely on new product cycles to spur demand, Taiwan Semi’s business model is fundamentally agnostic. Whether demand is allocated toward GPUs, accelerators, or specialized cloud silicon, all roads lead back to TSMC’s fabrication capabilities.

With nearly 70% market share in the global foundry space, Taiwan Semi’s dominance is hard to ignore. Such a commanding lead over the competition provides the company with unmatched structural demand visibility — a trend that appears to be accelerating as AI infrastructure spend remains on the rise.


Scaling with more sophisticated AI applications

At the moment, AI development is still concentrated on training and refining large language models (LLMs) and embedding them into downstream software applications.

The next wave of AI will expand into far more diverse and demanding use cases — autonomous systems, robotics, and quantum computing — that remain in their infancy. At scale, these workloads will place greater demands on silicon than today’s chips can support.

Meeting these demands doesn’t simply require additional investments in chips. Rather, it requires chips engineered for new levels of efficiency, performance, and power management. This is where TSMC’s competitive advantages begin to compound.

With each successive generation of process technology, the company has a unique opportunity to widen the performance gap between itself and rivals like Samsung or Intel.

Since Taiwan Semi already has such a large footprint in the foundry landscape, next-generation design complexities give the company a chance to further lock in deeper, stickier customer relationships.

TSMC’s valuation and the case for expansion

Taiwan Semi may trade at a forward price-to-earnings (P/E) ratio of 24, but dismissing the stock as “expensive” overlooks the company’s extraordinary positioning in the AI realm. To me, the company’s valuation reflects a robust growth outlook, improving earnings prospects, and a declining risk premium.

TSM PE Ratio (Forward) data by YCharts

Unlike many of its semiconductor peers, which are vulnerable to cyclicality headwinds, TSMC has become an indispensable utility for many of the world’s largest AI developers, evolving into one of the backbones of the ongoing infrastructure boom.

The scale of investment behind current AI infrastructure is jaw-dropping. Hyperscalers are investing staggering sums to expand and modernize data centers, and at the heart of each new buildout is an unrelenting demand for more chips. Moreover, each of these companies is exploring more advanced use cases that will, at some point, require next-generation processing capabilities.

These dynamics position Taiwan Semi at the crossroads of immediate growth and enduring long-term expansion, as AI infrastructure swiftly evolves from a constant driver of growth today into a multidecade secular theme.

TSMC’s manufacturing dominance ensures that its services will continue to witness robust demand for years to come. For this reason, I think Taiwan Semi is positioned to experience further valuation expansion over the next decade as the infrastructure chapter of the AI story continues to unfold.

While there are many great opportunities in the chip space, TSMC stands alone. I see it as perhaps the most distinctive, durable semiconductor stock to own amid a volatile technology landscape over the next several years.

Adam Spatacco has positions in Alphabet, Amazon, Microsoft, and Nvidia. The Motley Fool has positions in and recommends Advanced Micro Devices, Alphabet, Amazon, Intel, Microsoft, Nvidia, and Taiwan Semiconductor Manufacturing. The Motley Fool recommends the following options: long January 2026 $395 calls on Microsoft, short August 2025 $24 calls on Intel, short January 2026 $405 calls on Microsoft, and short November 2025 $21 puts on Intel. The Motley Fool has a disclosure policy.





AI Research

Researchers train AI to diagnose heart failure in rural patients using low-tech electrocardiograms



WVU computer scientists are training AI models to diagnose heart failure using data generated by low-tech equipment widely available in rural Appalachian medical practices. Credit: WVU/Micaela Morrissette

Concerned about the ability of artificial intelligence models trained on data from urban demographics to make the right medical diagnoses for rural populations, West Virginia University computer scientists have developed several AI models that can identify signs of heart failure in patients from Appalachia.

Prashnna Gyawali, assistant professor in the Lane Department of Computer Science and Electrical Engineering at the WVU Benjamin M. Statler College of Engineering and Mineral Resources, said heart failure—a chronic, persistent condition in which the heart cannot pump enough blood to meet the body’s need for oxygen—is one of the most pressing national and global health issues, and one that hits rural regions of the U.S. especially hard.

Despite the outsized impact of heart failure on rural populations, AI models are currently being trained to diagnose the disease using data representing patients from urban and suburban areas like Stanford, California, Gyawali said.

“Imagine Jane Doe, a 62-year-old woman living in a rural Appalachian community,” he suggested. “She has limited access to specialty care, relies on a small local clinic, and her lifestyle, diet and health history reflect the realities of her environment: high physical labor, minimal preventive care, and increased exposure to environmental risk factors like coal dust or poor air quality. Jane begins to experience fatigue and shortness of breath—symptoms that could point to heart failure.

“An AI system, trained primarily on data from urban hospitals in more affluent, coastal areas, evaluates Jane’s lab results. But because the system was not trained on patients who share Jane’s socioeconomic and environmental context, it fails to recognize her condition as urgent or abnormal,” Gyawali said. “This is why this work matters. By training AI models on data from West Virginia patients, we aim to ensure people like Jane receive accurate diagnoses, no matter where they live or how their lives differ from national averages.”

The researchers identified the AI models that were most accurate at diagnosing heart failure in an anonymized sample of more than 55,000 patients who received medical care in West Virginia. They also pinpointed the exact parameters for providing the AI models with data that most enhanced diagnostic accuracy. The findings appear in Scientific Reports, a Nature portfolio journal.

Doctoral student Alina Devkota emphasized they trained the AI models to work from patients’ electrocardiogram results, rather than the echocardiogram readings typical for patient data from urban areas.

Electrocardiograms rely on round electrodes stuck to the patient’s torso to record electrical signals from the heart. According to Devkota, they don’t require specialized equipment or specialized training to operate, but they still provide valuable insights into heart function.

“One way to diagnose heart failure is by measuring the ‘ejection fraction,’ or how much blood is pumped out of the heart with every beat, and the gold standard for doing that is echocardiography, which uses ultrasound to create images of the heart and the blood flowing through its valves,” she said.

“But echocardiography is expensive, time-consuming and often unavailable to patients in the very same rural Appalachian states that have the highest prevalence of heart failure across the nation. West Virginia, for example, ranks first in the U.S. for the prevalence of heart attack, but many West Virginians don’t have local access to high-tech echocardiograms. They do have access to inexpensive electrocardiograms, so we tested whether AI models could use electrocardiogram readings to predict a patient’s ejection fraction.”
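The ejection fraction the researchers are trying to estimate has a simple arithmetic definition. As a minimal sketch, with made-up example volumes rather than any patient data, it might be computed like this:

```python
# Ejection fraction (EF) from end-diastolic and end-systolic volumes.
# The volumes below are hypothetical example numbers for illustration only.

def ejection_fraction(edv_ml: float, esv_ml: float) -> float:
    """EF (%) = stroke volume / end-diastolic volume * 100."""
    if edv_ml <= 0 or esv_ml < 0 or esv_ml > edv_ml:
        raise ValueError("volumes must satisfy 0 <= ESV <= EDV and EDV > 0")
    return (edv_ml - esv_ml) / edv_ml * 100.0

# A normal EF is roughly 50-70%; heart failure with reduced ejection
# fraction is commonly defined as an EF below about 40%.
print(ejection_fraction(120.0, 50.0))  # ~58.3, within the normal range
print(ejection_fraction(120.0, 80.0))  # ~33.3, a reduced EF
```

Echocardiography measures those volumes directly from images; the WVU models instead try to infer the resulting fraction from electrical signals alone.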

Devkota, Gyawali and their colleagues trained several AI models on patient records from 28 hospitals across West Virginia. The AI models used either “deep learning,” which relies on multilayered neural networks, or “non-deep learning,” which relies on simpler algorithms, to analyze the patient records and draw conclusions.

The researchers found the models, particularly one called ResNet, did best at correctly predicting a patient’s ejection fraction based on data from 12-lead electrocardiograms, with the results suggesting that a larger dataset for training would yield even better results. They also found that providing the AI models with specific “leads,” or combinations of data from different electrode pairs, affected how accurate the models’ ejection fraction predictions were.
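The residual ("ResNet-style") idea named above can be sketched in a few lines. This is an illustrative toy with random, untrained weights and a hypothetical regression head — not the study's actual architecture or data — showing only the core trick: each block computes x + F(x), which lets deep stacks train stably.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d_same(x, w):
    """Same-padded 1D convolution; x: (in_ch, T), w: (out_ch, in_ch, k)."""
    out_ch, in_ch, k = w.shape
    T = x.shape[1]
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad)))
    out = np.zeros((out_ch, T))
    for o in range(out_ch):
        for t in range(T):
            out[o, t] = np.sum(w[o] * xp[:, t:t + k])
    return out

def residual_block(x, w1, w2):
    """x + F(x): two convolutions with a ReLU, plus the skip connection."""
    h = np.maximum(conv1d_same(x, w1), 0.0)  # conv -> ReLU
    return x + conv1d_same(h, w2)            # skip connection preserves x

# 12 leads, 100 samples of synthetic "ECG" signal (random, for shape only).
ecg = rng.standard_normal((12, 100))
w1 = rng.standard_normal((12, 12, 5)) * 0.1
w2 = rng.standard_normal((12, 12, 5)) * 0.1
head = rng.standard_normal(12) * 0.1         # hypothetical regression head

features = residual_block(ecg, w1, w2).mean(axis=1)  # global average pool
ef_estimate = float(features @ head)                 # scalar EF prediction
print(ef_estimate)
```

In a real model, many such blocks would be stacked and the weights learned by regressing against echocardiogram-derived ejection fractions; the skip connections are what make that depth trainable.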

Gyawali said while AI models are not yet being used in clinical practice due to reliability concerns, training an AI to successfully estimate ejection fraction from electrocardiogram signals could soon give clinicians an edge in protecting patients’ cardiac health.

“Heart failure affects more than six million Americans today, and factors like our aging population mean the risk is growing rapidly—approximately 1 in 4 people alive today will experience heart failure during their lifetimes. The prevalence is even higher in rural Appalachia, so it’s critical the people here do not continue to be overlooked.”

Additional WVU contributors to the research included Rukesh Prajapati, graduate research assistant; Amr El-Wakeel, assistant professor; Donald Adjeroh, professor and chair for computer science; and Brijesh Patel, assistant professor in the WVU Health Sciences School of Medicine.

More information:
AI analysis for ejection fraction estimation from 12-lead ECG, Scientific Reports (2025). DOI: 10.1038/s41598-025-97113-0

Citation:
Researchers train AI to diagnose heart failure in rural patients using low-tech electrocardiograms (2025, August 31)
retrieved 31 August 2025
from https://medicalxpress.com/news/2025-08-ai-heart-failure-rural-patients.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no
part may be reproduced without the written permission. The content is provided for information purposes only.







AI Research

Should artificial intelligence be embraced in the classroom? – CBS News








