Senate Democrats’ proposed fix to the first-in-the-nation artificial intelligence regulation dealing with “algorithmic discrimination” won approval from its first committee on the opening day of the 2025 special session.
Senate Bill 4 — sponsored by Senate Majority Leader Robert Rodriguez, D-Denver, who is also the author of the 2024 law that Gov. Jared Polis and the attorney general want to see amended — won a 4-3 party-line vote from the Senate Business Affairs and Labor Committee on Thursday.
Backers have described SB 24-205 as seeking to establish guardrails around the use of artificial intelligence, primarily in employment, health care, education, and government practices, where, they said, the risk of bias or discrimination exists.
While Polis signed the AI law last year, he pushed for adjustments to address lingering worries. In his May 17 signing statement, he asked lawmakers to keep working on it before its 2026 implementation date.
“I am concerned about the impact this law may have on an industry fueling critical technological advancements,” Polis said. State-level government regulation, he added, can “tamper with innovation and deter competition.”
Rodriguez pushed a measure earlier this year, but the bill, which was introduced just days before the 2025 regular session ended, failed to win support. Lawmakers also tried to delay the 2024 law’s implementation date of Feb. 1, 2026, but a filibuster by Rep. Brianna Titone, D-Arvada, a co-sponsor of the 2024 law and Rodriguez’s 2025 bill, ended that effort.
The day for SB 4, the special session bill, began with a big problem — its cost to the Colorado state government. In a session when lawmakers are faced with an $800 million general fund shortfall, anything that comes with a price could be in trouble.
The bill’s fiscal analysis said the judicial department and the Office of Information Technology would need an additional $4.5 million in general funds in the current fiscal year, and $7 million in outyears, to implement it. The analysis did not provide an estimate for the costs to other public entities, a worry others have raised before.
In a letter to the governor last month, a coalition of schools, colleges, tech, and medical organizations said the 2024 law created “unexpected and costly problems for organizations simply using everyday AI-enabled software, from K-12 schools and universities to hospitals, banks, and local governments.”
Rodriguez offered four amendments, including one dealing with the bill’s requirement that a deployer — anyone doing business in Colorado that deploys an algorithmic decision system — provide disclosures to anyone affected by a decision that AI system makes with a significant effect on education enrollment, employment, financial decisions, government services, health care, housing, insurance or legal services.
That disclosure would also require the deployer to provide a list of the types of personal characteristics of the individual that were collected by the AI system.
Rodriguez’s amendment on that section replaced the requirement for the disclosure for public entities with a provision allowing that information to be obtained through an open records request.
None of his amendments, all of which passed, appears to address the bill’s cost.
Supporters called the bill a “compromise.”
Hillary Jorgensen of the Colorado Cross-Disability Coalition said there must be transparency for people who interact with AI. The disability community has a very high unemployment rate, and increased use of AI tools in employment could lead to people with disabilities not getting hired, she said.
SB 4 is a reasonable alternative, she added.
Charles Brennan of the Colorado Center on Law and Policy told the committee the public wants these protections — people want to know what AI data is being used to make decisions and want those errors fixed when they happen. In other states, people have lost health care or food stamps assistance because of AI errors, Brennan said.
In Colorado, 90% of letters sent out by the Colorado Benefit Management System, the computer system that determines eligibility for Medicaid, had at least one error, and that puts people’s health and livelihoods at risk, he said.
The tech community, which raised objections to the 2024 law, pleaded for a delay.
This bill needs substantial amendments to be workable, said Andrew Wood of TechNet.
The definition of an algorithmic decision system — anything that uses data to create an outcome — is overly broad, as is the definition of “deployer,” which should be limited to entities that actually design or materially modify an algorithmic system, not just those that use or host one, he said.
The liability standard is also problematic, Wood said.
It holds developers and deployers liable for actions outside of their control, he said, adding that the bill also needs to be better aligned with the state’s privacy act.
Jennifer Prusack of Aurora Public Schools has seen how students use AI for spelling or for driving tests. Those uses show what’s possible, she said, but they also highlight the need for careful safeguards. She said she is concerned the bill will curtail student innovation, adding that regulations demanding technically impossible disclosures risk driving out tools for families and students.
Michelle Bourgeois asked for amendments on behalf of the Colorado Association of School Executives. Under the bill, students and school districts could be held jointly liable, which could discourage students from innovating. Any change should not stifle innovation, she urged.
The Colorado Chamber of Commerce’s Rachel Beck told the committee AI has great potential, noting 58% of small businesses now use some form of the technology. As a result, it’s difficult to overstate the implications of the legislation, she said. The scope of the bill — to prevent discrimination — rests largely on defining key terms so that businesses have a clear path to compliance, she added.
One concern they’re hearing from developers and deployers is that each wants to be responsible only for the parts it controls — developers control the underlying systems but not their customization, and deployers don’t set up the framework, she said.
The scope of the bill is still too broad and doesn’t focus only on high risk for consumers, she said.
Requiring disclosures will drive up costs and drive down innovation, Beck added.
The Virginia-based Chamber of Progress, which bills itself as a “left-leaning tech policy coalition,” sent letters to lawmakers and the governor this week, also asking for a delay in the implementation of the 2024 law.
The 2024 law imposes “layers of red tape, legal review, and compliance costs that smaller companies simply cannot afford,” the coalition said.
Rodriguez’s bill throws that structure out, but it does not solve the underlying problem, the group said, adding, “Instead, it shifts to a transparency-only model, where companies are forced to provide endless disclosures to consumers and deployers.”
That approach offers little in the way of real protection and instead seeks mountains of paperwork, the group said.
Both laws are “compliance theater: one drowning in risk frameworks, the other drowning in disclosure requirements. Neither creates a workable or effective path forward,” the group said.
Senate Bill 4 now heads to the Senate Appropriations Committee, along with its hefty fiscal cost.
A second AI bill in the Senate, offered by Sen. Mark Baisley, R-Woodland Park, died in the Senate State, Veterans and Military Affairs Committee on Thursday afternoon, also on a party-line vote.