AI Research
Colorado lawmakers battle over artificial intelligence law

As Colorado lawmakers return to the Capitol to fill a substantial budget hole this week, a second — but similarly complicated — fight is set to unfold over AI.
Lawmakers are preparing four bills to amend Colorado’s first-in-the-nation artificial intelligence regulations, which seek to prevent discrimination when companies use AI to make various decisions.
They are part of a law that has yet to take effect and has faced significant industry pushback. When Gov. Jared Polis called the special session that starts Thursday, a $783 million state budget gap opened by the federal tax bill was the main motivation. But he also charged lawmakers with considering changes to the AI law.
And while Democratic lawmakers who constitute the legislative majority are largely aligned on how to fill the budget hole, there’s significantly less agreement on how to answer Polis’ call to amend the AI regulations.
Two of the new bills, each backed primarily by Democrats, are the likeliest to advance. But their aims largely run counter to one another: One bill is accused of doing too much and implementing unworkable rules. The other is criticized for doing too little to protect consumers and to regulate a burgeoning — and affluent — industry.
Given that they start in opposite chambers and are backed by different power centers in the Capitol, they’re likely to collide in the coming multiday session.
“It’s a mess,” said David Seligman, an attorney general candidate who runs the nonprofit law firm Towards Justice.
Seligman is supporting a bill that would slim down the already-adopted regulations, which were passed in 2024 and will otherwise come into effect in February. Backed by the initial law’s sponsors, the new proposal would still require AI developers and the companies that use their technology to inform consumers of AI’s presence in a variety of services, ranging from chatbots to job and university applications and health care records. It would give people affected by those AI services the ability to challenge and correct information provided to an employer or a potential landlord.
A company or government service that uses the technology to screen applicants or make other decisions would also be required to disclose characteristics that influenced an AI-driven decision to a person affected by it.
“That’s really what the bottom line is: It’s about protecting citizens from the unforeseen and making sure developers are thinking about what their product is going to do — and having some plans about how they can figure out how to remedy that for that person,” said Rep. Brianna Titone, an Arvada Democrat who’s set to sponsor the bill with the backing of progressive lawmakers and allied groups.
The competing bill, which is backed by more moderate Democrats and at least one Republican, similarly would require that people be notified when they’re interacting with or being screened by AI. It would also make clear that AI developers are subject to the state’s antidiscrimination and consumer-protection laws.
Under its provisions, only the state’s attorney general — not individual users — could sue either tech companies or the agencies and companies that use their technology for violating the consumer-protection law.
Rep. William Lindstedt, a Broomfield Democrat, said the goal was to provide transparency while ensuring that companies that use AI for various services aren’t immediately held liable for problems that come from the underlying tech.
“That’s what we’re trying to do: Find a middle ground that protects consumers and also protects our innovative economy here in Colorado,” he said. “I think if you took the bill that I have on the page right now and compared it to every other state, it would still be one of the most stringent, strongest consumer protections in the country.”
Scrambles
The brewing policy scuffle is the latest turn in an unhappy 15-month honeymoon for the state’s marquee AI regulations.
Even as he signed them into law, Polis called for the guardrails to be reformed amid industry critiques that they were unworkable. A task force then met for months to find a path forward, culminating in legislation introduced earlier this year during the regular session.
But after opponents of the regulations tried to further clip them, Sen. Robert Rodriguez, the Senate’s majority leader and the regulations’ primary backer, killed the reform bill entirely.
That prompted a last-minute scramble to delay the regulations’ February start date so that negotiations could restart. The effort, advanced by Lindstedt, then collapsed under the weight of a late-night filibuster by Titone.
Further scrambling ensued. After Rodriguez euthanized his bill, Polis and five other Democrats — including Denver Mayor Mike Johnston and U.S. Sen. Michael Bennet — sent the legislature a letter to “implore” them to delay the regulations.
In July, a coalition that included school boards, airlines and tech companies sent Polis a letter asking him to include reform of the regulations in his coming special session announcement.
Battle lines
Now the debate returns as lawmakers gird for what is already likely to be a bruising fight over state revenues.
And the clock is ticking: The regulations are alive and well in Colorado’s statute books, and they take effect in six months.
Titone and supporters of her bill have said Lindstedt’s approach is pro-tech fluff. The state’s consumer-protection laws already cover AI, they argue, and that bill would limit consumers’ ability to file lawsuits or know if they’ve been discriminated against.
But Lindstedt and supporters of his approach argue that while clarity and oversight are needed, the scale of the other bill’s regulations would significantly stifle the burgeoning AI industry’s ability to operate — or for its systems to be used — in the state. Requiring companies to list the characteristics assessed by AI and send them to each individual consumer would be “nearly impossible for many AI systems,” according to a fact sheet prepared by lobbyists supporting Lindstedt’s bill.
Titone’s bill also would place liability for violations of state law on both tech companies and the services or government agencies that use AI.
Lindstedt, who said he’s working with the constellation of groups that urged Polis to reform the law, hopes to reach a deal with Titone and Rodriguez.
But doing so may be tough. One side’s sore spot — the liability provision, for instance — is the other’s non-negotiable.
The Colorado Chamber of Commerce declined to comment Tuesday ahead of internal deliberations over the legislation. The Colorado Technology Association’s president and CEO, Brittany Morris Saunders, struck a similarly even tone in an email, writing that her group felt “progress has been made” and that it looked “forward to continued conversations” with legislators this week.
For his part, Polis was noncommittal about which approach he preferred Tuesday.
But his office has engaged in discussions with the supporters of Lindstedt’s bill, according to officials involved in those negotiations. The governor supported U.S. Senate Republicans’ failed attempt to add a provision to the tax bill that would have blocked states from regulating AI for the next decade. He’s also repeatedly called on state legislators to amend the bill he signed into law last year.
“I will work with anyone to find the right path forward on AI for Colorado — including the development of a new policy framework that addresses bias while also spurring innovation, a delay of implementation, or some combination,” Polis wrote in a statement Tuesday. “There is clear motivation in the legislature to take action now to protect consumers and promote innovation, all without creating new costs for the state or unworkable burdens for Colorado businesses and local governments.”
Building a responsible AI future in Saudi Arabia

As the world transitions from a digital age to the era of artificial intelligence (AI), key economies in the Middle East are accelerating their AI adoption plans as part of their broader economic diversification efforts. Among the GCC countries, Saudi Arabia has made bold moves to position itself as a global AI leader, writes Oliver Sykes, partner at PwC.
At the World Economic Forum (WEF) this year, Saudi Arabia reinforced its commitment to shaping global AI discourse, highlighting efforts at strengthening the digital economy and fostering innovation.
According to a recently launched report highlighting Saudi Arabia’s advancements in deep technology, 50 percent of the kingdom’s deep tech startups are already focused on developing AI and the Internet of Things (IoT), while the nation aims for AI to contribute 12 percent of its gross domestic product by 2030. In the 2024 Global AI Index, the kingdom ranked 14th globally and first in the Arab world.
Saudi Arabia’s AI strategy includes major investments, such as a $40 billion fund to boost AI as it continues to position the kingdom as a global AI hub, with opportunities for chip makers and large-scale data centers. The country is also forging global partnerships to enhance Arabic AI models.
A catalyst for economic growth
AI is expected to be a catalyst for economic growth across multiple sectors in Saudi Arabia. Some of the most promising applications include integrating the technology into healthcare for early disease diagnosis, predictive care and pandemic prevention. AI is also being used for ride-sharing and launching autonomous vehicles, as well as for personalized financial planning, fraud detection and anti-money laundering in finance.
In energy, AI optimizes usage through smart grids, real-time monitoring and renewable integration, and it is also being harnessed to drive sustainability efforts, including carbon footprint tracking, climate change mitigation, and optimizing resource allocation in agriculture and water management.
While AI’s potential to drive value across industries is undeniable, it is equally crucial for organizations to establish robust governance frameworks around data privacy, data management and AI itself. Without appropriate governance, the very technologies that promise to enhance efficiency and decision-making can lead to critical pitfalls that undermine trust, compliance and ethical standards.
Since AI systems rely on vast amounts of personal and sensitive data, they pose significant data privacy, governance, and ethical risks if not properly managed. Unauthorized access, data breaches and lack of consent can lead to regulatory non-compliance, while poor data governance practices – including inaccurate data, unclear ownership, and bias – can compromise AI outcomes. Organizations must, therefore, prioritize data quality, integrity, and compliance to prevent biased or flawed AI outcomes.
As regulatory landscapes evolve, strong governance frameworks help businesses stay compliant while maintaining trust and transparency. Additionally, AI governance requires ethical guidelines and accountability measures to mitigate risks related to bias, decision-making, and societal impact.
This is especially critical as 90 percent of business leaders in the GCC expect AI to enhance business processes and workflows, while 81 percent anticipate its use in new product and service development over the next three years, as indicated in PwC’s latest CEO Survey. Therefore, it is all the more crucial for businesses to be prepared for these disruptive technologies with strong governance frameworks in place.
The need to ensure ethical and transparent AI
AI governance involves establishing ethical guidelines, ensuring transparency, and managing risks associated with AI deployment. Eight key solutions for AI governance:
- Ethical guidelines: Developing ethical guidelines for AI development and deployment is essential to ensure fairness, transparency and accountability. An organization’s responsible AI procedures should be built on these criteria.
- Risk management: By putting strong risk assessment procedures in place, organizations can detect and reduce potential ethical, reputational and technological risks related to AI. This proactive strategy is essential for protecting the company’s integrity and interests.
- Regulatory compliance: Keeping up with industry regulations is crucial to ensure AI systems comply with applicable laws and standards. Regularly reviewing compliance requirements helps organizations avoid legal issues and maintain their reputation.
- Cross-functional collaboration: Including stakeholders from departments such as legal, IT and HR in governance conversations ensures all viewpoints are considered. This cooperative approach promotes a thorough understanding of the implications of AI technologies.
- Model validation and monitoring: Organizations can implement validation and monitoring processes for AI models by establishing criteria for assessing model performance, including accuracy, fairness and compliance with ethical standards.
- Continuous monitoring: Setting up methods to assess AI performance over time is essential to ensure these systems function as planned and remain consistent with organizational values. Continuous assessment helps organizations spot discrepancies early.
- Feedback loops: Establishing avenues for user and stakeholder feedback is critical to the continuous improvement of AI systems. By actively soliciting feedback, organizations can adapt their AI solutions to better meet consumer expectations and needs.
- Certification: Organizations can consider internationally recognized standards such as ISO 42001 for AI management systems to ensure AI is deployed in a responsible and ethical manner.
Looking ahead: A call for AI governance
Despite the widespread use of AI technologies, the challenge now is not just about adopting AI but about leading AI responsibly. As AI adoption accelerates, businesses must be future-ready, embedding trust and ethical considerations into their AI strategies. Established in 2019, the Saudi Data & AI Authority (SDAIA) has played a pivotal role in shaping AI regulations and ethical frameworks in Saudi Arabia.
Recently, the country ranked third globally in the Organization for Economic Co-operation and Development’s (OECD) AI Policy Observatory, behind the US and the UK, reflecting its strong commitment to AI regulation and ethical governance.
AI’s success hinges on trust. Without governance, transparency and accountability, AI’s potential could be overshadowed by risks that undermine its credibility. With its ambitious investments, regulatory foresight and a commitment to ethical AI, Saudi Arabia has the potential to set global benchmarks for AI adoption.
Tories pledge to get ‘all our oil and gas out of the North Sea’

Conservative leader Kemi Badenoch has said her party will remove all net zero requirements on oil and gas companies drilling in the North Sea if elected.
Badenoch is to formally announce the plan to focus solely on “maximising extraction” and to get “all our oil and gas out of the North Sea” in a speech in Aberdeen on Tuesday.
Reform UK has said it wants more fossil fuels extracted from the North Sea.
The Labour government has committed to banning new exploration licences. A spokesperson said a “fair and orderly transition” away from oil and gas would “drive growth”.
Exploring new fields would “not take a penny off bills” or improve energy security and would “only accelerate the worsening climate crisis”, the government spokesperson warned.
Badenoch signalled a significant change in Conservative climate policy when she announced earlier this year that reaching net zero would be “impossible” by 2050.
Successive UK governments have pledged to reach the target by 2050 and it was written into law by Theresa May in 2019. It means the UK must cut carbon emissions until it removes as much as it produces, in line with the 2015 Paris Climate Agreement.
Now Badenoch has said that the requirements to work towards net zero are a burden on North Sea oil and gas producers, that they are damaging the economy, and that she would remove them.
The Tory leader said a Conservative government would scrap the need to reduce emissions or to work on technologies such as carbon storage.
Badenoch said it was “absurd” the UK was leaving “vital resources untapped” while “neighbours like Norway extracted them from the same sea bed”.
In 2023, then Prime Minister Rishi Sunak granted 100 new licences to drill in the North Sea which he said at the time was “entirely consistent” with net zero commitments.
Reform UK has said it will abolish the push for net zero if elected.
The current government said it had made the “biggest ever investment in offshore wind and three first of a kind carbon capture and storage clusters”.
Carbon capture and storage facilities aim to prevent carbon dioxide (CO2) produced from industrial processes and power stations from being released into the atmosphere.
Most of the CO2 produced is captured, transported and then stored deep underground.
It is seen by the likes of the International Energy Agency and the Climate Change Committee as a key element in meeting targets to cut the greenhouse gases driving dangerous climate change.