
AI Research

Colorado lawmakers battle over artificial intelligence law


As Colorado lawmakers return to the Capitol to fill a substantial budget hole this week, a second — but similarly complicated — fight is set to unfold over AI.

Lawmakers are preparing four bills to amend Colorado’s first-in-the-nation artificial intelligence regulations, which seek to prevent discrimination when companies use AI to make various decisions.

They are part of a law that has yet to take effect and has faced significant industry pushback. When Gov. Jared Polis called the special session that starts Thursday, a $783 million state budget gap opened by the federal tax bill was the main motivation. But he also charged lawmakers with considering changes to the AI law.

And while Democratic lawmakers who constitute the legislative majority are largely aligned on how to fill the budget hole, there’s significantly less agreement on how to answer Polis’ call to amend the AI regulations.

Two of the new bills, each backed primarily by Democrats, are the likeliest to advance. But their aims largely run counter to one another: One bill is accused of doing too much and implementing unworkable rules. The other is criticized for doing too little to protect consumers and to regulate a burgeoning — and affluent — industry.

Given that they start in opposite chambers and are backed by different power centers in the Capitol, they’re likely to collide in the coming multiday session.

“It’s a mess,” said David Seligman, an attorney general candidate who runs the nonprofit law firm Towards Justice.

Seligman is supporting a bill that would slim down the already-adopted regulations, which were passed in 2024 and will otherwise come into effect in February. Backed by the initial law’s sponsors, the new proposal would still require AI developers and the companies that use their technology to inform consumers of AI’s presence in a variety of services, ranging from chatbots to job and university applications and health care records. It would give people affected by those AI services the ability to challenge and correct information provided to an employer or a potential landlord.

A company or government service that uses the technology to screen applicants or make other decisions would also be required to disclose characteristics that influenced an AI-driven decision to a person affected by it.

“That’s really what the bottom line is: It’s about protecting citizens from the unforeseen and making sure developers are thinking about what their product is going to do — and having some plans about how they can figure out how to remedy that for that person,” said Rep. Brianna Titone, an Arvada Democrat who’s set to sponsor the bill with the backing of progressive lawmakers and allied groups.

The competing bill, which is backed by more moderate Democrats and at least one Republican, similarly would require that people be notified when they’re interacting with or being screened by AI. It would also make clear that AI developers are subject to the state’s antidiscrimination and consumer-protection laws.

Under its provisions, only the state’s attorney general — not individual users — could sue either tech companies or the agencies and companies that use their technology for violating the consumer-protection law.

Rep. William Lindstedt, a Broomfield Democrat, said the goal was to provide transparency while ensuring that companies that use AI for various services aren’t immediately held liable for problems that come from the underlying tech.

“That’s what we’re trying to do: Find a middle ground that protects consumers and also protects our innovative economy here in Colorado,” he said. “I think if you took the bill that I have on the page right now and compared it to every other state, it would still be one of the most stringent, strongest consumer protections in the country.”

Scrambles

The brewing policy scuffle is the latest turn in an unhappy 15-month honeymoon for the state’s marquee AI regulations.

Even as he signed them into law, Polis called for the guardrails to be reformed amid industry critiques that they were unworkable. A task force then met for months to find a path forward, culminating in legislation introduced earlier this year during the regular session.

But after opponents of the regulations tried to further clip them, Sen. Robert Rodriguez, the Senate’s majority leader and the regulations’ primary backer, killed the reform bill entirely.

That prompted a last-minute scramble to delay the regulations’ February start date so that negotiations could restart. The effort, advanced by Lindstedt, then collapsed under the weight of a late-night filibuster by Titone.

Further scrambling ensued. After Rodriguez euthanized his bill, Polis and five other Democrats — including Denver Mayor Mike Johnston and U.S. Sen. Michael Bennet — sent the legislature a letter to “implore” them to delay the regulations.

In July, a coalition that included school boards, airlines and tech companies sent Polis a letter asking him to include reform of the regulations in his coming special session announcement.

Battle lines

Now, the debate returns as lawmakers get ready for what’s likely to be an already-bruising fight over state revenues.

And the clock is ticking: The regulations are alive and well in Colorado’s statute books, and they take effect in six months.

Titone and supporters of her bill have said Lindstedt’s approach is pro-tech fluff. The state’s consumer-protection laws already cover AI, they argue, and that bill would limit consumers’ ability to file lawsuits or know if they’ve been discriminated against.

But Lindstedt and supporters of his approach argue that while clarity and oversight are needed, the scale of the other bill’s regulations would significantly stifle the burgeoning AI industry’s ability to operate — or for its systems to be used — in the state. Requiring companies to list the characteristics assessed by AI and send them to each individual consumer would be “nearly impossible for many AI systems,” according to a fact sheet prepared by lobbyists supporting Lindstedt’s bill.

Titone’s bill also would place liability for violations of state law on both tech companies and the services or government agencies that use AI.

Lindstedt, who said he’s working with the constellation of groups that urged Polis to reform the law, hopes to reach a deal with Titone and Rodriguez.

But doing so may be tough. One side’s sore spot — the liability provision, for instance — is the other’s non-negotiable.




Lessons from 60 Years in Journalism — A Postscript on Artificial Intelligence – TAPinto




Building a responsible AI future in Saudi Arabia


As the world transitions from a digital age to the era of artificial intelligence (AI), key economies in the Middle East are accelerating their AI adoption plans as part of their broader economic diversification efforts. Among the GCC countries, Saudi Arabia has made bold moves to position itself as a global AI leader, writes Oliver Sykes, partner at PwC.

At the World Economic Forum (WEF) this year, Saudi Arabia reinforced its commitment to shaping global AI discourse, highlighting efforts at strengthening the digital economy and fostering innovation.

According to a recently launched report highlighting Saudi Arabia’s advancements in deep technology, 50 percent of the kingdom’s deep tech startups are already focused on developing AI and the Internet of Things (IoT), while the nation aims for AI to contribute 12 percent of its gross domestic product by 2030. Last year, the kingdom took 14th place globally and the top spot in the Arab world in the Global AI Index for 2024.

Saudi Arabia’s AI strategy includes major investments, such as a $40 billion fund to boost AI as it continues to position the kingdom as a global AI hub, with opportunities for chip makers and large-scale data centers. The country is also forging global partnerships to enhance Arabic AI models.

A catalyst for economic growth

AI is expected to be a catalyst for economic growth across multiple sectors in Saudi Arabia. Some of the most promising applications include integrating the technology into healthcare for early disease diagnosis, predictive care and pandemic prevention. AI is also being used for ride-sharing and launching autonomous vehicles, as well as for personalized financial planning, fraud detection and anti-money laundering in finance.

In energy, AI optimizes usage through smart grids, real-time monitoring and renewable integration, and it is also being harnessed to drive sustainability efforts, including carbon footprint tracking, climate change mitigation, and optimizing resource allocation in agriculture and water management.

While AI’s potential to drive value across industries is undeniable, it is equally crucial for organizations to establish robust governance frameworks for data privacy, data management, and AI itself. Without appropriate governance, the very technologies that promise to enhance efficiency and decision-making can lead to critical pitfalls that undermine trust, compliance and ethical standards.

Since AI systems rely on vast amounts of personal and sensitive data, they pose significant data privacy, governance, and ethical risks if not properly managed. Unauthorized access, data breaches and lack of consent can lead to regulatory non-compliance, while poor data governance practices – including inaccurate data, unclear ownership, and bias – can compromise AI outcomes. Organizations must, therefore, prioritize data quality, integrity, and compliance to prevent biased or flawed AI outcomes.

As regulatory landscapes evolve, strong governance frameworks help businesses stay compliant while maintaining trust and transparency. Additionally, AI governance requires ethical guidelines and accountability measures to mitigate risks related to bias, decision-making, and societal impact.

This is especially critical as 90 percent of business leaders in the GCC expect AI to enhance business processes and workflows, while 81 percent anticipate its use in new product and service development over the next three years, as indicated in PwC’s latest CEO Survey. Therefore, it is all the more crucial for businesses to be prepared for these disruptive technologies with strong governance frameworks in place.


The need to ensure ethical and transparent AI

AI governance involves establishing ethical guidelines, ensuring transparency, and managing risks associated with AI deployment. Here are eight key solutions for AI governance:

  1. Ethical guidelines
    Developing ethical guidelines for AI development and deployment is essential to ensure fairness, transparency and accountability. An organization’s responsible AI procedures should be built on these guidelines.
  2. Risk management
    By putting strong risk assessment procedures in place, organizations can detect and reduce potential ethical, reputational and technological risks related to AI. This proactive strategy is essential for protecting the company’s integrity and interests.
  3. Regulatory compliance
    Keeping up with industry regulations is crucial to ensure AI systems comply with applicable laws and standards. Regularly reviewing compliance requirements helps organizations avoid legal issues and maintain their reputation.
  4. Cross-functional collaboration
    Including stakeholders from across departments, such as legal, IT and HR, in governance conversations ensures that all viewpoints are considered. This collaborative approach fosters a thorough understanding of the implications of AI technologies.
  5. Model validation and monitoring
    Organizations can implement validation and monitoring processes for AI models by establishing criteria for assessing model performance, including accuracy, fairness, and compliance with ethical standards.
  6. Continuous monitoring
    Setting up monitoring methods to assess AI performance is essential to ensure these systems function as intended and remain consistent with organizational values. Continuous assessment helps organizations spot discrepancies early.
  7. Feedback loops
    Establishing avenues for user and stakeholder feedback is critical to the continuous improvement of AI systems. Organizations can modify their AI solutions to better meet consumer expectations and needs by actively soliciting feedback.
  8. Certification
    Organizations can consider internationally acclaimed standards like the ISO 42001 for AI systems to ensure AI systems are deployed in a responsible and ethical manner.
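The validation criteria in item 5 can be made concrete as an automated approval gate. The sketch below is illustrative rather than drawn from the article: the metric choices (simple accuracy and a demographic-parity gap) and the thresholds are assumptions, and a real governance program would define its own criteria.

```python
# Minimal sketch of a model-validation gate: a model is approved only if
# it meets predefined accuracy and fairness thresholds. Metric choices
# and thresholds here are illustrative assumptions.

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

def demographic_parity_gap(y_pred, groups):
    """Largest difference in positive-prediction rates between groups."""
    rates = {}
    for g in set(groups):
        preds = [p for p, grp in zip(y_pred, groups) if grp == g]
        rates[g] = sum(preds) / len(preds)
    return max(rates.values()) - min(rates.values())

def validate_model(y_true, y_pred, groups,
                   min_accuracy=0.8, max_parity_gap=0.1):
    """Return (approved, report) against the governance criteria."""
    acc = accuracy(y_true, y_pred)
    gap = demographic_parity_gap(y_pred, groups)
    approved = acc >= min_accuracy and gap <= max_parity_gap
    return approved, {"accuracy": acc, "parity_gap": gap}

# Example: a model that is accurate overall but treats two groups
# very differently, so the fairness criterion blocks approval.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 1, 0, 0, 1, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
approved, report = validate_model(y_true, y_pred, groups)
```

The point of the gate is that a high accuracy score alone does not clear the model: in the example above, accuracy is 0.875 but the gap in positive-prediction rates between groups A and B is 0.25, so the model is rejected under the assumed 0.1 fairness threshold.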

Looking ahead: A call for AI governance

Despite the widespread use of AI technologies, the challenge now is not just about adopting AI but about leading AI responsibly. As AI adoption accelerates, businesses must be future-ready, embedding trust and ethical considerations into their AI strategies. Established in 2019, the Saudi Data & AI Authority (SDAIA) has played a pivotal role in shaping AI regulations and ethical frameworks in Saudi Arabia.

Recently, the country ranked third globally in the Organization for Economic Co-operation and Development’s (OECD) AI Policy Observatory, behind the US and the UK, reflecting its strong commitment to AI regulation and ethical governance.

AI’s success hinges on trust. Without governance, transparency and accountability, AI’s potential could be overshadowed by risks that undermine its credibility. With its ambitious investments, regulatory foresight and a commitment to ethical AI, Saudi Arabia has the potential to set global benchmarks for AI adoption.




Tories pledge to get ‘all our oil and gas out of the North Sea’


Conservative leader Kemi Badenoch has said her party will remove all net zero requirements on oil and gas companies drilling in the North Sea if elected.

Badenoch is to formally announce the plan to focus solely on “maximising extraction” and to get “all our oil and gas out of the North Sea” in a speech in Aberdeen on Tuesday.

Reform UK has said it wants more fossil fuels extracted from the North Sea.

The Labour government has committed to banning new exploration licences. A spokesperson said a “fair and orderly transition” away from oil and gas would “drive growth”.

Exploring new fields would “not take a penny off bills” or improve energy security and would “only accelerate the worsening climate crisis”, the government spokesperson warned.

Badenoch signalled a significant change in Conservative climate policy when she announced earlier this year that reaching net zero would be “impossible” by 2050.

Successive UK governments have pledged to reach the target by 2050 and it was written into law by Theresa May in 2019. It means the UK must cut carbon emissions until it removes as much as it produces, in line with the 2015 Paris Climate Agreement.

Now Badenoch has said the requirements to work toward net zero are a burden on North Sea oil and gas producers, are damaging the economy, and would be removed under her plan.

The Tory leader said a Conservative government would scrap the need to reduce emissions or to work on technologies such as carbon storage.

Badenoch said it was “absurd” the UK was leaving “vital resources untapped” while “neighbours like Norway extracted them from the same sea bed”.

In 2023, then Prime Minister Rishi Sunak granted 100 new licences to drill in the North Sea which he said at the time was “entirely consistent” with net zero commitments.

Reform UK has said it will abolish the push for net zero if elected.

The current government said it had made the “biggest ever investment in offshore wind and three first of a kind carbon capture and storage clusters”.

Carbon capture and storage facilities aim to prevent carbon dioxide (CO2) produced from industrial processes and power stations from being released into the atmosphere.

Most of the CO2 produced is captured, transported and then stored deep underground.

It is seen by the likes of the International Energy Agency and the Climate Change Committee as a key element in meeting targets to cut the greenhouse gases driving dangerous climate change.


