AI Insights

A Neural-Network-Based Approach to Smarter DPD Engines – Electronic Design



California bill to regulate high-risk AI fails to advance in state legislature



A California artificial intelligence bill addressing the use of automated decision systems in hiring and other consequential matters failed to advance in the state assembly during the final hours of the 2025 legislative session Friday.

The bill (AB 1018) would have required companies and government agencies to notify individuals when automated decision systems were used for “consequential decisions,” such as employment, housing, health care, and financial services.

Democratic assemblymember Rebecca Bauer-Kahan, the bill’s author, paused voting on the bill until next year to allow for “additional stakeholder engagement and productive conversations with the Governor’s office,” according to a Friday press release from her office.

“This pause reflects our commitment to getting this critical legislation right, not a retreat from our responsibility to protect Californians,” Bauer-Kahan said in a statement. “We remain committed to advancing thoughtful protections against algorithmic discrimination.”

The Business Software Alliance, a global trade association that represents large technology companies and led an opposition campaign against the bill, argued that the legislation would have unfairly forced companies using AI systems “into an untested audit regime” that risked discouraging responsible adoption of AI tools throughout the state.

“Setting clear, workable, and consistent expectations for high-risk uses of AI ultimately furthers the adoption of technology and more widely spreads its benefits,” Craig Albright, senior vice president at BSA, told StateScoop in a written statement. “BSA believes there is a path forward that sets obligations for companies based on their different roles within the AI value chain and better focuses legislation to ensure that everyday and low-risk uses of AI are not subjected to a vague and confusing regulatory regime.”

Since it was introduced in February, the bill was amended to narrow when AI audits are required, clarify what kinds of systems and “high-stakes” decisions are covered, exempt low-risk tools like spam filters, and add protections for trade secrets while limiting what audit details must be made public. It also refined how lawsuits and appeals work and aligned the bill more clearly with existing civil rights laws.

AB 1018’s failure comes on the heels of the Colorado state legislature voting to delay implementing the Colorado AI Act, the state’s high-risk artificial intelligence legislation, until the end of June next year, five months after the law was supposed to go into effect. Similar to California’s AI bill, Colorado’s Artificial Intelligence Act would also regulate high-risk AI systems in areas like hiring, lending, housing, insurance and government services.


Written by Sophia Fox-Sowell

Sophia Fox-Sowell reports on artificial intelligence, cybersecurity and government regulation for StateScoop. She was previously a multimedia producer for CNET, where her coverage focused on private sector innovation in food production, climate change and space through podcasts and video content. She earned her bachelor’s in anthropology at Wagner College and master’s in media innovation from Northeastern University.





The Despair of the Teacher in the Age of Artificial Intelligence – Commentary Magazine



There may still be a few sheltered analog folk out there who pronounce the abbreviation for Artificial Intelligence, AI, like the name of the steak sauce, mistaking the “I” for a “1,” but the rest of us are very much aware that it is already playing a role in every…







xAI lays off 500 AI tutors working on Grok



Elon Musk’s artificial intelligence startup xAI has laid off 500 workers from its data annotation team, which helps train its Grok chatbot.

The layoffs were earlier reported by Business Insider.

The AI company notified employees over email that it was planning to downsize its team of generalist AI tutors, according to messages viewed by the publication. The company said the “strategic pivot” meant prioritizing specialist AI tutors, while scaling back its focus on general AI tutor roles.

In response to the story, xAI directed reporters to a post on X, in which the company said it plans to expand its specialist AI tutor team by “10X” and intends to open roles on its careers page.

The human data annotator team at xAI plays a key role in teaching Grok to understand the world by labeling, contextualizing, and categorizing raw data used to train the chatbot. The email sent by xAI said that laid-off workers would be paid through either the end of their contract or Nov. 30, but their access to company systems would be terminated the day of the layoff notice.

Prior to the layoffs, xAI’s data annotation team was one of the company’s largest, with 1,500 full-time and contract staff members, including AI tutors. The reorganization of the data annotation team comes on the heels of a leadership shake-up that reportedly saw nine employees exit the firm last week.

In a sign of its changing approach to training Grok, xAI on Thursday asked some of its AI tutors to prepare for tests covering traditional domains such as STEM, coding, finance, and medicine, as well as quirkier specialties such as Grok’s “personality and model behavior” and “doomscrollers,” Business Insider reported.

Musk launched xAI in 2023 to compete with OpenAI and Google DeepMind in the race to build leading AI systems. He introduced Grok as a safe and truthful alternative to what he accused competitors of building: “woke” chatbots prone to censorship.


