
AI Insights

Big Tech group calls for California high-risk AI bill to narrow its scope

The Business Software Alliance, a global trade association representing large technology companies such as Microsoft, Oracle and Salesforce, is calling for California Assembly Bill 1018, which would regulate high-risk uses of artificial intelligence, to narrow its scope and adopt more precise definitions.

Also known as the Automated Decisions Safety Act, the bill sets new rules for how artificial intelligence and other automated-decision systems are used in situations that significantly affect people’s lives, such as in the domains of housing, jobs, health care, credit, education and law.

If passed, the bill would require, starting in 2027, that AI developers (companies that create or significantly modify such AI systems) and deployers (companies that use them to make decisions) test these tools before they are used, give users clear notice and explanation, and give people the right to correct, opt out of, or appeal decisions made with these tools.

At a Senate Judiciary Committee hearing in July, Assemblymember Rebecca Bauer-Kahan, the bill’s author, told lawmakers that the legislation sets common-sense guardrails for AI systems in order to reduce bias in critical areas.

“The reason this is so critically important is the way these AI tools are built is that we put in data, historical data, and we use that to decide how the world works and then it outputs a decision. And as everybody sitting here knows, historical data is full of bias,” Bauer-Kahan, a Democrat, said during the meeting.

But Craig Albright, a senior vice president at BSA, said the bill uses vague language that could sweep in low-risk AI systems, creates conflicting enforcement by various government entities and misunderstands how AI systems are developed and used in practice.

“The bill is really misguided in a couple of ways and would have serious consequences,” Albright said.

Albright argued that the bill needs to clearly define terms like “tools that are used to assist human decision making” and “quality and accessibility of important opportunity benefits,” both of which, he said, could be widely applied. He also urged lawmakers to specify which AI tools would be defined as “systems that are intended to make decisions about that eligibility.”

“In sort of plain reading of the text as it stands, you could have a scenario where a doctor’s office is using scheduling software for appointments with its patients, and that could be considered affecting the accessibility of the health care business,” Albright said.

Albright also said the bill misunderstands what he calls the “AI value chain”: the stages involved in creating and deploying AI tools by various companies, each responsible for a different phase of the process. He argues that the bill expects each company along the chain to test the system for potential high-risk uses, which, he said, is “not feasible.”

BSA began lobbying legislators to address these concerns in February, after the bill was introduced. The group published an opposition letter in July, stating that “[e]ffective AI regulation should assign responsibility based on real-world roles and risks. Both developers and deployers must play a part — but AB 1018 gets it wrong.”

The bill heads next to the Senate Appropriations Committee, which will take it up when state legislators return to the Capitol from summer recess on August 18.


Written by Sophia Fox-Sowell

Sophia Fox-Sowell reports on artificial intelligence, cybersecurity and government regulation for StateScoop. She was previously a multimedia producer for CNET, where her coverage focused on private sector innovation in food production, climate change and space through podcasts and video content. She earned her bachelor’s in anthropology at Wagner College and master’s in media innovation from Northeastern University.



Here’s what parents need to know about artificial intelligence

ChatGPT, AI chatbots, and the growing world of artificial intelligence: it’s another conversation parents may not have planned on having with their kids.

A new Harvard study found that half of all young adults have already used AI, and younger kids are quickly joining in.

Karl Ernsberger, a former high school teacher turned AI entrepreneur, says that’s not necessarily a bad thing.

“It is here to stay. It’s like people trying to resist the Industrial Revolution,” Ernsberger said.

Ernsberger believes tools like chatbots can be powerful for learning, but only if kids and parents know the limits.

One example is “Rudi the Red Panda,” a virtual character available for free in kids mode on X’s Grok AI. When asked, Rudi can even answer questions about Arizona history.


“The five C’s of Arizona are Copper, Cotton, Cattle, Citrus, Climate,” Rudi said.

But Ernsberger warns that children may struggle to understand that Rudi isn’t real, and that “friendship” with a chatbot is different from human connection.

“It’s hard for the student to actually develop a real friendship,” he said. “They get confused by that because friendship is something they continue to learn about as they get older.”

Asked whether it was really my best friend, Rudi replied: “I’m as real as a red panda can be in your imagination. I’m here to be your best friend.”

That, Ernsberger says, is where parents need to step in.

For families trying to keep kids safe while exploring AI, Ernsberger’s first recommendation is simple.

“Use it yourself. There are so many use cases, so many different things that can be done with AI. Just finding a familiarity with it can help you find the weaknesses for your case, and its weaknesses for your kids.”

Then he says if your child is using AI, be there with them to watch over and keep the human connection.

“The key thing with AI is it’s challenging our ability to connect with each other, that’s a different kind of challenge to society than any other tool we’ve built in the past,” Ernsberger said.

Regulators are paying attention, too.

Arizona Attorney General Kris Mayes, along with 43 other state attorneys general, recently sent a letter to 12 AI companies, including the maker of Rudi, demanding stronger safeguards to protect young users.



This MOSI exhibit will give you a hands-on look at artificial intelligence – Tampa Bay Times



Spain Leads Europe in Adopting AI for Vacation Planning, Study Shows

Spain records higher adoption of artificial intelligence (AI) in vacation planning than the European average, according to the 2025 Europ Assistance-Ipsos barometer.

The study finds that 20% of Spanish travelers have used AI-based tools to organize or book their holidays, compared with 16% across Europe.

The research highlights Spain as one of the leading countries in integrating digital tools into travel planning. AI applications are most commonly used for accommodation searches, destination information, and itinerary planning, indicating a shift in how tourists prepare for trips.

Growing Use of AI in Travel

According to the survey, 48% of Spanish travelers using AI rely on it for accommodation recommendations, while 47% use it for information about destinations. Another 37% turn to AI tools for help creating itineraries. The technology is also used for finding activities (33%) and booking platform recommendations (26%).

Looking ahead, the interest in AI continues to grow. The report shows that 26% of Spanish respondents plan to use AI in future travel planning, compared with 21% of Europeans overall. However, 39% of Spanish participants remain undecided about whether they will adopt such tools.

Comparison with European Trends

The survey indicates that Spanish travelers are more proactive than the European average in experimenting with AI for holidays. While adoption is not yet universal, Spain’s figures consistently exceed continental averages, underscoring the country’s readiness to embrace new technologies in tourism.

In Europe as a whole, AI is beginning to make inroads into vacation planning but at a slower pace. The 2025 Europ Assistance-Ipsos barometer suggests that cultural attitudes and awareness of technological solutions may play a role in shaping adoption levels across different countries.

Changing Travel Behaviors

The findings suggest a gradual transformation in how trips are organized. Traditional methods such as guidebooks and personal recommendations are being complemented—and in some cases replaced—by AI-driven suggestions. From streamlining searches for accommodation to tailoring activity options, digital tools are expanding their influence on the traveler experience.

While Spain shows higher-than-average adoption rates, the survey also reflects caution. A significant portion of travelers remain unsure about whether they will use AI in the future, highlighting that trust, familiarity, and data privacy considerations continue to influence behavior.

The Europ Assistance-Ipsos barometer confirms that Spain is emerging as a frontrunner in adopting AI for travel planning, reflecting both a strong appetite for digital solutions and an evolving approach to how holidays are designed and booked.

Photo Credit: ProStockStudio / Shutterstock.com


