
Utah lawmaker to lead national task force on AI policy

SALT LAKE CITY — Utah Rep. Doug Fiefia has been tapped to lead a national task force to guide state policy on artificial intelligence.

One of the newest members of the Utah Legislature, Fiefia made waves during his first legislative session by sponsoring a landmark online privacy bill, and he was a leading proponent of Utah’s effort to strip from the recent federal spending bill a provision that would have barred states from regulating artificial intelligence.

Now, the Herriman Republican will head a nationwide task force on AI policy launched by Future Caucus, a nonprofit that promotes bipartisan cooperation and effective government among Gen Z and millennial lawmakers. Fiefia was named co-chair of the new task force alongside state Rep. Monique Priestley, a Democrat from Vermont.

In a statement Monday, Fiefia said he is “proud” to lead the task force alongside Priestley: “We’re bringing Gen Z and millennial lawmakers together to lead on AI, shape bipartisan policy and elevate young voices in tech governance.”

“As young lawmakers, Rep. Fiefia and Rep. Priestley are stepping up to ensure AI legislation is driven by balance, innovation and foresight — not political gridlock,” the Future Caucus said in a social media post. “Rep. Fiefia brings real-world tech experience to the table. A former Google employee, he just wrapped his first session in the Utah House — where he passed a groundbreaking bill to give users more control over their own data.”

The organization described the task force as a “national brain trust,” and said it would work on policy memos and hold public hearings to help give lawmakers around the country “tools to govern AI responsibly at the state level.”

Priestley founded and runs a nonprofit community workspace and is focused on consumer protections, according to her official bio. She helped pass a bill in Vermont that requires businesses to protect minors’ data and privacy online, which is similar to online regulations enacted by Utah lawmakers in recent years.


AI industry pours millions into politics as lawsuits and feuds mount

Hello, and welcome to TechScape.

A little over two years ago, OpenAI’s chief executive, Sam Altman, stood in front of lawmakers at a congressional hearing and asked them for stronger regulations on artificial intelligence. The technology was “risky” and “could cause significant harm to the world”, Altman said, calling for the creation of a new regulatory agency to address AI safety.

Altman and the AI industry are promoting a very different message today. The AI they once framed as an existential threat to humanity is now key to maintaining American prosperity and hegemony. Regulations that were once a necessity are now criticized as a hindrance that will weaken the US and embolden its adversaries.

Whether the AI industry ever truly wanted government oversight is debatable, but what has become clear over the past year is that companies are willing to spend exorbitant sums to make sure any regulation that does exist happens on their terms. Industry lobbying and political action committees have surged, and the Wall Street Journal reported last week that Silicon Valley plans to pour $100m into a network of organizations opposing AI regulation ahead of next year’s midterm elections.

One of the biggest efforts to sway candidates in favor of AI will be a Super Pac called Leading Our Future, which is backed by OpenAI president Greg Brockman and the venture capital firm Andreessen Horowitz. The group is planning bipartisan spending on candidates and digital campaigns in states key to AI policy, including New York, Illinois and California, according to the Wall Street Journal.

Meta, the parent company of Facebook and Instagram, is also forming its own Super Pac aimed specifically at opposing AI regulation in its home state of California. The Meta California Pac will spend tens of millions of dollars on elections in the state, which holds its governor’s race in 2026.

The new Super Pacs are an escalation of the AI industry’s already hefty spending to influence government policy on the technology. Big AI firms have ramped up their lobbying in an effort to push back against calls for regulation: OpenAI spent roughly $620,000 on lobbying in the second quarter of this year alone, while rival Anthropic spent $910,000 in the same quarter, up from $150,000 during the same period last year, Politico reported.

The spending blitz comes as the benefits promised by AI companies have yet to fully materialize and the harms associated with the technology become increasingly clear. A recent study from MIT found that 95% of the companies it examined saw no return on investment from their generative AI programs, while another study this month from Stanford researchers found AI was severely hurting young workers’ job prospects. Meanwhile, concern about AI’s impact on mental health was back in the spotlight this past week after the parents of a teenager who died by suicide filed a lawsuit against OpenAI blaming the company’s chatbot for their son’s death.

Despite the public safety, labor and environmental concerns surrounding AI, the industry may not have to work too hard to find a sympathetic ear in Washington. The Trump administration, which already has extensive ties to the tech industry, has suggested it is determined to make the US the world’s dominant AI power at any cost.

“We can’t stop it. We can’t stop it with politics,” Trump said last month in a speech about winning the AI race. “We can’t stop it with foolish rules.”

OpenAI faces its first wrongful death lawsuit

The parents of a teenager who died by suicide filed a lawsuit against OpenAI blaming the company’s chatbot for their son’s death. Photograph: Dado Ruvić/Reuters

The parents of 16-year-old Adam Raine are suing OpenAI in a wrongful death case after their son died by suicide. The lawsuit alleges that Raine talked extensively with ChatGPT about his suicidal ideation and even uploaded a picture of a noose, but the chatbot failed to deter the teenager or stop communicating with him.

The family alleges this is not an edge case but an inherent flaw in the way the system was designed.

In a conversation with the Guardian, Jay Edelson, one of the attorneys representing the Raine family, said that OpenAI’s response was an acknowledgment that the company knew GPT-4o, the version of ChatGPT Raine was using, was broken. The family’s case hinges on the claim, based on previous media reporting, that OpenAI rushed the release of GPT-4o and sacrificed safety testing to meet its launch date. Without that safety testing, the lawsuit claims, the company failed to catch certain contradictions in the way the system was designed. So instead of terminating the conversation once the teenager started talking about harming himself, GPT-4o provided an empathetic ear, at one point discouraging him from talking to his family about his pain.

The lawsuit is the first wrongful death case against OpenAI, which announced last week that it would change the way its chatbot responds to users in mental distress. The company said in a statement to the New York Times that it was “deeply saddened” by Raine’s death and acknowledged that ChatGPT’s safeguards can become less reliable over the course of long conversations.

Concerns over suicide prevention and harmful relationships with chatbots have existed for years, but the widespread adoption of the technology has intensified calls from watchdog groups for better safety guardrails. In another case from this year, a cognitively impaired 76-year-old man from New Jersey died after attempting to travel to New York City to meet a Meta chatbot persona called “Big sis Billie” that had been flirtatiously communicating with him. The chatbot had repeatedly told the man that it was a real woman and encouraged the trip.


Read our coverage of the lawsuit here.

Elon Musk sues Apple and OpenAI claiming a conspiracy

Elon Musk attends a press conference at the White House on 30 May 2025. Photograph: Nathan Howard/Reuters

Elon Musk’s artificial intelligence startup xAI sued Apple and OpenAI this week, accusing them of collaborating to monopolize the AI chatbot market and unfairly exclude rivals such as xAI’s Grok. Musk’s company is seeking to recover billions of dollars in damages while throwing a wrench into the partnership that Apple and OpenAI announced last year to great fanfare.

Musk’s lawsuit accuses the two companies of “a conspiracy to monopolize the markets for smartphones and generative AI chatbots” and follows legal threats he made earlier this month over accusations that Apple’s App Store was favoring ChatGPT over other AI alternatives.

OpenAI rejected Musk’s claims and characterized the suit as evidence of the billionaire’s malicious campaign against the company. “This latest filing is consistent with Mr Musk’s ongoing pattern of harassment,” an OpenAI spokesperson said.

As the Guardian’s coverage of the case detailed, the legal drama is yet another chapter in the long, contentious relationship between Musk and Altman:

The lawsuit is the latest front in the ongoing feud between Musk and Altman. The two tech billionaires founded OpenAI together in 2015, but have since had an increasingly public falling out which has frequently turned litigious.

Musk left OpenAI in 2018 after proposing to take over the company, and has since filed multiple lawsuits against it over its plans to shift into a for-profit enterprise. Altman and OpenAI have rejected Musk’s criticisms and framed him as a petty, vindictive former partner.

Read the full story about Musk’s suit against OpenAI and Apple.




OpenAI and Meta say they’re fixing AI chatbots to better respond to teens in distress

SAN FRANCISCO — Artificial intelligence chatbot makers OpenAI and Meta say they are adjusting how their chatbots respond to teenagers and other users asking questions about suicide or showing signs of mental and emotional distress.

OpenAI, maker of ChatGPT, said Tuesday it is preparing to roll out new controls enabling parents to link their accounts to their teen’s account.

Parents can choose which features to disable and “receive notifications when the system detects their teen is in a moment of acute distress,” according to a company blog post that says the changes will go into effect this fall.

Regardless of a user’s age, the company says its chatbots will redirect the most distressing conversations to more capable AI models that can provide a better response.

EDITOR’S NOTE — This story includes discussion of suicide. If you or someone you know needs help, the national suicide and crisis lifeline in the U.S. is available by calling or texting 988.

The announcement comes a week after the parents of 16-year-old Adam Raine sued OpenAI and its CEO Sam Altman, alleging that ChatGPT coached the California boy in planning and taking his own life earlier this year.

Meta, the parent company of Instagram, Facebook and WhatsApp, also said it is now blocking its chatbots from talking with teens about self-harm, suicide, disordered eating and inappropriate romantic topics, instead directing them to expert resources. Meta already offers parental controls on teen accounts.

A study published last week in the medical journal Psychiatric Services found inconsistencies in how three popular artificial intelligence chatbots responded to queries about suicide.

The study by researchers at the RAND Corporation found a need for “further refinement” in ChatGPT, Google’s Gemini and Anthropic’s Claude. The researchers did not study Meta’s chatbots.

The study’s lead author, Ryan McBain, said Tuesday that “it’s encouraging to see OpenAI and Meta introducing features like parental controls and routing sensitive conversations to more capable models, but these are incremental steps.”

“Without independent safety benchmarks, clinical testing, and enforceable standards, we’re still relying on companies to self-regulate in a space where the risks for teenagers are uniquely high,” said McBain, a senior policy researcher at RAND.




Dolby Vision 2 bets on artificial intelligence

Dolby Vision 2 will use AI to fine-tune TV picture quality in real time, taking both the content and the viewing environment into account. 

The “Content Intelligence” system blends scene analysis, environmental sensing, and machine learning to adjust the image on the fly. Features like “Precision Black” enhance dark scenes, while “Light Sense” adapts the picture to the room’s lighting.

Hisense will be the first to feature this AI-driven technology in its RGB Mini LED TVs. The MediaTek Pentonic 800 is the first processor with Dolby Vision 2 AI built in.


