Trump Family Adds $1.3 Billion of Crypto Wealth in Span of Weeks

It took just a few eventful weeks for President Donald Trump’s family to rack up about $1.3 billion from two crypto ventures, each less than a year old.



FTC Probes AI Chatbots’ Impact on Child Safety

The Federal Trade Commission (FTC) is investigating the effect of artificial intelligence (AI) chatbots on children and teens.

The commission announced Thursday (Sept. 11) that it was issuing orders to seven providers of AI chatbots in search of information on how those companies measure and monitor potentially harmful impacts of the technology on young people.

The companies in question are Google, Character.AI, Instagram, Meta, OpenAI, Snap and xAI.

“AI chatbots may use generative artificial intelligence technology to simulate human-like communication and interpersonal relationships with users,” the FTC said in a news release. “AI chatbots can effectively mimic human characteristics, emotions and intentions, and generally are designed to communicate like a friend or confidant, which may prompt some users, especially children and teens, to trust and form relationships with chatbots.”

According to the release, the FTC wants to know what measures, if any, these companies have taken to determine the safety of their chatbots when serving as companions.

It is also seeking information on how the companies limit the products’ use by children and teens, mitigate potential negative effects on them, and inform users and parents of the risks associated with the products.

“The FTC is interested in particular in the impact of these chatbots on children and what actions companies are taking to mitigate potential negative impacts, limit or restrict children’s or teens’ use of these platforms, or comply with the Children’s Online Privacy Protection Act Rule,” the news release added.

As noted here last week when reports of the FTC’s efforts first emerged, some companies have already tried to address this issue.

For instance, OpenAI has said it would add teen accounts that can be monitored by parents. Character.AI has made similar changes, and Meta has added restrictions for people under 18 who use its AI products.

Those reports came the same day First Lady Melania Trump hosted a meeting of the White House Task Force on Artificial Intelligence Education. In a news release issued before the event, Trump said the rise of AI must be managed responsibly.

“During this primitive stage, it is our duty to treat AI as we would our own children—empowering, but with watchful guidance,” Trump said. “We are living in a moment of wonder, and it is our responsibility to prepare America’s children.”

Meanwhile, Character.AI CEO Karandeep Anand said last month he foresees a future where people have AI friends.

“They will not be a replacement for your real friends, but you will have AI friends, and you will be able to take learnings from those AI-friendly conversations into your real-life conversations,” Anand told the Financial Times.






General Counsel’s Job Changing as More Companies Adopt AI

The general counsel’s role is evolving to include more conversations around policy and business direction as more companies deploy artificial intelligence, panelists said Thursday at a University of California, Berkeley conference.

“We are not just lawyers anymore. We are driving a lot of the policy conversations, the business conversations, because of the geopolitical issues going on and because of the regulatory, or lack thereof, framework for products and services,” said Lauren Lennon, general counsel at Scale AI, a company that uses data to train AI systems.

Scattered regulation and fraying international alliances are also redefining the general counsel’s job, panelists …




California Bill Regulating Companion Chatbots Advances to Senate

The California State Assembly approved legislation Tuesday that would place new safeguards on artificial intelligence-powered chatbots to better protect children and other vulnerable users.

Introduced in July by state Sen. Steve Padilla, Senate Bill 243 requires companies that operate chatbots marketed as “companions” to avoid exposing minors to sexual content, to regularly remind users that they are speaking to an AI rather than a person, and to disclose that chatbots may not be appropriate for minors.

The bill passed the Assembly with bipartisan support and now heads to California’s Senate for a final vote.

“As we strive for innovation, we cannot forget our responsibility to protect the most vulnerable among us,” Padilla said in a statement. “Safety must be at the heart of all developments around this rapidly changing technology. Big Tech has proven time and again, they cannot be trusted to police themselves.”

The push for regulation comes as tragic instances of minors harmed by chatbot interactions have made national headlines. Earlier this year, Adam Raine, a teenager in California, died by suicide after allegedly being encouraged by OpenAI’s chatbot, ChatGPT. In Florida, 14-year-old Sewell Setzer formed an emotional relationship with a chatbot on the platform Character.AI before taking his own life.

A March study by the MIT Media Lab examining the relationship between AI chatbots and loneliness found that higher daily usage correlated with increased loneliness, dependence and “problematic” use, the term the researchers used to characterize chatbot addiction. The study also found that companion chatbots can be more addictive than social media, owing to their ability to figure out what users want to hear and provide exactly that.

Setzer’s mother, Megan Garcia, and Raine’s parents have filed separate lawsuits against Character.AI and OpenAI, alleging that the chatbots, built with addictive and reward-based features, failed to intervene when the teens expressed thoughts of self-harm.

The California legislation also mandates that companies program AI chatbots to respond to signs of suicidal thoughts or self-harm, including by directing users to crisis hotlines, and requires annual reporting on how the bots affect users’ mental health. The bill allows families to pursue legal action against companies that fail to comply.


Written by Sophia Fox-Sowell

Sophia Fox-Sowell reports on artificial intelligence, cybersecurity and government regulation for StateScoop. She was previously a multimedia producer for CNET, where her coverage focused on private sector innovation in food production, climate change and space through podcasts and video content. She earned her bachelor’s in anthropology at Wagner College and master’s in media innovation from Northeastern University.


