
California bill regulating companion chatbots advances to Senate

The California State Assembly approved legislation Tuesday that would place new safeguards on artificial intelligence-powered chatbots to better protect children and other vulnerable users.

Introduced in July by state Sen. Steve Padilla, Senate Bill 243 requires companies that operate chatbots marketed as “companions” to avoid exposing minors to sexual content, to regularly remind users that they are speaking to an AI rather than a person, and to disclose that chatbots may not be appropriate for minors.

The bill passed the Assembly with bipartisan support and now heads to California’s Senate for a final vote.

“As we strive for innovation, we cannot forget our responsibility to protect the most vulnerable among us,” Padilla said in a statement. “Safety must be at the heart of all developments around this rapidly changing technology. Big Tech has proven time and again, they cannot be trusted to police themselves.”

The push for regulation comes as tragic instances of minors harmed by chatbot interactions have made national headlines. In California, teenager Adam Raine died by suicide after allegedly being encouraged by OpenAI’s chatbot, ChatGPT. In Florida, 14-year-old Sewell Setzer formed an emotional relationship with a chatbot on the platform Character.ai before taking his own life.

A March study by the MIT Media Lab examining the relationship between AI chatbots and loneliness found that higher daily usage correlated with increased loneliness, dependence and “problematic” use, the term researchers used to characterize addictive chatbot use. The study also found that companion chatbots can be more addictive than social media because of their ability to figure out what users want to hear and then supply it.

Setzer’s mother, Megan Garcia, and Raine’s parents have filed separate lawsuits against Character.ai and OpenAI, alleging that the chatbots’ addictive, reward-based features kept the teens engaged and that the platforms failed to intervene when both expressed thoughts of self-harm.

The California legislation also mandates that companies program AI chatbots to respond to signs of suicidal ideation or self-harm, including by directing users to crisis hotlines, and requires annual reporting on how the bots affect users’ mental health. The bill allows families to pursue legal action against companies that fail to comply.


Written by Sophia Fox-Sowell

Sophia Fox-Sowell reports on artificial intelligence, cybersecurity and government regulation for StateScoop. She was previously a multimedia producer for CNET, where her coverage focused on private sector innovation in food production, climate change and space through podcasts and video content. She earned her bachelor’s in anthropology at Wagner College and master’s in media innovation from Northeastern University.




Concordia-led research program is Indigenizing artificial intelligence

An initiative steered by Concordia researchers is challenging the conversation around the direction of artificial intelligence (AI). It charges that the field’s current trajectory is inherently biased against non-Western ways of thinking about intelligence, especially those originating in Indigenous cultures. As a way of decolonizing the future of AI, the researchers have created the Abundant Intelligences research program: an international, multi-institutional and interdisciplinary program that seeks to rethink how we conceive of AI. The driving concept behind it is the incorporation of Indigenous knowledge systems to create an inclusive, robust conception of intelligence and intelligent action, and to embed that conception in existing and future technologies.

The full concept is described in a recent paper for the journal AI & Society.

“Artificial intelligence has inherited conceptual and intellectual ideas from past formulations of intelligence that took on certain colonial pathways to establish itself, such as emphasizing a kind of industrial or production focus,” says Ceyda Yolgörmez, a postdoctoral fellow with Abundant Intelligences and one of the paper’s authors.

This scarcity mindset contributed to resource exploitation and extraction that has extended a legacy of Indigenous erasure still shaping discussions of AI today, adds lead author Jason Edward Lewis, a professor in the Department of Design and Computation Arts and the University Research Chair in Computational Media and the Indigenous Future Imaginary. “The Abundant Intelligences research program is about deconstructing the scarcity mindset and making room for many kinds of intelligence and ways we might think about it.”

The researchers believe this alternative approach can create an AI oriented toward human thriving, one that preserves and supports Indigenous languages, addresses pressing environmental and sustainability issues, re-imagines public health solutions and more.

Relying on local intelligence

The community-based research program is directed from Concordia in Montreal, but much of the local work will be done by individual research clusters, called pods, across Canada, the United States and New Zealand.

The pods will be anchored to Indigenous-centred research and media labs at Western University in Ontario, the University of Lethbridge in Alberta, the University of Hawai’i-West O’ahu, Bard College in New York and Massey University in New Zealand.

They bring together Indigenous knowledge-holders, cultural practitioners, language keepers, educational institutions and community organizations with research scientists, engineers, artists and social scientists to develop new computational practices fitted to an Indigenous-centred perspective.

The researchers are also partnering with AI professionals and industry researchers, believing that the program will open new avenues of research and propose new research questions for mainstream AI research. “For example, how do you build a rigorous system out of a small amount of resource data, like different Indigenous languages?” asks Yolgörmez. “How do you make multi-agent systems that are robust, recognize and support non-human actors and integrate different sorts of activities within the body of a single system?”

Lewis asserts that their approach is both complementary and alternative to mainstream AI research, particularly regarding data sets like Indigenous languages that are much smaller than the ones currently being used by industry leaders. “There is a commitment to working with data from Indigenous communities in an ethical way, compared to simply scraping the internet,” he says. “This yields minuscule amounts of data compared to what the larger companies are working with, but it presents the potential to innovate different approaches when working with small languages. That can be useful to researchers who want to take a different approach than the mainstream.

“This is one of the strengths of the decolonial approach: it’s one way to get out of this tunnel vision belief that there is only one way of doing things.”

Hēmi Whaanga, professor at Massey University in New Zealand, also contributed to the paper.

Read the cited paper: “Abundant intelligences: placing AI within Indigenous knowledge frameworks.”

— By Patrick Lejtenyi

Concordia University







Eckerd College launches new minor in AI studies

It couldn’t have come at a better time. Students have become increasingly reliant on AI for coursework, and national studies are sending up warning signals about the new and creative ways students are using AI to complete assignments.

“AI is definitely a balancing act that I think so many of us in higher education are dealing with,” says Ramsey-Tobienne, who also oversees the College Academic Honor Council. “As professors, we have to decide how, if and when to use it, and we need to help our students develop into critical consumers of AI. Indeed, critical AI literacy is really the foundation of so much of what we’re doing in the minor.

“For better or worse, AI is not going anywhere,” Ramsey-Tobienne adds. “And I think we do ourselves a disservice if we’re not helping our students to understand how to navigate this new AI world.” 




AI drug companies are struggling—but don’t blame the AI

Moonshot hopes of artificial intelligence being used to expedite the development of drugs are coming back down to earth. 

More than $18 billion has flooded into more than 200 biotechnology companies touting AI-driven drug development, with 75 drugs or vaccines entering clinical trials, according to Boston Consulting Group. Now, investor confidence—and funding—is starting to waver.

In 2021, venture capital investment in AI drug companies reached its apex, with more than 40 deals worth about $1.8 billion. This year, there have been fewer than 20 deals worth about half that peak sum, the Financial Times reported, citing data from PitchBook.

Some established companies have struggled. In May, biotech company Recursion shelved three of its prospective drugs in a cost-cutting effort following its merger last year with Exscientia, a similar biotech firm. Fortune previously reported that none of Recursion’s AI-discovered compounds have reached the market as approved drugs. After a major restructuring in December 2024, biotech company BenevolentAI delisted from the Euronext Amsterdam stock exchange in March before merging with Osaka Holdings.

A Recursion spokesperson told Fortune the decision to shelve the drugs was “data-driven” and a planned outcome of its merger with Exscientia.

“Our industry’s 90% failure rate is not acceptable when patients are waiting, and we believe approaches like ours that integrate cutting-edge tools and technologies will be best positioned for long-term success,” the spokesperson said in a statement.

BenevolentAI did not respond to a request for comment.

The struggles of the industry coincide with a broader conversation around the failure of generative AI to deliver more quickly on its lofty promises of productivity and efficiency. An MIT report last month found 95% of generative AI pilots at companies failed to accelerate revenue. A U.S. Census Bureau survey this month found AI adoption in large U.S. companies has declined from its 14% peak earlier this year to 12% as of August.

But the AI technology used to help develop drugs is far different from the large language models used in most workplace initiatives and should therefore not be held to the same standards, according to Scott Schoenhaus, managing director and equity research analyst at KeyBanc Capital Markets Inc. Instead, the industry faces its own set of challenges.

“No matter how much data you have, human biology is still a mystery,” Schoenhaus told Fortune.

Macro and political factors are drying up AI drug development funding

The slowdown in funding, and the slower pace of development results, may stem not from the limitations of the technology itself but from a slew of broader factors, Schoenhaus said.

“Everyone acknowledges the funding environment has dried up,” he said. “The biotech market is heavily influenced by low interest rates. Lower interest rates equals more funding coming into biotechs, which is why we’re seeing funding for biotech at record lows over the last several years, because interest rates have remained elevated.”

It wasn’t always this way. The rise of AI in drug development owes not only to growing access to semiconductor chips, but also to technology that has made mapping the entire human genome quick and, now, cheap. In 2001, it cost more than $100 million to map the human genome. Two decades later, that undertaking cost about $1,000.

Beyond having the pandemic to thank for next-to-nothing interest rates in 2021, COVID also expedited partnerships between AI drug development startups and Big Pharma companies. In early 2022, biotechnology startup AbCellera and Eli Lilly received emergency FDA authorization for a COVID-19 antibody treatment, a tangible example of how the tech could be used to aid in drug discovery.

But since then, there have been other industry hurdles, Schoenhaus said, including Big Pharma cutting back on research and development costs amid slowing demand, as well as uncertainty surrounding whether President Donald Trump would impose a tariff on pharmaceuticals as the U.S. and European Union tussled over a trade deal. Trump signed a memo this week threatening to ban direct-to-consumer advertising for prescription medications, theoretically driving down pharma revenues.

Limitations of AI

That’s not to say there haven’t been technological hiccups in the industry.

“There is scrutiny around the technologies themselves,” Schoenhaus said. “Everyone’s waiting for these readouts to prove that.”

The next 12 months of emerging data from AI drug development startups will be critical in determining how successful these companies stand to be, Schoenhaus said. Some of the results so far have been mixed. For example, Recursion released data from a mid-stage clinical trial of a drug to treat a neurovascular condition in September last year; the drug proved safe, but there was little evidence it was effective. Company shares fell by double digits following the announcement.

These companies are also limited in how they’re able to leverage AI. The drug development process takes about 10 years and is intentionally bottlenecked to ensure the safety and efficacy of the drugs in question, according to David Siderovski, chair of the University of North Texas Health Science Center’s Department of Pharmacology & Neuroscience, who has previously worked with AI drug development companies in the private sector. Biotechnology companies using AI to make these processes more efficient are usually tackling only one small part of this bottleneck, such as screening and identifying drug-like molecules faster than was previously possible.

“There are so many stages that have to be jumped over before you can actually declare the [European Medicines Agency], or the FDA, or Health Canada, whoever it is, will designate this as a safe, approved drug to be marketed to patients out in the world,” Siderovski told Fortune. “That one early bottleneck of auditioning compounds is not the be-all and end-all of satisfying shareholders by announcing, ‘We have approval for this compound as a drug.’”

Smaller companies in the sector have also made a concerted effort to partner less with Big Pharma companies, preferring instead to build their own pipelines, even if it means no longer having access to the resources of industry giants.

“They want to be able to pursue their technology and show the validation of their platform sooner than later,” Schoenhaus said. “They’re not going to wait around for large pharma to pursue a partnered molecule. They’d rather just do it themselves and say, ‘Hey, look, our technology platform works.’”

Schoenhaus sees this strategy as a way for companies to prove themselves by perfecting the use of AI to better understand the slippery, mysterious and still largely unknown frontier of human biology.

“It’s just a very much more complex application of AI,” he said, “hence why I think we are still seeing these companies focus on their own internal pipelines so that they can really, squarely focus their resources on trying to better their technology.”


