AI Insights

Character.AI Explores Sale or New Funding Amid Rising Costs

Character.AI is weighing its options between a potential sale or raising new capital, sources told The Information.

The maker of artificial intelligence (AI)-powered character chatbots has, in recent weeks, held discussions with possible buyers, bankers and staff. Executives have also talked with investors about raising “a few hundred million dollars at a valuation of more than $1 billion,” according to one person with direct knowledge of the conversations. 

Founders Noam Shazeer and Daniel De Freitas, both former Google researchers, agreed last September to rejoin the tech giant to work on Gemini. After they left, Character.AI’s employees took over ownership of the now roughly 70-person startup. 

If sold, the buyer would get its app and website, which host chatbots designed by the startup and independent creators. These bots resemble anime characters, celebrities and historical figures.

In June, Character.AI hired a new CEO, Karandeep Anand, a former Meta and Brex executive. This month, the startup launched a social feed for sharing AI-generated videos and collaborating on content creation, while also selling ads from brands such as Yelp and Webtoon.

Character.AI’s trajectory is an example of a new trend called a “reverse acquihire,” where founders and researchers of smaller AI firms move to Big Tech companies in exchange for licensing deals. Similar arrangements have played out at least six times since early 2024, often leaving the remaining startups struggling for survival. 

For example, Google in July hired top executives from coding startup Windsurf in a $2.4 billion licensing deal. Windsurf was then sold to Cognition for an undisclosed amount.

Character.AI makes most of its revenue by charging $9.99 per month for premium features such as voice calls with chatbots. The startup expects to reach $50 million in annualized revenue by the end of the year, up from about $30 million last month, according to The Information. At $1 billion, the startup would be valued at roughly 33 times its recent annualized revenue, in line with other AI apps.

The platform has grown to 20 million monthly active users as of February, but the costs of running large-scale AI models continue to climb. It stopped developing its own models after the founders and technical leaders left, and now relies on open-source models from DeepSeek, Meta and others. This reduced development expenses but left Character.AI with ongoing operating costs estimated in the millions per month.

Meanwhile, the startup is facing two lawsuits alleging that it exposed children to harmful content. Texas Attorney General Ken Paxton also launched an investigation earlier this month into whether the startup misled children through deceptive marketing.

California has also been advancing a bill that would regulate the use of AI companion chatbots. Senate Bill 243 is one of the first major attempts in the U.S. to regulate AI companions, particularly their impact on minors. The bill would require chatbot companies to ban reward systems that encourage use, remind users that the chatbot isn't human and perform regular audits, among other stipulations.



LifeLong Learning and TXST expand series on Artificial Intelligence

Dr. Marianne Reese, Founder and Director of LifeLong Learning, conceived of the AI series due to AI’s exponential growth and the need for the public to understand its uses and limitations.

“AI is a relatively new tool that is being used in ways the public is often unaware of,” Reese noted. “We all need to know more about this powerful technology, understand AI’s positive and concerning applications, and learn the skills necessary to scrutinize the information it generates.

“AI will become increasingly prevalent, so we need to be informed consumers as AI impacts politics, medicine, business, finance and other areas of our lives,” Reese said.

The AI Learning Series is led by Dr. Kimberly Connor, Digital Strategy Lead for Information Technology at Texas State. Connor's role is to help demystify innovation and make technology approachable for students, staff and faculty. With a rare combination of expertise in law, education and IT, Dr. Connor bridges the gap between complex digital tools and the people who use them.

Almost 80 lifelong learners attended the AI Series Kickoff Event on Tuesday, Aug. 19.

The Sept. 3 class covers AI's use of our personal data, along with AI-generated misinformation and scams.

The Sept. 17 class features a comparison of different AI services (e.g., ChatGPT, Gemini).

The Sept. 29 class covers AI for personal enrichment, such as enhancing hobbies and expanding personal interests.

The Oct. 1 class covers practical AI tools for daily life, with an exploration of AI applications for communication and creative projects.

The Oct. 15 class covers AI reliability and accuracy, AI limitations and best practices for verification.

The final class on Nov. 3 features hands-on activities and a closing presentation.

For more information, visit lllsanmarcos.org.



China Calls for Regulation of Investment in Artificial Intelligence

In a move reflecting a cautious strategic shift, China has called for curbing "excessive investment" and "random competition" in the artificial intelligence sector, even as it classifies AI as a key driver of national economic growth and a critical field of competition with the United States.

Chang Kailin, a senior official at the National Development and Reform Commission, the country's top economic planning body, confirmed that Beijing will take a coordinated, integrated approach to developing artificial intelligence across its provinces, leveraging each region's strengths and local industrial resources to avoid duplicated effort. Chang also warned against a "herd mentality" of investing without careful planning.

These statements come as China's manufacturing sector contracts for a fifth consecutive month, underscoring the pressures on the world's second-largest economy. Policymakers are trying to avoid repeating past mistakes, such as those in the electric vehicle sector, where overinvestment led to excess production capacity and subsequent deflationary pressures.

Chinese President Xi Jinping also warned last month against local governments rushing into artificial intelligence without proper planning, a clear sign of the leadership's desire to regulate the pace of growth in this vital sector.

Despite these warnings, China continues to accelerate the development, application and governance of artificial intelligence. Last week the government unveiled a new action plan to boost the sector, including significant support for private companies and encouragement for strong startups capable of competing globally. The commission described this as a pursuit of "dark horses" in the innovation race, an implicit reference to success stories such as the Chinese company DeepSeek.

DeepSeek gained international fame earlier this year after launching a powerful, low-cost artificial intelligence model that competes with those of major American companies, igniting a wave of domestic and international interest in Chinese technology.

Separately, a Bloomberg analysis showed that Chinese technology companies plan to install more than 115,000 artificial intelligence chips from the American company Nvidia in massive data centers under construction in the desert regions of western China, indicating a continued push to build out AI infrastructure despite regulatory constraints.

These steps come at a time when Beijing seeks to balance support for technological innovation with regulating investment chaos, in an attempt to shape a more sustainable path for the growth of artificial intelligence within China’s broader economic vision.




A new research project is the first comprehensive effort to categorize all the ways AI can go wrong, and many of those behaviors resemble human psychiatric disorders.

Scientists have suggested that when artificial intelligence (AI) goes rogue and acts counter to its intended purpose, it exhibits behaviors resembling human psychopathologies. To that end, they have created a new taxonomy of 32 AI dysfunctions so that people in a wide variety of fields can understand the risks of building and deploying AI.

In new research, the scientists set out to categorize the risks of AI in straying from its intended path, drawing analogies with human psychology. The result is “Psychopathia Machinalis” — a framework designed to illuminate the pathologies of AI, as well as how we can counter them. These dysfunctions range from hallucinating answers to a complete misalignment with human values and aims.


