AI Insights

C3 AI Selected for Constellation ShortList™ for Artificial Intelligence and Machine Learning Best-of-Breed Platforms for Q3 2025

Enterprise AI leader recognized for building, deploying and managing breakthrough AI and machine learning capabilities with flexibility and limitless scale

REDWOOD CITY, Calif., August 14, 2025–(BUSINESS WIRE)–C3 AI (NYSE: AI), the Enterprise AI application software company, was selected for the Constellation ShortList™ for Artificial Intelligence and Machine Learning Best-of-Breed Platforms for Q3 2025. C3 AI has now been named to five ShortLists in the past 18 months, further positioning the company as the leading enterprise AI software provider for accelerating digital transformation.

“At C3 AI, we provide services to build enterprise-scale AI applications more efficiently and cost-effectively. We’re in the business of solving real business problems and cultivating social and economic growth through our efforts,” said Thomas M. Siebel, Chairman and CEO, C3 AI. “Our ongoing recognition on Constellation ShortLists reaffirms what we know to be true: C3 AI’s ability to create custom AI and ML models is a model for the industry, and the best is yet to come.”

C3 AI is recognized alongside 15 other technology vendors and service providers for offering all the tools, notebooks, diverse data science libraries, and collaborative monitoring tools a company needs to build, deploy and manage custom machine learning models. The platforms on this ShortList use both traditional and automated methods, and are steadily introducing no-code/low-code and automated capabilities. With these new capabilities, data-savvy team members can build and deploy machine learning models without deep data science expertise.

“The vendors selected for this list are in a class of their own, chosen for their excellence in producing business value with flexibility and scale,” said R “Ray” Wang, CEO and founder at Constellation Research. “Comprehensive vetting and research inform our recommendations, revealing why the listed vendors are the best AI and machine learning service providers on the market, with platforms and offerings that buy-side clients can depend on.”

About C3.ai, Inc.

C3 AI (NYSE: AI) is the Enterprise AI application software company. C3 AI delivers a family of fully integrated products including the C3 AI Platform, an end-to-end platform for developing, deploying, and operating enterprise AI applications; C3 AI applications, a portfolio of industry-specific SaaS enterprise AI applications that enable the digital transformation of organizations globally; and C3 Generative AI, a suite of large AI transformer models for the enterprise.




AI Insights

PR News | Will Artificial Intelligence Destroy the Communications Industry?

By Simon Erskine Locke

I recently met a leader in the communications industry, and as we were chatting over coffee, he shared that he’s been hearing the phrase “two things can be true at the same time” a lot recently. This is also something I’ve been saying for a couple of years in discussions around politics, AI, and a variety of other issues.

In a polarized world in which opinions are shared as fact, data and statistics are made to fit ideologies and the truth doesn’t seem to matter, expressing the view that two seemingly contradictory perspectives can both be true is a pragmatic way to find common ground. It recognizes that there are different ways to look at the same issues.

While making the effort to recognize different perspectives is healthy, ideologues (on either side of the political spectrum) are rarely interested in recognizing that there may be another side to an argument. When you are devoted to a particular position, the idea of an alternate version — or even the acknowledgement that there may be grey between black and white — creates cognitive dissonance.

Why bring this up? In part, because many of the discussions around AI seem similarly polarized.

For many, AI is still the shiny new tool that will write great emails, automate the lengthy process of engaging with journalists, or lead to faster and easier content generation. For others, AI will kill jobs, dumb down the industry, or lead us to an existential doomsday in which the rise of content leads to the fall of engagement.

As someone who has spent significant time with AI companies, building tools, working with various LLMs, and discussing the impact of AI with lawmakers, I firmly believe that there are reasons to be optimistic and pessimistic. It’s not all black and white.

One way to frame the discussion of AI is to think of it like electricity. Electricity powers the economy, driving machines that do many different things. Some of those things are good. Some are not. Electricity gives us light, but it can also kill us.

AI, like electricity, is not intrinsically good or bad. It’s what we do with it that matters. As communicators, we have agency. We decide which choices will shape the future of the industry. We are not powerless.

We are responsible for making decisions about how AI is employed. And, consequently, if we get this wrong, shame on us. If communicators ultimately put the industry out of business by automating the engagement process with journalists, mass producing content to game LLM algorithms, and delegating thinking to chatbots — rather than helping the next generation of communicators hone their writing, editing, fact checking, and critical thinking skills — that will be on us.

Equally, if we don’t leverage AI, we will miss an opportunity. AI can help streamline workflows, and its access to the vast body of knowledge on the internet can lead to smarter, more informed engagement with reporters and more impactful content.

A key takeaway from conversations with AI startups is that they are now able to do things that were simply not possible two years ago. One is making the restaurant booking process more efficient, extending the longevity of the restaurants it works with, which keeps staff employed. Another company’s voice technology is enabling local government to serve constituents at any time and in any language.

As with every other generational technology shift, some jobs will disappear, and others will be created. Communicators need to avoid both Panglossian optimism and the trap of seeing AI as the end of days.

Finding the right use cases and implementing the technology effectively will be essential. The customer service line of a major financial institution announces, “We are using AI to deliver exceptional customer service,” only to require the customer to repeat the same basic information three times. This underscores the distance between AI’s potential and the imperfect experience most of us encounter every day.

Pragmatic agency and corporate communications leaders will continue to experiment and invest time in understanding what is now possible with AI. They will need to implement tools selectively, while carefully considering the impact of their decisions on the industry in the years to come.

At this stage, there is an element of the blind leading the blind with AI. Startups are not omniscient. Communicators looking to applications as a magic bullet are going to be sorely disappointed. We are already seeing questions about the returns on the rush of gold into AI, significant gaps between vision and experience, and the dark side of the technology in areas such as rising fraud and malicious deepfakes. As I have written previously, AI is creating new problems to solve – and is a driving force behind new solutions, including content provenance authentication.

Just because you can do something doesn’t mean you should, at least not without careful consideration of use cases, consequences and implementation. AI has enormous potential but also brings a whole new set of challenges and, potentially, existential risks. That these two seemingly opposite things can both be true underscores the weight of our responsibility to get this right.

***

Simon Erskine Locke is founder & CEO of CommunicationsMatch™ and cofounder & CEO of Tauth.io, which provides trusted content authentication based on C2PA standards. He is a former head of communications functions at Prudential Financial, Morgan Stanley and Deutsche Bank, and founder of communications consultancies.






AI Insights

Anthropic to pay authors $1.5 billion to settle lawsuit over pirated chatbot training material

NEW YORK (AP) — Artificial intelligence company Anthropic has agreed to pay $1.5 billion to settle a class-action lawsuit by book authors who say the company took pirated copies of their works to train its chatbot.

The landmark settlement, if approved by a judge as soon as Monday, could mark a turning point in legal battles between AI companies and the writers, visual artists and other creative professionals who accuse them of copyright infringement.

The company has agreed to pay authors about $3,000 for each of an estimated 500,000 books covered by the settlement (roughly 500,000 books × $3,000 ≈ $1.5 billion, matching the settlement total).

“As best as we can tell, it’s the largest copyright recovery ever,” said Justin Nelson, a lawyer for the authors. “It is the first of its kind in the AI era.”

A trio of authors — thriller novelist Andrea Bartz and nonfiction writers Charles Graeber and Kirk Wallace Johnson — sued last year and now represent a broader group of writers and publishers whose books Anthropic downloaded to train its chatbot Claude.

A federal judge dealt the case a mixed ruling in June, finding that training AI chatbots on copyrighted books wasn’t illegal but that Anthropic wrongfully acquired millions of books through pirate websites.

If Anthropic had not settled, experts say, losing the case after a scheduled December trial could have cost the San Francisco-based company even more money.

“We were looking at a strong possibility of multiple billions of dollars, enough to potentially cripple or even put Anthropic out of business,” said William Long, a legal analyst for Wolters Kluwer.

U.S. District Judge William Alsup of San Francisco has scheduled a Monday hearing to review the settlement terms.

Books are known to be important sources of data — in essence, billions of words carefully strung together — that are needed to build the AI large language models behind chatbots like Anthropic’s Claude and its chief rival, OpenAI’s ChatGPT.

Alsup’s June ruling found that Anthropic had downloaded more than 7 million digitized books that it “knew had been pirated.” It started with nearly 200,000 from an online library called Books3, assembled by AI researchers outside of OpenAI to match the vast collections on which ChatGPT was trained.

Debut thriller novel “The Lost Night” by Bartz, a lead plaintiff in the case, was among those found in the Books3 dataset.

Anthropic later took at least 5 million copies from the pirate website Library Genesis, or LibGen, and at least 2 million copies from the Pirate Library Mirror, Alsup wrote.

The Authors Guild told its thousands of members last month that it expected “damages will be minimally $750 per work and could be much higher” if Anthropic was found at trial to have willfully infringed their copyrights. The settlement’s higher award — approximately $3,000 per work — likely reflects a smaller pool of affected books, after taking out duplicates and those without copyright.

On Friday, Mary Rasenberger, CEO of the Authors Guild, called the settlement “an excellent result for authors, publishers, and rightsholders generally, sending a strong message to the AI industry that there are serious consequences when they pirate authors’ works to train their AI, robbing those least able to afford it.”

Copyright
© 2025 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.




AI Insights

Associate professor in ECE advances artificial intelligence collaboration across devices regardless of connection speed

Can smart devices collaborate to train artificial intelligence (AI) models when they experience poor internet connections? Yes, and Xiaowen Gong, the Godbold Associate Professor in electrical and computer engineering, can prove it.

Gong’s recently completed National Science Foundation-funded research, “Quality-Aware Distributed Computation for Wireless Federated Learning: Channel-Aware User Selection, Mini-Batch Size Adaptation, and Scheduling,” demonstrates how smart devices can collaborate to build better AI models regardless of connection quality, turning network limitations from a barrier into a manageable constraint.

Originally funded and commissioned in 2021, his work paves the way for smarter, faster and safer technologies — powering innovations that could make robots more capable, augmented reality/virtual reality experiences more immersive, vehicles more autonomous and wireless systems more intelligent.

“Our algorithms enable federated learning in wireless networked systems where devices often have unreliable, time-varying and heterogeneous communication and computation capabilities,” Gong said. “Our research improves learning accuracy and accelerates the training process, all while enabling devices to participate with greater flexibility.”

Federated learning allows multiple devices — like smartphones, tablets or sensors — to collaboratively train an AI model without sharing their raw data. Instead of sending sensitive information to a central server, devices process data locally and share only the learning updates. This approach protects privacy while enabling AI systems to learn from diverse data sources.
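To make the mechanism concrete, here is a minimal sketch of the general federated-averaging idea described above, assuming a toy least-squares model; it is illustrative only, not Gong's specific algorithm, and all names and numbers are assumptions. Each simulated device trains locally on its private data and shares only its updated weights, which a central server averages:

```python
# Minimal federated-averaging sketch (illustrative, not Gong's algorithm):
# each "device" fits a linear model on its own private data and shares only
# the updated weights; the server never sees the raw data.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, steps=5):
    """A few steps of local gradient descent on this device's data."""
    w = weights.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # least-squares gradient
        w -= lr * grad
    return w

# Three devices, each holding private samples from the same underlying model.
true_w = np.array([2.0, -1.0])
devices = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.1 * rng.normal(size=50)
    devices.append((X, y))

global_w = np.zeros(2)
for _ in range(20):  # communication rounds
    # Each device trains locally; only the weights leave the device.
    local_ws = [local_update(global_w, X, y) for X, y in devices]
    global_w = np.mean(local_ws, axis=0)  # server averages the updates

print("learned:", global_w, "true:", true_w)
```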

“AI isn’t just something that lives in massive data centers anymore,” Gong said. “It’s happening on the devices we use every day, like phones, automobiles, and smart home systems. Our work helps these devices learn together, even when their internet connections are not perfect. That means smarter predictions, faster responses and better performance in real-world conditions.”

Existing federated learning methods often do not perform well when devices have unreliable connections or different computational capabilities, leading to slower training and less accurate models.

Gong’s research tackles this problem through a method described as quality-aware distributed computation. The new algorithms intelligently select which devices participate in each training round and adjust how much work each device does based on its connection quality and computational power.
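As a rough illustration of what such a rule might look like, the hypothetical sketch below ranks devices by estimated channel quality and scales each selected device's mini-batch size to its compute power; the scoring rule, field names, and numbers are all assumptions rather than the algorithm from Gong's project:

```python
# Hypothetical quality-aware device selection (assumed scoring rule, not
# Gong's algorithm): pick the best-connected devices for this round and
# size each one's mini-batch to its relative compute power.
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    channel_quality: float  # estimated link reliability in [0, 1]
    compute_power: float    # relative local processing speed

def select_and_schedule(devices, k=2, base_batch=64):
    """Choose the k best-connected devices; assign batch sizes by compute."""
    chosen = sorted(devices, key=lambda d: d.channel_quality, reverse=True)[:k]
    return {d.name: max(8, int(base_batch * d.compute_power)) for d in chosen}

fleet = [
    Device("phone", channel_quality=0.9, compute_power=0.5),
    Device("car", channel_quality=0.4, compute_power=1.0),
    Device("sensor", channel_quality=0.7, compute_power=0.25),
]
print(select_and_schedule(fleet))  # {'phone': 32, 'sensor': 16}
```

In a deployed system the channel estimates would be refreshed every round and devices could drop in and out between rounds, which is the flexibility Gong describes.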

“Our methods not only improve the learning accuracy of federated learning but also accelerate the training process, while allowing devices to participate with greater flexibility, even if some devices drop in and out,” he said.

“Imagine your smart assistant learning new things 30% faster, or your car reacting more quickly to changing traffic. That’s the kind of improvement we’re seeing. This isn’t just about speed. It’s about making AI more responsive and reliable in everyday life.”


