
AI Insights

Google’s AI Overviews Hit Sour Note With Rolling Stone

A magazine publisher is accusing Google’s AI summaries of using its work without consent.

Penske Media, which owns Rolling Stone, Billboard and Variety, has sued the tech giant in federal court in Washington, D.C., saying that its artificial intelligence (AI) search summaries are reducing its web traffic, Reuters reported Saturday (Sept. 13).

The report noted that the suit is the first time a major American publisher has sued Google over the AI-generated summaries that have come to top its search results.

In this case, Penske is accusing Google of including publishers’ websites in search results only if it can also use their articles in AI summaries. Without that leverage, Google would be forced to pay media companies to republish their work or use it to train its AI systems, Reuters added, citing the lawsuit.

The suit added that Google was able to dictate these terms due to its dominant position in the online search market, citing a court ruling last year that found the company owned nearly 90% of that market.

“We have a responsibility to proactively fight for the future of digital media and preserve its integrity — all of which is threatened by Google’s current actions,” Penske said.

Reached for comment by PYMNTS, a Google spokesperson said that AI Overviews offer a better user experience and direct traffic to an array of sites, and that the company will defend itself against what it called “meritless” claims.

Google introduced AI Overviews in May 2024, saying they would frequently show up at the top of search results, often replacing traditional website links.

PYMNTS reported at the time that this change aimed to offer users speedier access to information but could alter how businesses think about search engine optimization (SEO) and online advertising.

The Penske lawsuit is one of several pieces of litigation involving news publishers, writers and media companies who accuse the AI sector of improper use of their material.

Last week, Anthropic agreed to pay $1.5 billion to settle a high-profile copyright violation lawsuit in a case involving a group of authors who accused the startup of illegally accessing their books.

This latest suit comes two months after the Independent Publishers Alliance filed an antitrust complaint with the European Commission, claiming that Google’s AI Overviews are an abuse of the company’s online search market power.

The alliance argues that by placing the summaries at the top of search results, Google disadvantages the publishers’ original content. 

“New AI experiences in Search enable people to ask even more questions, which creates new opportunities for content and businesses to be discovered,” a Google spokesperson said in response to the complaint.





xAI lays off 500 AI tutors working on Grok


Elon Musk’s artificial intelligence startup xAI has laid off 500 workers from its data annotation team, which helps train its Grok chatbot.

The layoffs were earlier reported by Business Insider.

The AI company notified employees over email that it was planning to downsize its team of generalist AI tutors, according to messages viewed by the publication. The company said the “strategic pivot” meant prioritizing specialist AI tutors, while scaling back its focus on general AI tutor roles.

In response to the story, xAI directed reporters to a post on X, in which the company said it plans to expand its specialist AI tutor team by “10X” and intends to open roles on its careers page.

The human data annotator team at xAI plays a key role in teaching Grok to understand the world by labeling, contextualizing, and categorizing raw data used to train the chatbot. The email sent by xAI said that laid-off workers would be paid through either the end of their contract or Nov. 30, but their access to company systems would be terminated the day of the layoff notice.

Prior to the layoff, xAI’s data annotation team was one of the company’s largest, with 1,500 full-time and contract staff members, including AI tutors. The reorganization of the data annotation team comes on the back of a leadership shake-up that reportedly saw nine employees exit the firm last week.

As a sign of its changing approach to training Grok, xAI on Thursday asked some of its AI tutors to prepare for tests covering traditional domains such as STEM, coding, finance and medicine, as well as quirkier specialties such as Grok’s “personality and model behavior” and “doomscrollers,” Business Insider reported.

Musk launched xAI in 2023 to compete with OpenAI and Google DeepMind in the race to lead the AI field. He introduced Grok as a safe and truthful alternative to what he characterized as competitors’ “woke” chatbots prone to censorship.





Google’s newest AI datacenter & its monstrous CO2 emissions


The environmental impact of the rise of AI is a very real concern, and it’s not one that’s going away in a hurry, especially when Google’s planned new datacenter in the UK looks set to emit as much carbon dioxide in a year as hundreds of flights a week would.

That’s according to a report from The Guardian, which has seen the plans for the new facility and its carbon impact assessment.





China doubts artificial intelligence use in submarines


by Alimat Aliyeva

The integration of artificial intelligence into submarine warfare may reduce a crew’s chances of survival to as little as 5%, according to a South China Morning Post (SCMP) report citing a study led by Meng Hao, a senior engineer at the Chinese Institute of Helicopter Research and Development, Azernews reports.

Researchers analyzed an advanced anti-submarine warfare (ASW) system enhanced by AI, which is designed to detect and track even the most stealthy submarines. The system relies on real-time intelligent decision-making, allowing it to respond rapidly and adaptively to underwater threats. According to the study, only one out of twenty submarines may be able to avoid detection and attack under such conditions, a major shift in naval combat dynamics.

“As global powers accelerate the militarization of AI, this study suggests the era of ‘invisible’ submarines — long considered the backbone of strategic deterrence — may be drawing to a close,” SCMP notes.

Historically, stealth has been a submarine’s most valuable asset, allowing it to operate undetected and deter adversaries through uncertainty. However, the rise of AI-enabled systems threatens to upend this balance by minimizing human response delays, analyzing massive data sets, and predicting submarine behavior with unprecedented precision.

The implications extend far beyond underwater warfare. In August, Nick Wakeman, editor-in-chief of Defense One, reported that the U.S. Army is also exploring AI for use in air operations control systems. AI could enhance resilience to electronic warfare, enable better integration of drones, and support the deployment of autonomous combat platforms in contested airspace.

The growing role of AI in modern militaries, from the seabed to the stratosphere, raises new questions not only about tactical advantage, but also about ethical decision-making, autonomous weapons control, and the future of human involvement in combat scenarios.

As nations continue investing in next-generation warfare technology, experts warn that AI may not just change how wars are fought; it could redefine what survivability means on the modern battlefield.


