
AI Research

‘Existential crisis’: how Google’s shift to AI has upended the online news model



When the chief executive of the Financial Times suggested at a media conference this summer that rival publishers might consider a “Nato for news” alliance to strengthen negotiations with artificial intelligence companies, there was a ripple of chuckles from attendees.

Yet Jon Slade’s revelation that his website had seen a “pretty sudden and sustained” decline of 25% to 30% in traffic to its articles from readers arriving via internet search engines quickly made clear the serious nature of the threat the AI revolution poses.

Queries typed into sites such as Google, which accounts for more than 90% of the search market, have been central to online journalism since its inception, with news providers optimising headlines and content to ensure a top ranking and revenue-raising clicks.

But now Google’s AI Overviews, which sit at the top of the results page and summarise responses, often negating the need to follow links to content, as well as its recently launched AI Mode tab, which answers queries in a chatbot format, have prompted fears of a “Google zero” future in which traffic referrals dry up.

“This is the single biggest change to search I have seen in decades,” says one senior editorial tech executive. “Google has always felt like it would always be there for publishers. Now the one constant in digital publishing is undergoing a transformation that may completely change the landscape.”

Last week, the owner of the Daily Mail revealed in its submission to the Competition and Markets Authority’s consultation on Google’s search services that AI Overviews have fuelled a drop in click-through traffic to its sites by as much as 89%.

DMG Media and other leading news organisations, including Guardian Media Group and the magazine trade body the Periodical Publishers Association (PPA), have urged the competition watchdog to make Google more transparent and provide traffic statistics from AI Overview and AI Mode to publishers as part of its investigation into the tech firm’s search dominance.

Publishers – already under financial pressure from soaring costs, falling advertising revenues, the decline of print and the wider trend of readers turning away from news – argue that they are effectively being forced by Google to either accept deals, including on how content is used in AI Overview and AI Mode, or “drop out of all search results”, according to several sources.

On top of the threat to funding, there are concerns about AI’s impact on accuracy. While Google has improved the quality of its overviews since earlier iterations advised users to eat rocks and add glue to pizza, problems with “hallucinations” – where AI presents incorrect or fabricated information as fact – remain, as do issues with in-built bias, when a computer rather than a human decides how to summarise sources.

Google Discover has replaced search as the main source of traffic click-throughs to content. Photograph: Samuel Gibbs/The Guardian

In January, Apple promised to update an AI feature that issued untrue summaries of BBC news alerts, stamped with the corporation’s logo, on its latest iPhones; alerts incorrectly claimed that the man accused of killing a US insurance boss had shot himself and that tennis star Rafael Nadal had come out as gay.

In a blogpost last month, Liz Reid, Google’s head of search, said the introduction of AI in search was “driving more queries and quality clicks”.

“This data is in contrast to third-party reports that inaccurately suggest dramatic declines in aggregate traffic,” she said. “[These reports] are often based on flawed methodologies, isolated examples, or traffic changes that occurred prior to the rollout of AI features in search.”

However, while she said overall traffic to all websites is “relatively stable”, she admitted that across the “vast” web user trends are shifting traffic to different sites, “resulting in decreased traffic to some sites and increased traffic to others”.

In recent years, Google Discover, which feeds users articles and videos tailored to them based on their past online activity, has replaced search as the main source of click-throughs to content.

However, David Buttle, founder of the consultancy DJB Strategies, says the service, which is also tied to publishers’ overall search deals, does not deliver the quality traffic that most publishers need to drive their long-term strategies.

“Google Discover is of zero product importance to Google at all,” he says. “It allows Google to funnel more traffic to publishers as traffic from search declines … Publishers have no choice but to agree or lose their organic search. It also tends to reward clickbaity type content. It pulls in the opposite direction to the kind of relationship publishers want.”

Meanwhile, publishers are fighting a wider battle with AI companies seeking to plunder their content to train their large language models.

The creative industry is intensively lobbying the government to ensure that proposed legislation does not allow AI firms to use copyright-protected work without permission, a move that would stop the “value being scraped” out of the £125bn sector.


The Make It Fair campaign in February focused on the threat to the creative industries from generative AI. Photograph: Geoffrey Swaine/Rex

Some publishers have struck bilateral licensing deals with AI companies – such as the FT, the German media group Axel Springer, the Guardian and the Nordic publisher Schibsted with the ChatGPT maker OpenAI – while others such as the BBC have taken action against AI companies alleging copyright theft.

“It is a two-pronged attack on publishers, a sort of pincer movement,” says Chris Duncan, a former News UK and Bauer Media senior executive who now runs a media consultancy, Seedelta. “Content is disappearing into AI products without serious remuneration, while AI summaries are being integrated into products so there is no need to click through, effectively taking money from both ends. It is an existential crisis.”

While publishers are pursuing action on multiple fronts – from dealmaking and legal action to regulatory lobbying – they are also implementing AI tools into newsrooms and creating their own query-answering tools. The Washington Post and the FT have launched their own AI-powered chatbots, Climate Answers and Ask FT, that source results only from their own content.

Christoph Zimmer, chief product officer at Germany’s Der Spiegel, says that while its traffic is currently stable he expects referrals from all platforms to decline.

“This is a continuation of a longstanding trend,” he says. “However, this affects brands that have not focused on building direct relationships and subscriptions in recent years even more strongly. Instead, they have relied on reach on platforms and sometimes generic content.

“What has always been true remains true – a focus on quality and distinct content, and having a human in charge rather than just in the loop.”

One publishing industry executive says the battle to strike deals to help train AI models to aggregate and summarise stories is rapidly being superseded by advances that are seeing models interpret live news.

“The first focus has been on licensing deals for training AI, to ‘speak English’, but that is becoming less important over time,” says the executive. “It is becoming about delivering the news, and for that you need accurate live sources. That is a potentially really lucrative market which publishers are thinking about negotiating next.”

Saj Merali, chief executive of the PPA, says a fair balance needs to be struck between a tech-driven change in consumers’ digital habits and the fair value of trusted news.

“What doesn’t seem to be at the heart of this is what consumers need,” she says. “AI needs trustworthy content. There is a shift in how consumers want to see information, but they have to have faith in what they are reading.

“The industry has been very resilient through quite major digital and technological changes, but it is really important we make sure there is a route to sustain models. At the moment the AI and tech community are showing no signs of supporting publisher revenue.”




AI Research

Robix: A Unified Model for Robot Interaction, Reasoning and Planning


By Huang Fang and 8 other authors

Abstract: We introduce Robix, a unified model that integrates robot reasoning, task planning, and natural language interaction within a single vision-language architecture. Acting as the high-level cognitive layer in a hierarchical robot system, Robix dynamically generates atomic commands for the low-level controller and verbal responses for human interaction, enabling robots to follow complex instructions, plan long-horizon tasks, and interact naturally with humans within an end-to-end framework. Robix further introduces novel capabilities such as proactive dialogue, real-time interruption handling, and context-aware commonsense reasoning during task execution. At its core, Robix leverages chain-of-thought reasoning and adopts a three-stage training strategy: (1) continued pretraining to enhance foundational embodied reasoning abilities including 3D spatial understanding, visual grounding, and task-centric reasoning; (2) supervised finetuning to model human-robot interaction and task planning as a unified reasoning-action sequence; and (3) reinforcement learning to improve reasoning-action consistency and long-horizon task coherence. Extensive experiments demonstrate that Robix outperforms both open-source and commercial baselines (e.g., GPT-4o and Gemini 2.5 Pro) in interactive task execution, showing strong generalization across diverse instruction types (e.g., open-ended, multi-stage, constrained, invalid, and interrupted) and various user-involved tasks such as table bussing, grocery shopping, and dietary filtering.

Submission history

From: Wei Li
[v1]
Mon, 1 Sep 2025 03:53:47 UTC (29,592 KB)
[v2]
Thu, 11 Sep 2025 12:40:54 UTC (29,592 KB)




AI Research

Brown awarded $20 million to lead artificial intelligence research institute aimed at mental health support



A $20 million grant from the National Science Foundation will support the new AI Research Institute on Interaction for AI Assistants, called ARIA, based at Brown to study human-artificial intelligence interactions and mental health. The initiative, announced in July, aims to help develop AI support for mental and behavioral health. 

“The reason we’re focusing on mental health is because we think this represents a lot of the really big, really hard problems that current AI can’t handle,” said Associate Professor of Computer Science and Cognitive and Psychological Sciences Ellie Pavlick, who will lead ARIA. After viewing news stories about AI chatbots’ damage to users’ mental health, Pavlick sees renewed urgency in asking, “What do we actually want from AI?”

The initiative is part of a bigger investment from the NSF to support the goals of the White House’s AI Action Plan, according to an NSF press release. This “public-private investment,” the press release says, will “sustain and enhance America’s global AI dominance.”

According to Pavlick, she and her fellow researchers submitted the proposal for ARIA “years ago, long before the administration change,” but the response was “very delayed” due to “a lot of uncertainty at (the) NSF.” 

One of these collaborators was Michael Frank, the director of the Center for Computational Brain Science at the Carney Institute and a professor of psychology. 

Frank, who was already working with Pavlick on projects related to AI and human learning, said that the goal is to tie together collaborations of members from different fields “more systematically and more broadly.”

According to Roman Feiman, an assistant professor of cognitive and psychological sciences and linguistics and another member of the ARIA team, the goal of the initiative is to “develop better virtual assistants”. But that goal comes with obstacles: ensuring the machines “treat humans well”, behave ethically and remain controllable.

Within the study, some “people work [on] basic cognitive neuroscience, other people work more on human machine interaction (and) other people work more on policy and society,” Pavlick explained.

Although the ARIA team consists of many faculty and students at Brown, according to Pavlick, other institutions, including Carnegie Mellon University, the University of New Mexico and Dartmouth, are also involved. On top of “basic science” research, ARIA’s work also examines best practices for patient safety and the legal implications of AI.

“As everybody currently knows, people are relying on (large language models) a lot, and I think many people who rely on them don’t really know how best to use them, and don’t entirely understand their limitations,” Feiman said.

According to Frank, the goal is not to “replace human therapists,” but rather to assist them.

Assistant Professor of the Practice of Computer Science and Philosophy Julia Netter, who studies the ethics of technology and responsible computing and is not involved in ARIA, said that ARIA has “the right approach.” 

Netter said ARIA’s approach differs from previous research “in that it really tried to bring in experts from other areas, people who know about mental health” and beyond, rather than those who focus solely on computer science.

But the ethics of using AI in a mental health context is a “tricky question,” she added.

“This is an area that touches people at a point in time when they are very, very vulnerable,” Netter said, adding that any interventions that arise from this research should be “well-tested.” 

“You’re touching an area of a person’s life that really has the potential of making a huge difference, positive or negative,” she added.

Because AI is “not going anywhere,” Frank said he is excited to “understand and control it in ways that are used for good.”

“My hope is that there will be a shift from just trying stuff and seeing what gets a better product,” Feiman said. “I think there’s real potential for scientific enterprise — not just a profit-making enterprise — of figuring out what is actually the best way to use these things to improve people’s lives.”


AI Research

BITSoM launches AI research and innovation lab to shape future leaders



Mumbai: The BITS School of Management (BITSoM), under the aegis of BITS Pilani, a leading private university, will inaugurate its new BITSoM Research in AI and Innovation (BRAIN) Lab on its Kalyan campus on Friday. The lab is designed to prepare future leaders for workplaces transformed by artificial intelligence.


While explaining the concept of the laboratory, professor Saravanan Kesavan, dean of BITSoM, said that the BRAIN Lab had three core pillars: teaching, research and outreach. Kesavan said, “It provides MBA (Master of Business Administration) students a dedicated space equipped with high-performance AI computers capable of handling tasks such as computer vision and large-scale data analysis. Students will not only learn about AI concepts in theory but also experiment with real-world applications.” Kesavan added that each graduating student would be expected to develop an AI product as part of their coursework, giving them first-hand experience in innovation and problem-solving.

The BRAIN lab is also designed to be a hub of collaboration where researchers can conduct projects in partnership with various companies and industries, creating a repository of practical AI tools to use. Kesavan said, “The initial focus areas (of the lab) include manufacturing, healthcare, banking and financial services, and Global Capability Centres (subsidiaries of multinational corporations that perform specialised functions).” He added that the case studies and research from the lab will be made freely available to schools, colleges, researchers, and corporate partners, ensuring that the benefits of the lab reach beyond the BITSoM campus.

BITSoM also plans to use the BRAIN Lab as a launchpad for startups. An AI programme will support entrepreneurs in developing solutions as per their needs while connecting them to venture capital networks in India and Silicon Valley. This will give young companies the chance to refine their ideas with guidance from both academics and industry leaders.

The centre’s physical setup resembles a modern computer lab, with dedicated workspaces, collaborative meeting rooms, and brainstorming zones. It has been designed to encourage creativity, allowing students to visualise how AI works, customise tools for different industries, and allow their technical capabilities to translate into business impacts.

In the context of a global workplace that is embracing AI, Kesavan said, “Future leaders need to understand not just how to manage people but also how to manage a workforce that combines humans and AI agents. Our goal is to ensure every student graduating from BITSoM is equipped with the skills to build AI products and apply them effectively in business.”

Kesavan said that advisors from reputed institutions such as Harvard, Johns Hopkins, the University of Chicago, and industry professionals from global companies will provide guidance to students at the lab. Alongside student training, BITSoM also plans to run reskilling programmes for working professionals, extending its impact beyond the campus.


