

82% Are Skeptical, Yet Only 8% Always Check Sources


Exploding Topics conducted original research into consumer attitudes to AI-generated online content.

Our survey of 1,000+ web users surfaced some fascinating insights.

We found that trust in AI is low, but that this hasn’t prevented an increased reliance on the technology.

Fast facts

  • 42.1% of web users have experienced inaccurate or misleading content in AI Overviews
  • Only 18.6% always or usually click through to the sources of AI Overviews
  • 21.6% of people think AI has made Google searches worse, but 28.9% have seen an improvement
  • Users are almost evenly split on whether AI-generated content makes the internet better or worse overall
  • But more than half are less likely to engage with content marked as AI-generated
  • Just 1 in 5 people want to see more AI-generated content online
  • Three-quarters of respondents are worried about the environmental impact of AI

Download the AI Trust Gap Report

Get full results plus AI sentiment analysis of attitudes to AI Overviews

Download Report

Most people have issues with AI Overviews

AI is changing the way we browse the internet. But what does all this mean for end-users?

We asked respondents to ignore any examples from social media, and concentrate on their own experience. Even so, 71.15% had personally experienced at least one significant mistake in an AI Overview.

The biggest theme was “inaccurate or misleading content”, experienced by 42.1% of search users. 35.82% have found AI Overviews to be “missing important context”, while 31.5% indicated “biased or one-sided answers”.

16.78% of people have even experienced unsafe or harmful advice from an AI Overview.

Quote: I am a healthcare professional and AI Overviews do not always provide evidence-based information

Women (34.44%) were significantly more likely than men (21.03%) to say they had not seen any significant mistakes in AI Overviews.

The results are concerning. More than 1 in 10 Google searches trigger an AI Overview, with that ratio more than doubling from January to March 2025.

There have been plenty of viral examples of inaccurate or confusing results.


We gave respondents the chance to describe their specific personal experiences with AI Overviews. There were some notable recurring themes. The crux of the matter is the quality of the information being surfaced. “Incorrect”, “wrong”, and “inaccurate” were all mentioned numerous times.

Quote: AI Overviews provide "misleading and incorrect results"

“Sometimes” was also mentioned frequently, reflecting that inconsistency is one of the biggest problems with AI Overviews.

Our AI sentiment analysis of nearly 400 user responses uncovered a troubling 4:1 negative-to-positive sentiment ratio. Download the full report to discover the specific issues driving user dissatisfaction.
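
For readers curious how a ratio like this is derived, the sketch below shows one way to classify free-text responses and count negative versus positive labels. It uses an off-the-shelf Hugging Face sentiment classifier purely as an illustration; it is not the exact pipeline behind our report, and the sample responses (apart from the quote above) are made up.

```python
# Illustrative only: classify free-text survey responses and compute a
# negative-to-positive sentiment ratio with a generic off-the-shelf model.
from collections import Counter

from transformers import pipeline  # pip install transformers

responses = [
    "AI Overviews provide misleading and incorrect results",
    "Sometimes the summary is helpful, sometimes it's just wrong",
    "It saves me time when I'm in a hurry",
    # ...nearly 400 free-text responses in the real dataset
]

classifier = pipeline("sentiment-analysis")  # general-purpose English sentiment model

labels = Counter(result["label"] for result in classifier(responses))
negative, positive = labels.get("NEGATIVE", 0), labels.get("POSITIVE", 0)

if positive:
    print(f"Negative-to-positive ratio: {negative / positive:.1f}:1")
```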


Trust in AI Overviews is weak

Given that the majority of users have encountered at least one significant mistake in an AI Overview, it is not surprising that overall trust in the search tool is low.

Only 8.5% of respondents always trust AI Overviews.

More than 1 in 5 (21.05%) say that they never trust them.

By far the most significant attitude is that users only “sometimes trust” AI Overviews. 61.17% of participants chose this response, meaning around 82% of people are at least somewhat skeptical.

Survey results: trust in AI Overviews

Older people are the most skeptical of AI Overviews by a significant margin. Only 4.3% of respondents over the age of 60 always trust AI Overviews, and 30.94% never trust them.

But trust does not decline linearly with age. In fact, the next-most cautious age group is 18-29-year-olds, only 5.56% of whom always trust AI Overviews.

Conversely, people aged 30-44 are the most likely to trust AI Overviews. This age group contains the highest proportion of respondents who “always” trust them, and the lowest proportion who say they “never” do.

Chart showing trust in AI Overviews by age

Low trust in AI Overviews, but limited fact-checking

So 3 in 5 people only sometimes trust AI Overviews, and 1 in 5 people never trust them.

Yet despite that, across all age groups, more than 40% rarely or never click through from AI Overviews to the source material.

Just 7.71% report always following the links provided by AI Overviews, and only 10.97% usually do so.

Survey results: clicks to sources in AI Overviews

How do we reconcile that? Claire Broadley, lead editor at Exploding Topics and an SEO and content marketing professional of 15+ years, believes that users are balancing reliability with convenience:

“AI Overviews (and AI Mode) represent a move towards convenience. The danger is that some AI Overviews may be ‘good enough’, and some may be harmful.

“Given these results, it’s clear we can’t rely on people to read our content and check. Businesses will have to be on board with optimizing for AI search visibility, or they risk leaving it to Gemini to join the dots.”

The extent to which content professionals can rely on their audience clicking through from AI Overviews also appears to depend on the household income of the target market. High-earners showed a significantly higher propensity to visit the source material.

56% of respondents with a household income between $175,000 and $199,999 “always” or “usually” clicked on links provided in AI Overviews. 42.1% of respondents earning $200,000 or more did the same, well above the overall average.

Among those with a household income of $10,000 to $99,999, 46.53% “rarely” or “never” clicked on links. Only 14% did so “always” or “usually”.

Trust in AI Overviews segmented by income

People who routinely click the links to source material are far more likely to trust AI Overviews.

Among those who say they “always” follow the links in AI Overviews, 62.82% also say that they “always” trust those Overviews.

Conversely, among those who “never” follow links, only 1.96% report “always” trusting AI Overviews.


Most users would keep AI Overviews

It’s clear that audiences have a highly complex relationship with AI Overviews. They don’t completely trust them, but nor do they routinely fact-check them — and on balance, they would not get rid of them.

70.62% of people believe that Google search is either the same as or better than it was before the launch of AI Overviews.

Survey results: opinions about Google Search with AI Overviews

And given the choice to enable or disable AI Overviews, only 36.6% would turn them off. 43.03% would turn them on, with a little over 1 in 5 people undecided.

Survey results: would respondents enable or disable AI Overviews

This backs up what we heard earlier from Claire. Users know that the information they are receiving is less reliable, but the convenience trade-off is generally considered worth it.

Despite being the most inclined to fact-check, users with higher household incomes are also more likely to say they would enable AI Overviews if given a toggle option. Only 17.54% of users with household income of over $200,000 would turn the Overviews off.

“AI slop”: Are users really bothered?

We’ve seen a broad range of attitudes to the role of AI in web searches. But what about when the actual content online (and not just the search results) is AI-generated?

The derogatory term “AI slop” is used to refer to low-quality content flooding online spaces. Anecdotally, AI has produced unworkable knitting and crochet patterns, and entirely fictional posts in the Reddit “AITA” forum.

Many of the pages surfaced by web searches have also been crafted with the assistance of AI. The percentage of AI-generated content on Medium is as high as 37.03%.

However, overall user attitudes to artificially generated content are broadly balanced. 39.84% of people believe that AI-created content at least slightly improves the quality of the internet.

That’s more than the 36.94% of people who think AI-generated content has made the internet worse.

Survey results: AI effect on quality of the internet

On the other hand, people who believe that AI has made the internet worse tend to hold stronger convictions. The majority of those who have seen an improvement think it has been “slight”, whereas detractors are more likely to say things have “greatly” worsened.

Moreover, 50.3% of people would be less likely to engage with content marked as AI-generated. Only 18.51% would be more likely to engage.

Survey results: engagement with AI-generated content

Women in particular show less interest in engaging with AI content. 55.57% of women would be less likely to engage with content labeled as AI-generated, compared with 42.54% of men.

Regionally speaking, the Mid-Atlantic appears most receptive to engaging with AI-generated content. Only 34.03% would be put off by an AI label, while 34.72% would actually be more likely to engage.

Conversely, respondents from the West North Central have the least time for overtly AI-generated media. Only 6.38% would be more likely to engage with AI-labeled content, and 57.45% would be less likely.

The AI content Rubicon: This far, but no further

There is no clear consensus on whether AI content is currently a net positive or negative. However, the data takes clearer shape when it comes to desires for the future.

Only 21.78% of people want to see more AI-generated content online, and just 10.79% want to see “much more”.

Meanwhile, more than 1 in 4 would like to see the amount of AI-generated content stay “about the same”.

Survey results: would you like to see an increase or decrease in AI content

And despite apparent ambivalence over the current impact of AI on the quality of the internet, 48.12% of users want to see “less” or “much less” AI content moving forward.

In other words, 74.06% of internet users would like to see either a pause or reversal in the amount of AI-generated content online.

There is a significant gender divide on that front. Whereas 15.89% of men wish to see “much more” AI-generated content, only 7.32% of women think the same.

At the other end, 42.54% of men would like to see “less” or “much less” AI-generated content in the future. That figure is almost 10 percentage points higher for women (51.91%).

AI-generated content views, segmented by gender

Additionally, we can see a similar age split to the one we observed in the tendency to follow the links in AI Overviews.

Those aged 30-44 (who checked the sources of AI Overviews least often) are also more likely to want more AI content on the internet in the future. Those aged 60+, who showed the highest levels of skepticism with AI Overviews, are correspondingly more keen on scaling back AI content.

84.89% of this older age group wanted the same amount or less AI content in future.

As with the AI Overviews results, respondents aged 18-29 emerged as the next-most AI-skeptical age group.

Only 8.33% favored “much more” AI content in the future, with a further 7.78% wanting “more”. Exactly 25% wanted “about the same”, 23.33% favored less, and 31.67% expressed a preference for “much less”.

Opinions on AI-generated content for 18-29 age group

And those who want more AI are also the most inclined to trust it. Over 90% of people who expressed a wish for “much more” AI content also said that they sometimes or always trusted AI Overviews.

72.48% of those who want “much more” AI also believe that Google search has improved since AI Overviews were introduced. Just 3.67% think it has gotten worse.


Environmental fears around AI

AI Overviews and AI-generated web content both have an environmental impact. One estimate says that generating text takes about 30x more energy than extracting it from a source.

A significant majority of web users are concerned by the environmental consequences of AI. 74.46% are at least a little worried.

More than a third (34.46%) of respondents say that the environmental impact of AI worries them “a lot”.

Survey results: environmental impact of AI

Curiously, those who want to see more AI are also the most conscious of its environmental impact.

Among those who said they wanted “much more” AI-generated content, 70.64% said that the environmental consequences worried them “a lot”.

Likewise, among those who “always trust” AI Overviews, 68.6% worry a lot about the environmental impact — well above the overall average.

Despite their overall AI skepticism, older people are the most likely to reject environmental concerns.

On average, only 14.21% of internet users aged 18-44 are “not at all” worried about AI’s effect on the environment. Among users aged 45 and over, 19.73% had no concerns, a figure which rises above 20% in the 60+ age group.

What the AI trust gap means for online content

For the most part, internet users are aware of the pitfalls attached to increased AI in search and web content.

They know that it comes with a risk of inaccurate or misleading results. They know that it cannot be wholly trusted. They are even significantly concerned by the environmental impact.

Yet despite all of this, the trade-off for practicality and convenience means that there is only limited appetite for a reversal of the AI developments we have already seen:

  • Most people would keep AI Overviews if given the choice, and “about the same” is the most popular answer when it comes to the future levels of AI content online.
  • Those who oppose the proliferation of AI Overviews and AI-generated content often do so in strong terms. But it would be wrong to mistake this for the prevailing view.
  • That being said, users’ embrace of AI is qualified and tentative. Most people don’t want to see the internet taken up with more AI-generated content in the future, and anything labeled as AI will face an uphill battle for trust and engagement.

For content marketers, it is clearly necessary to adapt to the world of AI, which is not going anywhere soon. But at the same time, it is vital to recognize and harness the added authority that comes from a human author, and to ensure that all content — regardless of its provenance — is accurate, trustworthy, and valuable to the audience it is designed to serve.

Download the AI Trust Gap Report

Get full results plus AI sentiment analysis of attitudes to AI Overviews

Download Report

Methodology

The survey comprised 1,115 respondents. Of those, 1,027 said they were aware of an increase in AI-generated content, with the remainder being filtered out of the survey.

Respondents who moved beyond the screener question were asked a further 10 questions about AI and its impact. We also gathered demographic data.

There were 570 female respondents, 392 male respondents, and 10 non-binary respondents. 13 preferred to describe their gender identity in another way, and 25 preferred not to say.

Respondents were adults from throughout the USA, across a wide range of ages. Median household income was $50,000-$74,999.





I asked ChatGPT to help me pack for my vacation – try this awesome AI prompt that makes planning your travel checklist stress-free


It’s that time of year again, when those of us in the northern hemisphere pack our sunscreen and get ready to venture to hotter climates in search of some much-needed Vitamin D.

Every year, I book a vacation, and every year I get stressed as the big day gets closer, usually forgetting to pack something essential, like a charger for my Nintendo Switch 2 or, dare I say it, my passport.





Denodo Announces Plans to Further Support AI Innovation by Releasing Denodo DeepQuery, a Deep Research Capability — TradingView News


PALO ALTO, Calif., July 07, 2025 (GLOBE NEWSWIRE) — Denodo, a leader in data management, announced the Denodo DeepQuery capability, now available in private preview and generally available soon, enabling generative AI (GenAI) to go beyond retrieving facts to investigating, synthesizing, and explaining its reasoning. Denodo also announced the availability of Model Context Protocol (MCP) support as part of the Denodo AI SDK.

Built to address complex, open-ended business questions, DeepQuery will leverage live access to a wide spectrum of governed enterprise data across systems, departments, and formats. Unlike traditional GenAI solutions, which rephrase existing content, DeepQuery, a deep research capability, will analyze complex, open questions and search across multiple systems and sources to deliver well-structured, explainable answers rooted in real-time information. To help users operate this new capability to better understand complex current events and situations, DeepQuery will also leverage external data sources to extend and enrich enterprise data with publicly available data, external applications, and data from trading partners.

DeepQuery, beyond what’s possible using traditional generative AI (GenAI) chat or retrieval augmented generation (RAG), will enable users to ask complex, cross-functional questions that would typically take analysts days to answer—questions like, “Why did fund outflows spike last quarter?” or “What’s driving changes in customer retention across regions?” Rather than piecing together reports and data exports, DeepQuery will connect to live, governed data across different systems, apply expert-level reasoning, and deliver answers in minutes.

Slated to be packaged with the Denodo AI SDK, which streamlines AI application development with pre-built APIs, DeepQuery is being developed as a fully extensible component of the Denodo Platform, enabling developers and AI teams to build, experiment with, and integrate deep research capabilities into their own agents, copilots, or domain-specific applications.

“With DeepQuery, Denodo is demonstrating forward-thinking in advancing the capabilities of AI,” said Stewart Bond, Research VP, Data Intelligence and Integration Software at IDC. “DeepQuery, driven by deep research advances, will deliver more accurate AI responses that will also be fully explainable.”

Large language models (LLMs), business intelligence tools, and other applications are beginning to offer deep research capabilities based on public Web data; pre-indexed, data-lakehouse-specific data; or document-based retrieval, but only Denodo is developing deep research capabilities, in the form of DeepQuery, that are grounded in enterprise data across all systems, data that is delivered in real-time, structured, and governed. These capabilities are enabled by the Denodo Platform’s logical approach to data management, supported by a strong data virtualization foundation.

Denodo DeepQuery is currently available in private preview. Denodo is inviting select organizations to join its AI Accelerator Program, which offers early access to DeepQuery capabilities, as well as the opportunity to collaborate with Denodo's product team to shape the future of enterprise GenAI.

“As a Denodo partner, we’re always looking for ways to provide our clients with a competitive edge,” said Nagaraj Sastry, Senior Vice President, Data and Analytics at Encora. “Denodo DeepQuery gives us exactly that. Its ability to leverage real-time, governed enterprise data for deep, contextualized insights sets it apart. This means we can help our customers move beyond general AI queries to truly intelligent analysis, empowering them to make faster, more informed decisions and accelerating their AI journey.”

Denodo also announced support of Model Context Protocol (MCP), and an MCP Server implementation is now included in the latest version of the Denodo AI SDK. As a result, all AI agents and apps based on the Denodo AI SDK can be integrated with any MCP-compliant client, providing customers with a trusted data foundation for their agentic AI ecosystems based on open standards.
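
To make “MCP-compliant” concrete: an MCP client and server exchange JSON-RPC 2.0 messages such as initialize, tools/list, and tools/call. The rough sketch below illustrates that exchange over HTTP; the endpoint URL, tool name, and arguments are placeholders chosen for illustration, not Denodo's documented interface.

```python
# Illustration of the JSON-RPC 2.0 exchange an MCP client performs.
# The endpoint, tool name, and arguments are placeholders, not the
# Denodo AI SDK's documented interface (which may also use other transports).
import itertools
import json

import requests

MCP_ENDPOINT = "http://localhost:8080/mcp"  # placeholder address
_ids = itertools.count(1)


def call(method: str, params: dict | None = None) -> dict:
    """Send one JSON-RPC 2.0 request and return the parsed response."""
    message = {"jsonrpc": "2.0", "id": next(_ids), "method": method}
    if params is not None:
        message["params"] = params
    response = requests.post(MCP_ENDPOINT, json=message, timeout=30)
    response.raise_for_status()
    return response.json()


# 1. Handshake: the client announces itself and the protocol version it speaks.
call("initialize", {
    "protocolVersion": "2025-03-26",
    "capabilities": {},
    "clientInfo": {"name": "example-agent", "version": "0.1"},
})

# 2. Discover the tools the server exposes (e.g. governed data queries).
print(json.dumps(call("tools/list"), indent=2))

# 3. Invoke a tool by name ("deep_query" and its argument are hypothetical).
print(json.dumps(call("tools/call", {
    "name": "deep_query",
    "arguments": {"question": "Why did fund outflows spike last quarter?"},
}), indent=2))
```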

“AI’s true potential in the enterprise lies not just in generating responses, but in understanding the full context behind them,” said Angel Viña, CEO and Founder of Denodo. “With DeepQuery, we’re unlocking that potential by combining generative AI with real-time, governed access to the entire corporate data ecosystem, no matter where that data resides. Unlike siloed solutions tied to a single store, DeepQuery leverages enriched, unified semantics across distributed sources, allowing AI to reason, explain, and act on data with unprecedented depth and accuracy.”

Additional Information

  • Denodo Platform: What’s New
  • Blog Post: Smarter AI Starts Here: Why DeepQuery Is the Next Step in GenAI Maturity
  • Demo: Watch a short video of this capability in action.

About Denodo

Denodo is a leader in data management. The award-winning Denodo Platform is the leading logical data management platform for transforming data into trustworthy insights and outcomes for all data-related initiatives across the enterprise, including AI and self-service. Denodo’s customers in all industries all over the world have delivered trusted AI-ready and business-ready data in a third of the time and with 10x better performance than with lakehouses and other mainstream data platforms alone. For more information, visit denodo.com.

Media Contacts

pr@denodo.com





Sakana AI: Think LLM dream teams, not single models


Enterprises may want to start thinking of large language models (LLMs) as ensemble casts that can combine knowledge and reasoning to complete tasks, according to Japanese AI lab Sakana AI.

Sakana AI in a research paper outlined a method called Multi-LLM AB-MCTS (Adaptive Branching Monte Carlo Tree Search) that uses a collection of LLMs to cooperate, perform trial-and-error and leverage strengths to solve complex problems.

In a post, Sakana AI said:

“Frontier AI models like ChatGPT, Gemini, Grok, and DeepSeek are evolving at a breathtaking pace amidst fierce competition. However, no matter how advanced they become, each model retains its own individuality stemming from its unique training data and methods. We see these biases and varied aptitudes not as limitations, but as precious resources for creating collective intelligence. Just as a dream team of diverse human experts tackles complex problems, AIs should also collaborate by bringing their unique strengths to the table.”

Sakana AI said AB-MCTS is a method for inference-time scaling to enable frontier AIs to cooperate and revisit problems and solutions. Sakana AI released the algorithm as an open source framework called TreeQuest, which has a flexible API that allows users to use AB-MCTS for tasks with multiple LLMs and custom scoring.
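
TreeQuest's actual API isn't shown in the post, but the core idea can be sketched roughly as follows: several models take turns either proposing fresh answers or refining the most promising answer so far, with a task-specific scorer deciding which candidates to keep. The model names, the ask_llm helper, and the scorer below are placeholders, and this toy loop stands in for the real adaptive-branching Monte Carlo tree search.

```python
# Toy sketch of multi-LLM search in the spirit of AB-MCTS; not TreeQuest's API.
import heapq
import random
from dataclasses import dataclass, field


def ask_llm(model: str, prompt: str) -> str:
    """Placeholder: in practice this would call a real model API."""
    return f"[{model}] attempt at: {prompt[:40]}..."


def score(answer: str) -> float:
    """Placeholder: a task-specific scorer, e.g. unit tests or a judge model."""
    return random.random()


@dataclass(order=True)
class Candidate:
    neg_score: float                  # negated so the heap keeps the best answer on top
    answer: str = field(compare=False)


def multi_llm_search(problem: str, models: list[str], budget: int = 12) -> str:
    """Each step either widens (fresh attempt) or deepens (passes the current
    best answer to another model as a hint to improve on)."""
    frontier: list[Candidate] = []
    for step in range(budget):
        model = models[step % len(models)]
        if frontier and random.random() < 0.5:
            hint = frontier[0].answer  # best candidate found so far
            prompt = f"{problem}\nPrevious attempt:\n{hint}\nImprove or correct it."
        else:
            prompt = problem
        answer = ask_llm(model, prompt)
        heapq.heappush(frontier, Candidate(neg_score=-score(answer), answer=answer))
    return frontier[0].answer


print(multi_llm_search("Solve this ARC-style puzzle ...",
                       ["o4-mini", "DeepSeek-R1", "Gemini-2.5-Pro"]))
```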

What’s interesting is that Sakana AI gets out of that zero-sum LLM argument. The companies behind LLM training would like you to think there’s one model to rule them all. And you’d do the same if you were spending so much on training models and wanted to lock in customers for scale and returns.

Sakana AI’s deceptively simple solution can only come from a company that’s not trying to play LLM leapfrog every few minutes. The power of AI is in the ability to maximize the potential of each LLM. Sakana AI said:

“We saw examples where problems that were unsolvable by any single LLM were solved by combining multiple LLMs. This went beyond simply assigning the best LLM to each problem. In (an) example, even though the solution initially generated by o4-mini was incorrect, DeepSeek-R1-0528 and Gemini-2.5-Pro were able to use it as a hint to arrive at the correct solution in the next step. This demonstrates that Multi-LLM AB-MCTS can flexibly combine frontier models to solve previously unsolvable problems, pushing the limits of what is achievable by using LLMs as a collective intelligence.”

A few thoughts:

  • Sakana AI’s research and its move to emphasize collective intelligence over a single LLM and stack is critical for enterprises that need to create architectures that don’t lock them into one provider.
  • AB-MCTS could play into what agentic AI needs to become to be effective and complement emerging standards such as Model Context Protocol (MCP) and Agent2Agent.
  • If combining multiple models to solve problems becomes frictionless, the costs will plunge. Will you need to pay up for OpenAI when you can leverage LLMs like DeepSeek combined with Gemini and a few others? 
  • Enterprises may want to start thinking about how to build decision engines instead of an overall AI stack. 
  • We could see a scenario where a collective of LLMs achieves superintelligence before any one model or provider. If that scenario plays out, can LLM giants maintain valuations?
  • The value in AI may not be in the infrastructure or foundational models in the long run, but in the architecture and approaches.
