AI Research

Trust in data is critical to Artificial Intelligence adoption, says TELUS survey. But is that right?


The flood of new AI reports continues apace – not always with good news for users or the AI sector, as we have seen.

A new survey from customer experience specialist TELUS Digital comes with the headline that user trust in AI depends on how the training data is sourced.

That’s a bold and heartening claim. Especially when most leading AI tools – ChatGPT among them (800 million active weekly users) – have been trained on data scraped from the pre-2023 Web, often without permission, and sometimes from known pirate sources. Fifty-plus lawsuits are ongoing worldwide against AI vendors for breaches of copyright.

Meanwhile, a March report from Rettighedsalliancen, Denmark’s Rights Alliance, presents data suggesting that Apple, Anthropic, DeepSeek, Meta, Microsoft, NVIDIA, OpenAI, Runway AI, and music platform Suno scraped known pirated content, such as the free LibGen library. (Suno has admitted to scraping nearly every high-res audio file off the internet, while Meta’s policy of using pirated texts was cited by Judge Chhabria in his copyright judgment last week.)

So, on what basis does TELUS Digital make the claim that trust and data transparency are critical to AI customers, given that the world’s usage data would seem to say otherwise? OpenAI’s subscription revenues have doubled in the past 12 months. What price transparency there?

The evidence is apparently this: TELUS Digital’s survey of 1,000 US adults finds that 87% believe companies should be transparent about how they source data for Generative AI models. That is up from 75% in a similar survey last year, which – if nothing else – does suggest that news of vendors’ unethical behavior on copyright has an impact.

More, nearly two-thirds of respondents (65%) say that the exclusion of high-quality, verified content – TELUS Digital cites the New York Times, Reuters, and Bloomberg – can lead to inaccurate and/or biased responses from Large Language Models (LLMs).

Interesting stuff, especially given the US Government’s “fake news” war on traditional media, backed by Big Tech and the likes of Elon Musk, all of whom have a vested interest in dismantling the edifice of 20th-century media. “You can trust us”, they say, while sucking up the proprietary content of that century at industrial scale.

Yet while the TELUS Digital survey does suggest that transparency is a growing issue for users in the US – despite the overwhelming force applied by AI vendors, the attempted banning of US state regulation (just overturned by the Senate), and the force-feeding of ChatGPT, Copilot, Gemini, Claude, and other tools on every cloud platform – the figures tell us that customers use the tools regardless. Perhaps while holding their noses.

So, the question is: why do they deploy ChatGPT et al despite their makers’ apparent contempt for creators’ copyright – policies that are being tested in US courts? The answer is found in other reports this year (see diginomica, passim): users primarily adopt AI to save money and time, not to make smarter decisions. And because hype and competitive peer pressure compel them to.

Even so, the growing awareness of vendors’ disregard for creators’ rights has an effect, it seems. This suggests that, if vendors really want their subscription revenues to overtake their vast capex on data centers and chips, then adopting an ethical stance is one way to do it. But that will cost them money: paying for the data they should have licensed in the first place.

Expert data: the way forward

So, what does TELUS Digital make of it all?

Amith Nair is Global VP and General Manager, Data and AI Solutions, at the Vancouver, Canada-headquartered provider. Nair says:

As AI systems become more specialized and embedded in high-stakes use cases, the quality of the datasets used to optimize outputs is emerging as a key differentiator for enterprises between average performance and having the potential to drive real-world impacts.

We’re well past the era where general crowdsourced or internet data can meet today’s enterprises’ more complex and specialized use cases. This is reflected in the shift in our clients’ requests from ‘wisdom of the crowd’ datasets to ‘wisdom of the experts’.

Experts and industry professionals help curate such datasets to ensure they are technically sound, contextually relevant and responsibly built.

Nair adds:

In high-stakes domains like healthcare or finance, even a single mislabelled data point can distort model behavior in ways that are difficult to detect and costly to correct.

Fair enough. And as my earlier report revealed, academic studies of LLM behavior find deep problems for the technology whenever real-world complexity challenges any simple prompted answers. In many cases, the deeper we dig into LLMs’ responses, the less accurate and more prone to hallucination they become, having been trained on both fact and fiction, of course.

My take

So, verified, expert, high-quality data is clearly the way ahead, plus the availability of human experts to verify AIs’ workings. But as I suggested above, LLMs’ and Gen-AI’s problems are not as easily solved as that.

First, user behavior is strongly biased towards expediency, and towards cost and time savings. It is not targeted at making smarter decisions: in this sense, AI is little more than the new automation for many enterprise users.

Second, these tools do not hold data in a traditional database. Instead, training text is dissolved into tokens, weights, and statistical probabilities. As a result, flawed or inaccurate data persists; it can’t simply be deleted.
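
Why deletion is impossible can be seen with a toy model. The sketch below is a crude bigram counter, nothing like a production LLM, but it illustrates the same property: once documents are folded into pooled statistics, no individual source survives as a deletable record.

```python
# Toy illustration (not any vendor's implementation): a bigram "model"
# trained on raw counts. Individual documents dissolve into the pooled
# statistics, so a single source cannot later be removed.
from collections import defaultdict

def train(corpus):
    """Count word-pair frequencies across all documents combined."""
    counts = defaultdict(lambda: defaultdict(int))
    for doc in corpus:
        words = doc.split()
        for a, b in zip(words, words[1:]):
            counts[a][b] += 1
    return counts

def next_word_probability(counts, a, b):
    """P(b | a) estimated from the pooled counts."""
    total = sum(counts[a].values())
    return counts[a][b] / total if total else 0.0

docs = ["the cat sat", "the cat ran", "the dog sat"]
model = train(docs)
# "cat" follows "the" in 2 of the 3 documents, so P = 2/3.
print(next_word_probability(model, "the", "cat"))
# Deleting docs[0] now changes nothing: its influence is already baked
# into the counts, just as flawed training text persists in LLM weights.
```

A real model’s weights are vastly more diffuse than these counts, which is exactly why excising one bad source after training is so hard.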

Therefore, one can only hope that hallucinations are challenged and corrected, despite ample evidence from professional markets, such as legal services, that even seasoned experts are prone to trust chatbots’ output without question.

So, why have lawyers presented hallucinated case law in courts across the US? Because they are time poor and overwhelmed with paperwork, and AI CEOs have allegedly lied about their products’ proximity to superintelligence. Marketing BS, in other words: currently the most destructive force on Earth.

And third, as synthetic data booms and the internet is overrun with AI slop generated by millions of shadow-IT users (AIs’ largest customer base), verified, human-authored data will become harder to find, not easier.

The irony of all this is obvious: the least transparent and most exploitative vendors – the ones that dominate the market – have grown fat on selling effort-free text, images, and video to users, rather than solving real-world problems.

What they should have done is sell trust to professionals first.





I asked ChatGPT to help me pack for my vacation – try this awesome AI prompt that makes planning your travel checklist stress-free


It’s that time of year again, when those of us in the northern hemisphere pack our sunscreen and get ready to venture to hotter climates in search of some much-needed Vitamin D.

Every year, I book a vacation, and every year I get stressed as the big day gets closer, usually forgetting to pack something essential, like a charger for my Nintendo Switch 2, or dare I say it, my passport.






Denodo Announces Plans to Further Support AI Innovation by Releasing Denodo DeepQuery, a Deep Research Capability



PALO ALTO, Calif., July 07, 2025 (GLOBE NEWSWIRE) — Denodo, a leader in data management, announced the availability of the Denodo DeepQuery capability, now as a private preview, and generally available soon, enabling generative AI (GenAI) to go beyond retrieving facts to investigating, synthesizing, and explaining its reasoning. Denodo also announced the availability of Model Context Protocol (MCP) support as part of the Denodo AI SDK.

Built to address complex, open-ended business questions, DeepQuery will leverage live access to a wide spectrum of governed enterprise data across systems, departments, and formats. Unlike traditional GenAI solutions, which rephrase existing content, DeepQuery, a deep research capability, will analyze complex, open questions and search across multiple systems and sources to deliver well-structured, explainable answers rooted in real-time information. To help users operate this new capability to better understand complex current events and situations, DeepQuery will also leverage external data sources to extend and enrich enterprise data with publicly available data, external applications, and data from trading partners.

DeepQuery, beyond what’s possible using traditional generative AI (GenAI) chat or retrieval augmented generation (RAG), will enable users to ask complex, cross-functional questions that would typically take analysts days to answer—questions like, “Why did fund outflows spike last quarter?” or “What’s driving changes in customer retention across regions?” Rather than piecing together reports and data exports, DeepQuery will connect to live, governed data across different systems, apply expert-level reasoning, and deliver answers in minutes.

Slated to be packaged with the Denodo AI SDK, which streamlines AI application development with pre-built APIs, DeepQuery is being developed as a fully extensible component of the Denodo Platform, enabling developers and AI teams to build, experiment with, and integrate deep research capabilities into their own agents, copilots, or domain-specific applications.

“With DeepQuery, Denodo is demonstrating forward-thinking in advancing the capabilities of AI,” said Stewart Bond, Research VP, Data Intelligence and Integration Software at IDC. “DeepQuery, driven by deep research advances, will deliver more accurate AI responses that will also be fully explainable.”

Large language models (LLMs), business intelligence tools, and other applications are beginning to offer deep research capabilities based on public Web data; pre-indexed, data-lakehouse-specific data; or document-based retrieval, but only Denodo is developing deep research capabilities, in the form of DeepQuery, that are grounded in enterprise data across all systems, data that is delivered in real-time, structured, and governed. These capabilities are enabled by the Denodo Platform’s logical approach to data management, supported by a strong data virtualization foundation.

Denodo DeepQuery is currently available in a private preview mode. Denodo is inviting select organizations to join its AI Accelerator Program, which offers early access to DeepQuery capabilities, as well as the opportunity to collaborate with our product team to shape the future of enterprise GenAI.

“As a Denodo partner, we’re always looking for ways to provide our clients with a competitive edge,” said Nagaraj Sastry, Senior Vice President, Data and Analytics at Encora. “Denodo DeepQuery gives us exactly that. Its ability to leverage real-time, governed enterprise data for deep, contextualized insights sets it apart. This means we can help our customers move beyond general AI queries to truly intelligent analysis, empowering them to make faster, more informed decisions and accelerating their AI journey.”

Denodo also announced support of Model Context Protocol (MCP), and an MCP Server implementation is now included in the latest version of the Denodo AI SDK. As a result, all AI agents and apps based on the Denodo AI SDK can be integrated with any MCP-compliant client, providing customers with a trusted data foundation for their agentic AI ecosystems based on open standards.
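
For readers unfamiliar with MCP: it is an open standard built on JSON-RPC 2.0, in which clients call tools exposed by a server. The sketch below shows only the generic message shape; the tool name `query_data` and its arguments are invented for illustration and are not Denodo’s actual interface.

```python
# Rough sketch of the JSON-RPC 2.0 envelope used by Model Context
# Protocol (MCP) clients. The tool name "query_data" and its arguments
# are hypothetical, not part of Denodo's AI SDK.
import json

def mcp_request(request_id, method, params):
    """Build an MCP-style JSON-RPC 2.0 request."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": method,
        "params": params,
    }

# An agent asking an MCP server to invoke a (hypothetical) data tool:
call = mcp_request(1, "tools/call", {
    "name": "query_data",
    "arguments": {"question": "Why did fund outflows spike last quarter?"},
})
print(json.dumps(call, indent=2))
```

Because every MCP-compliant client speaks this same envelope, an agent built on one vendor’s SDK can, in principle, call tools served by another’s – which is the portability argument Denodo is making here.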

“AI’s true potential in the enterprise lies not just in generating responses, but in understanding the full context behind them,” said Angel Viña, CEO and Founder of Denodo. “With DeepQuery, we’re unlocking that potential by combining generative AI with real-time, governed access to the entire corporate data ecosystem, no matter where that data resides. Unlike siloed solutions tied to a single store, DeepQuery leverages enriched, unified semantics across distributed sources, allowing AI to reason, explain, and act on data with unprecedented depth and accuracy.”

Additional Information

  • Denodo Platform: What’s New
  • Blog Post: Smarter AI Starts Here: Why DeepQuery Is the Next Step in GenAI Maturity
  • Demo: Watch a short video of this capability in action.

About Denodo

Denodo is a leader in data management. The award-winning Denodo Platform is the leading logical data management platform for transforming data into trustworthy insights and outcomes for all data-related initiatives across the enterprise, including AI and self-service. Denodo’s customers in all industries all over the world have delivered trusted AI-ready and business-ready data in a third of the time and with 10x better performance than with lakehouses and other mainstream data platforms alone. For more information, visit denodo.com.

Media Contacts

pr@denodo.com






Sakana AI: Think LLM dream teams, not single models



Enterprises may want to start thinking of large language models (LLMs) as ensemble casts that can combine knowledge and reasoning to complete tasks, according to Japanese AI lab Sakana AI.

Sakana AI in a research paper outlined a method called Multi-LLM AB-MCTS (Adaptive Branching Monte Carlo Tree Search) that uses a collection of LLMs to cooperate, perform trial-and-error and leverage strengths to solve complex problems.

In a post, Sakana AI said:

“Frontier AI models like ChatGPT, Gemini, Grok, and DeepSeek are evolving at a breathtaking pace amidst fierce competition. However, no matter how advanced they become, each model retains its own individuality stemming from its unique training data and methods. We see these biases and varied aptitudes not as limitations, but as precious resources for creating collective intelligence. Just as a dream team of diverse human experts tackles complex problems, AIs should also collaborate by bringing their unique strengths to the table.”

Sakana AI said AB-MCTS is a method for inference-time scaling to enable frontier AIs to cooperate and revisit problems and solutions. Sakana AI released the algorithm as an open source framework called TreeQuest, which has a flexible API that allows users to use AB-MCTS for tasks with multiple LLMs and custom scoring.

What’s interesting is that Sakana AI gets out of that zero-sum LLM argument. The companies behind LLM training would like you to think there’s one model to rule them all. And you’d do the same if you were spending so much on training models and wanted to lock in customers for scale and returns.

Sakana AI’s deceptively simple solution can only come from a company that’s not trying to play LLM leapfrog every few minutes. The power of AI is in the ability to maximize the potential of each LLM. Sakana AI said:

“We saw examples where problems that were unsolvable by any single LLM were solved by combining multiple LLMs. This went beyond simply assigning the best LLM to each problem. In (an) example, even though the solution initially generated by o4-mini was incorrect, DeepSeek-R1-0528 and Gemini-2.5-Pro were able to use it as a hint to arrive at the correct solution in the next step. This demonstrates that Multi-LLM AB-MCTS can flexibly combine frontier models to solve previously unsolvable problems, pushing the limits of what is achievable by using LLMs as a collective intelligence.”
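
The dynamic described in that quote can be sketched in a few lines. The code below is a deliberately simplified stand-in, not Sakana AI’s TreeQuest API: at each step it either goes “wider” (asks a model for a fresh answer) or “deeper” (asks a model to refine the best answer so far), favoring models whose past answers scored well. The toy models and scoring function are invented for illustration.

```python
# Minimal sketch of the idea behind Multi-LLM AB-MCTS (not TreeQuest's
# actual interface): adaptively pick which model to query, and let models
# build on each other's best answer as a hint.
import random

def solve(models, score, steps=30, seed=0):
    """models: dict name -> fn(hint) returning a candidate answer.
    score: fn(answer) -> float in [0, 1]; higher is better."""
    rng = random.Random(seed)
    wins = {name: 1.0 for name in models}          # optimistic prior
    best_answer, best_score = None, -1.0
    for _ in range(steps):
        # Sample a model proportionally to its past success: a crude
        # stand-in for AB-MCTS's adaptive branching.
        name = rng.choices(list(wins), weights=list(wins.values()))[0]
        hint = best_answer if rng.random() < 0.5 else None   # deepen or widen
        answer = models[name](hint)
        s = score(answer)
        wins[name] += s
        if s > best_score:
            best_answer, best_score = answer, s
    return best_answer, best_score

# Toy "models" guessing a target number by incrementing a shared hint.
target = 42
models = {
    "model_a": lambda hint: (hint or 0) + 7,
    "model_b": lambda hint: (hint or 0) + 5,
}
score = lambda x: max(0.0, 1.0 - abs(target - x) / target)
answer, s = solve(models, score)
```

Neither toy model can reach the target alone in one call, but by passing the running best answer between them as a hint, the ensemble climbs toward it – the same cooperative pattern Sakana AI reports between o4-mini, DeepSeek-R1-0528, and Gemini-2.5-Pro.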

A few thoughts:

  • Sakana AI’s research and move to emphasize collective intelligence over one LLM and stack is critical to enterprises that need to create architectures that don’t lock them into one provider.
  • AB-MCTS could play into what agentic AI needs to become to be effective and complement emerging standards such as Model Context Protocol (MCP) and Agent2Agent.
  • If combining multiple models to solve problems becomes frictionless, the costs will plunge. Will you need to pay up for OpenAI when you can leverage LLMs like DeepSeek combined with Gemini and a few others? 
  • Enterprises may want to start thinking about how to build decision engines instead of an overall AI stack. 
  • We could see a scenario where a collective of LLMs achieves superintelligence before any one model or provider. If that scenario plays out, can LLM giants maintain valuations?
  • The value in AI may not be in the infrastructure or foundational models in the long run, but the architecture and approaches.





