
AI Research

Trust in data is critical to Artificial Intelligence adoption, says TELUS survey. But is that right?



The flood of new AI reports continues apace – not always with good news for users or the AI sector, as we have seen.

A new survey from customer experience specialist TELUS Digital comes with the headline that user trust in AI depends on how the training data is sourced.

That’s a bold and heartening claim. Especially when most leading AI tools – ChatGPT among them (800 million active weekly users) – have been trained on data scraped from the pre-2023 Web, often without permission, and sometimes from known pirate sources. Fifty-plus lawsuits are ongoing worldwide against AI vendors for breaches of copyright.

Meanwhile, a March report from Rettighedsalliancen, Denmark’s Rights Alliance, presents data suggesting that Apple, Anthropic, DeepSeek, Meta, Microsoft, NVIDIA, OpenAI, Runway AI, and music platform Suno scraped known pirated content, such as the free LibGen library. (Suno has admitted to scraping nearly every high-res audio file off the internet, while Meta’s policy of using pirated texts was cited by Judge Chhabria in his copyright judgement last week.)

So, on what basis does TELUS Digital make the claim that trust and data transparency are critical to AI customers, given that the world’s usage data would seem to say otherwise? OpenAI’s subscription revenues have doubled in the past 12 months. What price transparency there?

The evidence is apparently this: TELUS Digital’s survey of 1,000 US adults finds that 87% believe companies should be transparent about how they source data for Generative AI models. That is up from 75% in a similar survey last year, which – if nothing else – does suggest that news of vendors’ unethical behavior on copyright has an impact.

What’s more, nearly two-thirds of respondents (65%) say that the exclusion of high-quality, verified content – TELUS Digital cites the New York Times, Reuters, and Bloomberg – can lead to inaccurate and/or biased responses from Large Language Models (LLMs).

Interesting stuff, especially given the US Government’s “fake news” war on traditional media, backed by Big Tech and the likes of Elon Musk, all of whom have a vested interest in dismantling the edifice of 20th Century media. “You can trust us”, they say, while sucking up the proprietary content of that century at industrial scale.

Yet while the TELUS Digital survey does suggest that transparency is a growing issue for users in the US – despite the overwhelming force applied by AI vendors, the attempted banning of US state regulation (just overturned by the Senate), and the force-feeding of ChatGPT, Copilot, Gemini, Claude, and other tools on every cloud platform – the figures tell us that customers use the tools regardless. Perhaps while holding their noses.

So, the question is: why do they deploy ChatGPT et al despite their makers’ apparent contempt for creators’ copyright – practices that are being tested in US courts? The answer is found in other reports this year (see diginomica, passim): users primarily adopt AI to save money and time, not to make smarter decisions. And because hype and competitive peer pressure compel them to.

Even so, the growing awareness of vendors’ disregard for creators’ rights has an effect, it seems. This suggests that, if vendors really want their subscription revenues to overtake their vast capex on data centers and chips, then adopting an ethical stance is one way to do it. But that will cost them money: paying for the data they should have licensed in the first place.

Expert data: the way forward

So, what does TELUS Digital make of it all?

Amith Nair is Global VP and General Manager, Data and AI Solutions, at the Vancouver, Canada-headquartered provider. Nair says:

As AI systems become more specialized and embedded in high-stakes use cases, the quality of the datasets used to optimize outputs is emerging as a key differentiator for enterprises between average performance and having the potential to drive real-world impacts.

We’re well past the era where general crowdsourced or internet data can meet today’s enterprises’ more complex and specialized use cases. This is reflected in the shift in our clients’ requests from ‘wisdom of the crowd’ datasets to ‘wisdom of the experts’.

Experts and industry professionals help curate such datasets to ensure they are technically sound, contextually relevant and responsibly built.

Nair adds:

In high-stakes domains like healthcare or finance, even a single mislabelled data point can distort model behavior in ways that are difficult to detect and costly to correct.

Fair enough. And as my earlier report revealed, academic studies of LLM behavior find deep problems for the technology whenever real-world complexity challenges any simple prompted answers. In many cases, the deeper we dig into LLMs’ responses, the less accurate and the more prone to hallucination they become, having been trained on both fact and fiction, of course.

My take

So, verified, expert, high-quality data is clearly the way ahead, plus the availability of human experts to verify AIs’ workings. But as I suggested above, LLMs’ and Gen-AI’s problems are not as easily solved as that.

First, user behavior is strongly biased towards expediency, and towards cost and time savings. It is not targeted at making smarter decisions: in this sense, AI is little more than the new automation for many enterprise users.

Second, these tools do not hold data in a traditional database. Instead, training data is encoded as statistical weights and token probabilities across the model’s parameters. As a result, flawed or inaccurate data persists; it can’t simply be deleted from the model.

Therefore, one can only hope that hallucinations are challenged and corrected, despite ample evidence from professional markets, such as legal services, that even seasoned experts are prone to trust chatbots’ output without question.

So, why have lawyers presented hallucinated case law in courts across the US? Because they are time-poor and overwhelmed with paperwork, and AI CEOs have allegedly lied about their products’ proximity to superintelligence. Marketing BS, in other words: currently the most destructive force on Earth.

And third, as synthetic data booms and the internet is overrun with the AI slop generated by millions of shadow-IT users – AIs’ largest customer base – verified, human-authored data will become harder to find, not easier.

The irony of all this is obvious: the least transparent and most exploitative vendors – the ones that dominate the market – have grown fat on selling effort-free text, images, and video to users, rather than solving real-world problems.

What they should have done is sold trust to professionals first.




AI Research

Artificial intelligence is at the forefront of educational discussions




Artificial intelligence was at the forefront of educational discussions as school leaders, teachers, and business professionals gathered at the Education Leadership Summit in Tulsa to explore AI’s impact on classrooms and its implications for students’ futures.

Source: YouTube




AI Research

Kennesaw State secures NSF grants to build community of AI educators nationwide




KENNESAW, Ga. | Sep 12, 2025

Shaoen Wu

The International Data Corporation projects that artificial intelligence will add $19.9 trillion to the global economy by 2030, yet educators are still defining how students should learn to use the technology responsibly.

To better equip AI educators and to foster a sense of community among those in the field, Kennesaw State University Department Chair and Professor of Information Technology (IT) Shaoen Wu, along with assistant professors Seyedamin Pouriyeh and Chloe “Yixin” Xie, was recently awarded two National Science Foundation (NSF) grants. The awards, managed by the NSF’s Computer and Information Science and Engineering division, will fund the project through May 31, 2027, with an overarching goal of uniting educators from across the country to build shared resources, foster collaboration, and lay the foundation for common guidelines in AI education.

Wu, who works in Kennesaw State’s College of Computing and Software Engineering (CCSE), explained that while many universities, including KSU, have launched undergraduate and graduate programs in artificial intelligence, there is no established community to unify these efforts.

“AI has become the next big thing after the internet,” Wu said. “But we do not yet have a mature, coordinated community for AI education. This project is the first step toward building that national network.”

Drawing inspiration from the cybersecurity education community, which has long benefited from standardized curriculum guidelines, Wu envisions a similar structure for AI. The goal is to reduce barriers for under-resourced institutions, such as community colleges, by giving them free access to shared teaching materials and best practices.

The projects are part of the National AI Research Resource (NAIRR) pilot, a White House initiative to broaden AI access and innovation. Through the grants, Wu and his team will bring together educators from two-year colleges, four-year institutions, research-intensive universities, and Historically Black Colleges and Universities to identify gaps and outline recommendations for AI education.

“This is not just for computing majors,” Wu said. “AI touches health, finance, engineering, and so many other fields. What we build now will shape AI education not only in higher education but also in K-12 schools and for the general public.”

For Wu, the NSF grants represent more than just funding. They validate KSU’s growing presence in national conversations on emerging technologies. Recently, he was invited to moderate a panel at the Computing Research Association’s annual computing academic leadership summit, where department chairs and deans from across the country gathered to discuss AI education.

“These grants position KSU alongside institutions like the University of Illinois Urbana-Champaign and the University of Pennsylvania as co-leaders in shaping the future of AI education,” Wu said. “It is a golden opportunity to elevate our university to national and even global prominence.”

CCSE Interim Dean Yiming Ji said Wu’s leadership reflects CCSE’s commitment to both innovation and accessibility.

“This NSF grant is not just an achievement for Dr. Wu but for the entire College of Computing and Software Engineering,” Ji said. “It highlights our faculty’s work to shape national conversations in AI education while ensuring that students from all backgrounds, including those at under-resourced institutions, can benefit from shared knowledge and opportunities.”

– Story by Raynard Churchwell


A leader in innovative teaching and learning, Kennesaw State University offers undergraduate, graduate, and doctoral degrees to its more than 47,000 students. Kennesaw State is a member of the University System of Georgia with 11 academic colleges. The university’s vibrant campus culture, diverse population, strong global ties, and entrepreneurial spirit draw students from throughout the country and the world. Kennesaw State is a Carnegie-designated doctoral research institution (R2), placing it among an elite group of only 8 percent of U.S. colleges and universities with an R1 or R2 status. For more information, visit kennesaw.edu.




AI Research

UC Berkeley researchers use Reddit to study AI’s moral judgements | Research And Ideas



A study published by UC Berkeley researchers used the Reddit forum, r/AmITheAsshole, to determine whether artificial intelligence, or AI, chatbots had “patterns in their moral reasoning.”

The study, led by researchers Pratik Sachdeva and Tom van Nuenen at campus’s D-Lab, asked seven AI large language models, or LLMs, to judge more than 10,000 social dilemmas from r/AmITheAsshole.  

The LLMs used were Claude Haiku, Mistral 7B, Google’s PaLM 2 Bison and Gemma 7B, Meta’s LLaMa 2 7B, and OpenAI’s GPT-3.5 and GPT-4. The study found that different LLMs showed unique moral judgement patterns, often giving dramatically different verdicts from one another. Each model was also self-consistent: when presented with the same dilemma, it tended to judge it with the same set of morals and values.

Sachdeva and van Nuenen began the study in January 2023, shortly after ChatGPT came out. According to van Nuenen, as people increasingly turned to AI for personal advice, they were motivated to study the values shaping the responses they received.

r/AmITheAsshole is a Reddit forum where people can ask fellow users if they were the “asshole” in a social dilemma. The forum was chosen by the researchers due to its unique verdict system, as subreddit users assign their judgement of “Not The Asshole,” “You’re the Asshole,” “No Assholes Here,” “Everyone Sucks Here” or “Need More Info.” The judgement with the most upvotes, or likes, is accepted as the consensus, according to the study. 

“What (other) studies will do is prompt models with political or moral surveys, or constrained moral scenarios like a trolley problem,” Sachdeva said. “But we were more interested in personal dilemmas that users will also come to these language models for like, mental health chats or things like that, or problems in someone’s direct environment.”

According to the study, the LLM models were presented with the post and asked to issue a judgement and explanation. Researchers compared their responses to the Reddit consensus and then judged the AI’s explanations along a six-category moral framework of fairness, feelings, harms, honesty, relational obligation and social norms. 
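As a rough, hypothetical sketch of how such a judging loop can be wired up – this is not the researchers’ code, and the prompt wording, field names, and verdict parsing are illustrative assumptions – one could prompt an OpenAI-style chat model for a verdict on each post and compare it against the stored Reddit consensus:

```python
# Hypothetical sketch of the judging loop described above, not the study's code.
# Assumes the OpenAI Python client and posts stored as {"text": ..., "consensus": ...}.
from openai import OpenAI

VERDICTS = ["NTA", "YTA", "NAH", "ESH", "INFO"]  # Not/You're the Asshole, No Assholes Here,
                                                 # Everyone Sucks Here, Need More Info
client = OpenAI()

def judge_post(post_text: str, model: str = "gpt-4") -> str:
    """Ask a chat model for an r/AmITheAsshole-style verdict and explanation."""
    prompt = (
        "Read the following r/AmITheAsshole post. Reply with one verdict from "
        f"{', '.join(VERDICTS)}, then a one-paragraph explanation.\n\n{post_text}"
    )
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content
    # Take the first recognised verdict token in the reply; default to "need more info".
    return next((v for v in VERDICTS if v in reply), "INFO")

def agreement_rate(posts: list[dict], model: str = "gpt-4") -> float:
    """Fraction of posts where the model's verdict matches the Reddit consensus."""
    hits = sum(judge_post(p["text"], model) == p["consensus"] for p in posts)
    return hits / len(posts)
```

Scoring each model’s explanation along the six moral categories would be a separate step, omitted in this sketch.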

The researchers found that out of the LLMs, GPT-4’s judgments agreed with the Reddit consensus the most, even if agreement was generally pretty low. According to the study, GPT-3.5 assigned people “You’re the Asshole” at a comparatively higher rate than GPT-4. 

“Some models are more fairness forward. Others are a bit harsher. And the interesting thing we found is if you put them together, if you look at the distribution of all the evaluations of these different models, you start approximating human consensus as well,” van Nuenen said. 

The researchers found that even though the verdicts of the LLM models generally disagreed with each other, the consensus of the seven models typically aligned with the Redditors’ consensus.
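To make that pooling effect concrete, a toy majority vote across per-model verdicts – the model names and verdicts here are invented for the example – looks like this:

```python
# Toy majority-vote pooling of per-model verdicts; names and verdicts are invented.
from collections import Counter

def ensemble_verdict(model_verdicts: dict[str, str]) -> str:
    """Return the most common verdict across models."""
    return Counter(model_verdicts.values()).most_common(1)[0][0]

print(ensemble_verdict({
    "gpt-4": "NTA", "gpt-3.5": "YTA", "claude-haiku": "NTA",
    "mistral-7b": "NAH", "llama-2-7b": "NTA",
}))  # -> NTA: individual models disagree, but the pooled verdict is closer to the crowd
```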

One model, Mistral 7B, assigned almost no posts “You’re the Asshole” verdicts, as it used the word “asshole” to mean its literal definition, and not the socially accepted definition in the forum, which refers to whoever is at fault. 

When asked if he believed the chatbots had moral compasses, van Nuenen instead described them as having “moral flavors.” 

“There doesn’t seem to be some kind of unified, directional sense of right and wrong (among the chatbots). And there’s diversity like that,” van Nuenen said. 

Sachdeva and van Nuenen have begun two follow-up studies. One examines how the models’ stances adjust when deliberating their responses with other chatbots, while the other looks at how consistent the models’ judgments are as the dilemmas are modified. 


