AI Research
AI-generated responses are undermining crowdsourced research studies

Some people who take part in online research projects are using AI to save time
Daniele D’Andreti/Unsplash
Online questionnaires are being swamped by AI-generated responses – potentially polluting a vital data source for scientists.
Platforms like Prolific pay participants small sums for answering questions posed by researchers. They are popular among academics as an easy way to gather participants for behavioural studies.
Anne-Marie Nussberger and her colleagues at the Max Planck Institute for Human Development in Berlin, Germany, decided to investigate how often respondents use artificial intelligence after noticing examples in their own work. “The incidence rates that we were observing were really shocking,” she says.
They found that 45 per cent of participants who were asked a single open-ended question on Prolific copied and pasted content into the box – an indication, they believe, that people were putting the question to an AI chatbot to save time.
Further investigation of the contents of the responses suggested more obvious tells of AI use, such as “overly verbose” or “distinctly non-human” language. “From the data that we collected at the beginning of this year, it seems that a substantial proportion of studies is contaminated,” she says.
In a subsequent study using Prolific, the researchers added traps designed to snare those using chatbots. Two reCAPTCHAs – small, pattern-based tests designed to distinguish humans from bots – caught out 0.2 per cent of participants. A more advanced reCAPTCHA, which used information about users’ past activity as well as current behaviour, weeded out another 2.7 per cent of participants. A question in text that was invisible to humans but readable to bots asking them to include the word “hazelnut” in their response, captured another 1.6 per cent, while preventing any copying and pasting identified another 4.7 per cent of people.
“What we need to do is not distrust online research completely, but to respond and react,” says Nussberger. That is the responsibility of researchers, who should treat answers with more suspicion and take countermeasures to stop AI-enabled behaviour, she says. “But really importantly, I also think that a lot of responsibility is on the platforms. They need to respond and take this problem very seriously.”
Prolific didn’t respond to New Scientist’s request for comment.
“The integrity of online behavioural research was already being challenged by participants of survey sites misrepresenting themselves or using bots to gain cash or vouchers, let alone the validity of remote self-reported responses to understand complex human psychology and behaviour,” says Matt Hodgkinson, a freelance consultant in research ethics. “Researchers either need to collectively work out ways to remotely verify human involvement or return to the old-fashioned approach of face-to-face contact.”
Topics:
AI Research
Mistral AI Nears Close of Funding Round Lifting Valuation to $14B

Artificial intelligence (AI) startup Mistral AI is reportedly nearing the close of a funding round in which it would raise €2 billion (about $2.3 billion) and be valued at €12 billion (about $14 billion).
AI Research
PPS Weighs Artificial Intelligence Policy

Portland Public Schools folded some guidance on artificial intelligence into its district technology policy for students and staff over the summer, though some district officials say the work is far from complete.
The guidelines permit certain district-approved AI tools “to help with administrative tasks, lesson planning, and personalized learning” but require staff to review AI-generated content, check accuracy, and take personal responsibility for any content generated.
The new policy also warns against inputting personal student information into tools, and encourages users to think about inherent bias within such systems. But it’s still a far cry from a specific AI policy, which would have to go through the Portland School Board.
Part of the reason is because AI is such an “active landscape,” says Liz Large, a contracted legal adviser for the district. “The policymaking process as it should is deliberative and takes time,” Large says. “This was the first shot at it…there’s a lot of work [to do].”
PPS, like many school districts nationwide, is continuing to explore how to fold artificial intelligence into learning, but not without controversy. AsThe Oregonian reported in August, the district is entering a partnership with Lumi Story AI, a chatbot that helps older students craft their own stories with a focus on comics and graphic novels (the pilot is offered at some middle and high schools).
There’s also concern from the Portland Association of Teachers. “PAT believes students learn best from humans, instead of AI,” PAT president Angela Bonilla said in an Aug. 26 video. “PAT believes that students deserve to learn the truth from humans and adults they trust and care about.”
Willamette Week’s reporting has concrete impacts that change laws, force action from civic leaders, and drive compromised politicians from public office.
Help us dig deeper.
AI Research
Artificial intelligence investing is on the rise since 2013

FARGO, N.D. (KVRR) — “Artificial intelligence is one of the big new waves in the economy. Right now they say that artificial intelligence is worth about $750 billion in our economy right now. But they expect it to quadruple within about eight or nine years,” said Paul Meyers, President and Financial Advisor at Legacy Wealth Management in Fargo.
According to a Stanford study, since 2013, the United States has been the leading global AI private investor. In 2024, the U.S. invested $109.1 billion in AI. While on a global scale, the corporate AI investment reached $252.3 billion.
“Artificial intelligence is already in our daily lives. And I think it’s just going to become a bigger and bigger part of it. I think we still have control over it. That’s a good thing. But artificial intelligence is helpful to all of us, regardless of what industry you’re in, and we need to be ready for it,” said Meyers.
Recently, Applied Digital has seen a dip in its stock by nearly 4%. The company’s 50-day average price is $12.49, and its 200-day moving average price is $9.07. Their latest report in July reported their earnings per share being $0.12 for the quarter.
“This company has grown quite a bit as a stock this year. For investors in this company, they’re up ninety-four percent this year. And I would say that you know there’s some positives and some negatives, some causes for concern, and some causes for optimism, it’s not a slam dunk,” said Meyers.
At the city council meeting on Tuesday night, Don Flaherty, Mayor of Ellendale, shared that they had not received any financial benefits from Applied Digital and won’t see any until 2026. While Harwood has yet to finalize their decision on the proposal.
-
Business5 days ago
The Guardian view on Trump and the Fed: independence is no substitute for accountability | Editorial
-
Tools & Platforms3 weeks ago
Building Trust in Military AI Starts with Opening the Black Box – War on the Rocks
-
Ethics & Policy1 month ago
SDAIA Supports Saudi Arabia’s Leadership in Shaping Global AI Ethics, Policy, and Research – وكالة الأنباء السعودية
-
Events & Conferences4 months ago
Journey to 1000 models: Scaling Instagram’s recommendation system
-
Jobs & Careers2 months ago
Mumbai-based Perplexity Alternative Has 60k+ Users Without Funding
-
Education2 months ago
VEX Robotics launches AI-powered classroom robotics system
-
Funding & Business2 months ago
Kayak and Expedia race to build AI travel agents that turn social posts into itineraries
-
Podcasts & Talks2 months ago
Happy 4th of July! 🎆 Made with Veo 3 in Gemini
-
Podcasts & Talks2 months ago
OpenAI 🤝 @teamganassi
-
Education2 months ago
AERDF highlights the latest PreK-12 discoveries and inventions