Connect with us

AI Research

AI-generated responses are undermining crowdsourced research studies

Published

on


Some people who take part in online research projects are using AI to save time

Daniele D’Andreti/Unsplash

Online questionnaires are being swamped by AI-generated responses – potentially polluting a vital data source for scientists.

Platforms like Prolific pay participants small sums for answering questions posed by researchers. They are popular among academics as an easy way to gather participants for behavioural studies.

Anne-Marie Nussberger and her colleagues at the Max Planck Institute for Human Development in Berlin, Germany, decided to investigate how often respondents use artificial intelligence after noticing examples in their own work. “The incidence rates that we were observing were really shocking,” she says.

They found that 45 per cent of participants who were asked a single open-ended question on Prolific copied and pasted content into the box – an indication, they believe, that people were putting the question to an AI chatbot to save time.

Further investigation of the contents of the responses suggested more obvious tells of AI use, such as “overly verbose” or “distinctly non-human” language. “From the data that we collected at the beginning of this year, it seems that a substantial proportion of studies is contaminated,” she says.

In a subsequent study using Prolific, the researchers added traps designed to snare those using chatbots. Two reCAPTCHAs – small, pattern-based tests designed to distinguish humans from bots – caught out 0.2 per cent of participants. A more advanced reCAPTCHA, which used information about users’ past activity as well as current behaviour, weeded out another 2.7 per cent of participants. A question in text that was invisible to humans but readable to bots asking them to include the word “hazelnut” in their response, captured another 1.6 per cent, while preventing any copying and pasting identified another 4.7 per cent of people.

“What we need to do is not distrust online research completely, but to respond and react,” says Nussberger. That is the responsibility of researchers, who should treat answers with more suspicion and take countermeasures to stop AI-enabled behaviour, she says. “But really importantly, I also think that a lot of responsibility is on the platforms. They need to respond and take this problem very seriously.”

Prolific didn’t respond to New Scientist’s request for comment.

“The integrity of online behavioural research was already being challenged by participants of survey sites misrepresenting themselves or using bots to gain cash or vouchers, let alone the validity of remote self-reported responses to understand complex human psychology and behaviour,” says Matt Hodgkinson, a freelance consultant in research ethics. “Researchers either need to collectively work out ways to remotely verify human involvement or return to the old-fashioned approach of face-to-face contact.”

Topics:



Source link

AI Research

Nvidia says ‘We never deprive American customers in order to serve the rest of the world’ — company says GAIN AI Act addresses a problem that doesn’t exist

Published

on


The bill, which aimed to regulate shipments of AI GPUs to adversaries and prioritize U.S. buyers, as proposed by U.S. senators earlier this week, made quite a splash in America. To a degree, Nvidia issued a statement claiming that the U.S. was, is, and will remain its primary market, implying that no regulations are needed for the company to serve America.

“The U.S. has always been and will continue to be our largest market,” a statement sent to Tom’s Hardware reads. “We never deprive American customers in order to serve the rest of the world. In trying to solve a problem that does not exist, the proposed bill would restrict competition worldwide in any industry that uses mainstream computing chips. While it may have good intentions, this bill is just another variation of the AI Diffusion Rule and would have similar effects on American leadership and the U.S. economy.”



Source link

Continue Reading

AI Research

OpenAI Projects $115 Billion Cash Burn by 2029

Published

on


OpenAI has sharply raised its projected cash burn through 2029 to $115 billion, according to The Information. This marks an $80 billion increase from previous estimates, as the company ramps up spending to fuel the AI behind its ChatGPT chatbot.

The company, which has become one of the world’s biggest renters of cloud servers, projects it will burn more than $8 billion this year, about $1.5 billion higher than its earlier forecast. The surge in spending comes as OpenAI seeks to maintain its lead in the rapidly growing artificial intelligence market.


To control these soaring costs, OpenAI plans to develop its own data center server chips and facilities to power its technology.


The company is partnering with U.S. semiconductor giant Broadcom to produce its first AI chip, which will be used internally rather than made available to customers, as reported by The Information.


In addition to this initiative, OpenAI has expanded its partnership with Oracle, committing to a 4.5-gigawatt data center capacity to support its growing operations.


This is part of OpenAI’s larger plan, the Stargate initiative, which includes a $500 billion investment and is also supported by Japan’s SoftBank Group. Google Cloud has also joined the group of suppliers supporting OpenAI’s infrastructure.


OpenAI’s projected cash burn will more than double in 2024, reaching over $17 billion. It will continue to rise, with estimates of $35 billion in 2027 and $45 billion in 2028, according to The Information.

Tags





Source link

Continue Reading

AI Research

PromptLocker scared ESET, but it was an experiment

Published

on


The PromptLocker malware, which was considered the world’s first ransomware created using artificial intelligence, turned out to be not a real attack at all, but a research project at New York University.

On August 26, ESET announced that detected the first sample of artificial intelligence integrated into ransomware. The program was called PromptLocker. However, as it turned out, it was not the case: researchers from the Tandon School of Engineering at New York University were responsible for creating this code.

The university explained that PromptLocker — is actually part of an experiment called Ransomware 3.0, which was conducted by a team from the Tandon School of Engineering. A representative of the school told the publication that a sample of the experimental code was uploaded to the VirusTotal platform for malware analysis. It was there that ESET specialists discovered it, mistaking it for a real threat.

According to ESET, the program used Lua scripts generated on the basis of strictly defined instructions. These scripts allowed the malware to scan the file system, analyze the contents, steal selected data, and perform encryption. At the same time, the sample did not implement destructive capabilities — a logical step, given that it was a controlled experiment.

Nevertheless, the malicious code did function. New York University confirmed that their AI-based simulation system was able to go through all four classic stages of a ransomware attack: mapping the system, identifying valuable files, stealing or encrypting data, and creating a ransomware message. Moreover, it was able to do this on various types of systems — from personal computers and corporate servers to industrial controllers.

Should you be concerned? Yes, but with an important caveat: there is a big difference between an academic proof-of-concept demonstration and a real attack carried out by malicious actors. However, such research can be a good opportunity for cybercriminals, as it shows not only the principle of operation but also the real costs of its implementation.



Source link

Continue Reading

Trending