AI Research
New Research Finds That ChatGPT Secretly Has a Deep Anti-Human Bias

Do you like AI models? Well, chances are, they sure don’t like you back.
New research suggests that the industry's leading large language models, including those that power ChatGPT, display an alarming bias toward other AIs when asked to choose between human- and machine-generated content.
The authors of the study, which was published in the journal Proceedings of the National Academy of Sciences, are calling this blatant favoritism “AI-AI bias” — and warn of an AI-dominated future where, if the models are in a position to make or recommend consequential decisions, they could inflict discrimination against humans as a social class.
Arguably, we’re starting to see the seeds of this being planted, as bosses today are using AI tools to automatically screen job applications (and doing so poorly, experts argue). This paper suggests that the tidal wave of AI-generated résumés is beating out human-written ones.
“Being human in an economy populated by AI agents would suck,” writes study coauthor Jan Kulveit, a computer scientist at Charles University in Prague, in a thread on X-formerly-Twitter explaining the work.
In their study, the authors probed several widely used LLMs, including OpenAI’s GPT-4, GPT-3.5, and Meta’s Llama 3.1-70b. To test them, the team asked the models to choose a product, scientific paper, or movie based on a description of the item. For each item, the model was presented with both a human-written and an AI-written description.
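The setup is essentially a forced-choice experiment. Below is a minimal sketch of what one trial might look like, assuming the OpenAI Python SDK; the model name, prompt wording, and "A"/"B" answer format are illustrative guesses, not materials taken from the study.

```python
# A minimal sketch of one pairwise-choice trial, assuming the OpenAI
# Python SDK. Prompt wording and answer format are illustrative, not
# the study's actual materials.
from openai import OpenAI

client = OpenAI()

def pick_preferred(item_kind: str, desc_a: str, desc_b: str) -> str:
    """Ask the model to choose between two descriptions of one item."""
    prompt = (
        f"Here are two descriptions of a {item_kind}. Based only on the "
        "descriptions, which one would you choose?\n\n"
        f"Option A: {desc_a}\n\nOption B: {desc_b}\n\n"
        "Answer with exactly 'A' or 'B'."
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content.strip()
```

Pairing each human-written description with an AI-written one, running many such trials with the option order swapped to control for position effects, and counting how often the model picks the AI-written option would yield the kind of preference rate the authors compare across models.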
The results were clear-cut: the AIs consistently preferred AI-generated descriptions. But there are some interesting wrinkles. The AI-AI bias was most pronounced when choosing goods and products, and strongest with text generated by GPT-4. In fact, among GPT-3.5, GPT-4, and Meta’s Llama 3.1, GPT-4 exhibited the strongest bias toward its own output — no small matter, since GPT-4 undergirded the most popular chatbot on the market before the advent of GPT-5.
Could the AI text just be better?
“Not according to people,” Kulveit wrote in the thread. The team ran the same tests with 13 human research assistants and found something striking: the humans, too, tended to slightly prefer AI-written text, particularly for movies and scientific papers. But that preference was slight, and nowhere near as strong as the preference the AI models showed.
“The strong bias is unique to the AIs themselves,” Kulveit said.
The findings are particularly dramatic at our current inflection point, where the internet has been so polluted by AI slop that the AIs inevitably end up ingesting their own excreta. Some research suggests this is actually causing AI models to regress, and perhaps their bizarre affinity for their own output is part of the reason why.
Of greater concern is what this means for humans. Currently, there’s no reason to believe that this bias will simply go away as the tech embeds itself deeper into our lives.
“We expect a similar effect can occur in many other situations, like evaluation of job applicants, schoolwork, grants, and more,” Kulveit wrote. “If an LLM-based agent selects between your presentation and LLM written presentation, it may systematically favor the AI one.”
If AIs continue to be widely adopted and integrated into the economy, the researchers predict, companies and institutions will use them “as decision-assistants when dealing with large volumes of ‘pitches’ in any context,” as they wrote in the study.
This would lead to widespread discrimination against humans who either choose not to use LLM tools or can’t afford to pay for them. AI-AI bias, then, would create a “gate tax,” they write, “that may exacerbate the so-called ‘digital divide’ between humans with the financial, social, and cultural capital for frontier LLM access and those without.”
Kulveit acknowledges that “testing discrimination and bias in general is a complex and contested matter.” But, “if we assume the identity of the presenter should not influence the decisions,” he says, the “results are evidence for potential LLM discrimination against humans as a class.”
His practical advice to humans trying to get noticed is a sobering indictment of the state of affairs.
“In case you suspect some AI evaluation is going on: get your presentation adjusted by LLMs until they like it, while trying to not sacrifice human quality,” Kulveit wrote.
AI Research
BITSoM launches AI research and innovation lab to shape future leaders

Mumbai: The BITS School of Management (BITSoM), under the aegis of BITS Pilani, a leading private university, will inaugurate its new BITSoM Research in AI and Innovation (BRAIN) Lab on its Kalyan campus on Friday. The lab is designed to prepare future leaders for workplaces transformed by artificial intelligence.
While explaining the concept of the laboratory, professor Saravanan Kesavan, dean of BITSoM, said that the BRAIN Lab had three core pillars: teaching, research, and outreach. Kesavan said, “It provides MBA (Master of Business Administration) students a dedicated space equipped with high-performance AI computers capable of handling tasks such as computer vision and large-scale data analysis. Students will not only learn about AI concepts in theory but also experiment with real-world applications.” Kesavan added that each graduating student would be expected to develop an AI product as part of their coursework, giving them first-hand experience in innovation and problem-solving.
The BRAIN Lab is also designed to be a hub of collaboration where researchers can conduct projects in partnership with various companies and industries, creating a repository of practical AI tools. Kesavan said, “The initial focus areas (of the lab) include manufacturing, healthcare, banking and financial services, and Global Capability Centres (subsidiaries of multinational corporations that perform specialised functions).” He added that case studies and research from the lab would be made freely available to schools, colleges, researchers, and corporate partners, ensuring that the benefits of the lab reach beyond the BITSoM campus.
BITSoM also plans to use the BRAIN Lab as a launchpad for startups. An AI programme will support entrepreneurs in developing solutions tailored to their needs while connecting them to venture capital networks in India and Silicon Valley. This will give young companies the chance to refine their ideas with guidance from both academics and industry leaders.
The centre’s physical setup resembles a modern computer lab, with dedicated workspaces, collaborative meeting rooms, and brainstorming zones. It has been designed to encourage creativity, allowing students to visualise how AI works, customise tools for different industries, and translate their technical capabilities into business impact.
In the context of a global workplace that is embracing AI, Kesavan said, “Future leaders need to understand not just how to manage people but also how to manage a workforce that combines humans and AI agents. Our goal is to ensure every student graduating from BITSoM is equipped with the skills to build AI products and apply them effectively in business.”
Kesavan said that advisors from reputed institutions such as Harvard, Johns Hopkins, and the University of Chicago, along with industry professionals from global companies, will provide guidance to students at the lab. Alongside student training, BITSoM also plans to run reskilling programmes for working professionals, extending its impact beyond the campus.
AI Research
AI grading issue affects hundreds of MCAS essays in Mass.

The use of artificial intelligence to score statewide standardized tests resulted in errors that affected hundreds of exams, the NBC10 Investigators have learned.
The issue with the Massachusetts Comprehensive Assessment System (MCAS) surfaced over the summer, when preliminary results for the exams were distributed to districts.
The state’s testing contractor, Cognia, found roughly 1,400 essays did not receive the correct scores, according to a spokesperson for the Department of Elementary and Secondary Education (DESE).
DESE told NBC10 Boston all the essays were rescored, affected districts received notification, and all their data was corrected in August.
So how did humans detect the problem?
We found one example in Lowell. It turns out an alert teacher at Reilly Elementary School was reading through her third-grade students’ essays over the summer. When the instructor looked up the scores some of the students received, something did not add up.
The teacher notified the school principal, who then flagged the issue with district leaders.
“We were on alert that there could be a learning curve with AI,” said Wendy Crocker-Roberge, an assistant superintendent in the Lowell school district.
AI essay scoring works by using human-scored exemplars of what essays at each score point look like, according to DESE.
The AI tool uses those exemplars to score the essays. In addition, humans give 10% of the AI-scored essays a second read and compare their scores with the AI’s to make sure there aren’t discrepancies. AI scoring was used for the same number of essays in 2025 as in 2024, DESE said.
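That second-read process amounts to a spot-check audit: sample a fraction of AI-scored essays, have humans rescore them, and flag disagreements. Here is a minimal sketch of such a check; the field names, sampling rate default, and discrepancy rule are assumptions for illustration, not details of Cognia's actual pipeline.

```python
import random

# Sketch of a 10% human double-read audit over AI-scored essays.
# Data shapes and the discrepancy threshold are illustrative assumptions.

def sample_for_review(essays: list[dict], rate: float = 0.10) -> list[dict]:
    """Pick a random subset of AI-scored essays for a human second read."""
    k = max(1, int(len(essays) * rate))
    return random.sample(essays, k)

def flag_discrepancies(reviewed: list[dict], max_gap: int = 0) -> list[dict]:
    """Return essays where the human score and the AI score disagree."""
    return [e for e in reviewed
            if abs(e["human_score"] - e["ai_score"]) > max_gap]

# Example: two reviewed essays, one with a mismatch worth escalating.
reviewed = [
    {"essay_id": 1, "ai_score": 6, "human_score": 6},
    {"essay_id": 2, "ai_score": 0, "human_score": 6},  # a Lowell-style case
]
print(flag_discrepancies(reviewed))  # -> flags the essay the AI scored 0
```

A check like this only catches errors in the sampled slice, which is consistent with how the Lowell problem surfaced: a teacher reading outside the sample noticed scores that did not add up.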
Crocker-Roberge said she decided to read about 1,000 essays in Lowell, but it was tough to pinpoint the exact reason some students did not receive proper credit.
However, it was clear the AI technology was deducting points without justification. For instance, Crocker-Roberge said she noticed that some essays lost a point for not using quotation marks when referencing a passage from the reading excerpt.
“We could not understand why an individual score was scored a zero when it should have gotten six out of seven points,” Crocker-Roberge said. “There just wasn’t any rhyme or reason to that.”
District leaders notified DESE about the problem, which resulted in approximately 1,400 essays being rescored. The state agency says the scoring problem was the result of a “temporary technical issue in the process.”
According to DESE, 145 districts that had at least one incorrectly scored student essay were notified.
“As one way of checking that MCAS scores are accurate, DESE releases preliminary MCAS results to districts and gives them time to report any issues during a discrepancy period each year,” a DESE spokesperson wrote in a statement.
Mary Tamer, the executive director of MassPotential, an organization that advocates for educational improvement, said there are a lot of positives to using AI to return scores to school districts faster so appropriate action can be taken. For instance, test results can help identify a child in need of intervention or flag a lesson plan that did not seem to resonate with students.
“I think there’s a lot of benefits that outweigh the risks,” said Tamer. “But again, no system is perfect and that’s true for AI. The work always has to be double-checked.”
DESE pointed out the affected exams represent a small percentage of the roughly 750,000 MCAS essays statewide.
However, in districts like Lowell, there are certain schools tracked by DESE to ensure progress is being made and performance standards are met.
That’s why Crocker-Roberge said every score counts.
With MCAS results expected to be released to parents in the coming weeks, the assistant superintendent is encouraging other districts to do a deep dive on their student essays to make sure they don’t notice any scoring discrepancies.
“I think we have to always proceed with caution when we’re introducing new tools and techniques,” Crocker-Roberge said. “Artificial intelligence is just a really new learning curve for everyone, so proceed with caution.”
AI Research
National Research Platform to Democratize AI Computing for Higher Ed

As higher education adapts to artificial intelligence’s impact, colleges and universities face the challenge of affording the computing power necessary to implement AI changes. The National Research Platform (NRP), a federally funded pilot program, is trying to solve that by pooling infrastructure across institutions.
Running large language models or training machine learning systems requires powerful graphics processing units (GPUs) and maintenance by skilled staff, Frank Würthwein, NRP’s executive director and director of the San Diego Supercomputer Center, said. The demand has left institutions either reliant on temporary donations and collaborations with tech companies, or unable to participate at all.
“The moment Google no longer gives it for free, they’re basically stuck,” Würthwein said.
Cloud services like Amazon Web Services and Azure offer these tools, he said, but at a price not every school can afford.
Traditionally, universities have tried to own their own research computing resources, like the supercomputer center at the University of California, San Diego (UCSD). But individual universities rarely have the scale to make obtaining and maintaining those resources cost-effective.
“Almost nobody has the scale to amortize the staff appropriately,” he said.
Even UCSD has struggled to keep its campus cluster affordable. For Würthwein, scaling up is the answer.
“If I serve a million students, I can provide [AI] services for no more than $10 a year per student,” he said. “To me, that’s free, because if you think about in San Diego, $10 is about a beer.”
A NATIONAL APPROACH
NRP adds another option for acquiring AI computing resources through cross-institutional pooling. Built on the earlier Pacific Research Platform, the NRP organizes a distributed computing system called the Nautilus Hypercluster, in which participating institutions contribute access to servers and GPUs they already own.
Würthwein said that while not every college has spare high-end hardware, many research institutions do, and even smaller campuses often have at least a few machines purchased through grants. These can be federated into NRP’s pool, with NRP providing system management, training and support. He said NRP employs a small, skilled staff that automates basic operations, monitors security and provides example curricula to partner institutions so that campuses don’t need local teams for those tasks.
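The article doesn't name NRP's software stack, but federated GPU pools of this kind are commonly coordinated with Kubernetes. As a rough illustration under that assumption, here is how a user might request a single GPU from a shared pool using the official kubernetes Python client; the image, namespace, and job names are hypothetical, not NRP's actual interface.

```python
# Illustrative sketch only: assumes a Kubernetes-managed GPU pool and the
# official `kubernetes` Python client; all names here are hypothetical.
from kubernetes import client, config

config.load_kube_config()  # reads the user's kubeconfig credentials

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="gpu-course-job"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="train",
                image="pytorch/pytorch:latest",
                command=["python", "train.py"],
                resources=client.V1ResourceRequirements(
                    # ask the shared pool's scheduler for one GPU
                    limits={"nvidia.com/gpu": "1"}
                ),
            )
        ],
    ),
)
client.CoreV1Api().create_namespaced_pod(namespace="my-course", body=pod)
```

The appeal of this kind of setup is that the scheduler, not the user, decides which contributed machine actually runs the job, which is what lets idle hardware on one campus serve students on another.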
The result is a distributed cloud supercomputer running on community contributions. According to a March 2025 slide presentation by Seungmin Kim, a researcher from the Yonsei University College of Medicine in Korea, the cluster now includes more than 1,400 GPUs, quadruple the initial National Science Foundation-funded purchase, thanks to contributions from participating campuses.
Since the project’s official launch in March 2023, NRP has onboarded more than 50 colleges and 84 geographic sites, according to Würthwein. NRP’s pilot goal is to reach 100 institutions, but he is already planning for 1,000 colleges after that, which would provide AI access to 1 million students.
To reach these goals, Würthwein said, NRP tries to reach both IT staff who manage infrastructure and faculty who manage curriculum. Regional research and education networks, such as California’s CENIC, connect NRP with campus CIOs, while the Academic Data Science Alliance connects with leaders on the teaching side.
WHAT STUDENTS AND FACULTY SEE
From the user side, the system looks like a one-stop cloud environment. Platforms like JupyterHub and GitLab are preconfigured and ready to use. The platform also hosts collaboration tools for storage, chats and video meetings that are similar to commercial offerings.
Würthwein said the infrastructure is designed so students can log in and run assignments and personalized learning tools that would normally require expensive computing resources.
“At some point … education will be considered subpar if it doesn’t provide that,” he said. “Institutions who have not transitioned to provide education like this, in this individualized fashion for every student, will fundamentally offer a worse product.”
For faculty, the same infrastructure supports research. Classroom usage tends to leave servers idle outside of peak times, freeing capacity for faculty projects. NRP’s model expects institutions to own enough resources to cover classroom needs, but anything unused can be pooled nationally. This could allow even teaching-focused colleges with modest resources to offer AI research experiences previously out of reach.
According to Kim’s presentation, researchers have used the platform to predict the efficiency of gene editing without lab experimentation and to map and detect wildfire patterns.
The system has already enabled collaboration beyond San Diego. At Sonoma State University, faculty are working with a local vineyard to pair the system with drones, robotics, and AI for vineyard management, Würthwein said. Bringing AI into the classroom, enhancing research, and enabling industry collaboration at more higher-education institutions is the overall goal.
“To me, that is the perfect trifecta of positive effects,” he said. “This is ultimately what we’re trying to achieve.”