AI Research
AI will soon be able to audit all published research—what will that mean for public trust in science?

Self-correction is fundamental to science. One of its most important forms is peer review, in which anonymous experts scrutinize research before it is published. This helps safeguard the accuracy of the written record.
Yet problems slip through. A range of grassroots and institutional initiatives work to identify problematic papers, strengthen the peer-review process, and clean up the scientific record through retractions or journal closures. But these efforts are imperfect and resource intensive.
Soon, artificial intelligence (AI) will be able to supercharge these efforts. What might that mean for public trust in science?
Peer review isn’t catching everything
In recent decades, the digital age and disciplinary diversification have sparked an explosion in the number of scientific papers being published, the number of journals in existence, and the influence of for-profit publishing.
This has opened the doors for exploitation. Opportunistic “paper mills” sell quick publication with minimal review to academics desperate for credentials, while publishers generate substantial profits through huge article-processing fees.
Corporations have also seized the opportunity to fund low-quality research and ghostwrite papers intended to distort the weight of evidence, influence public policy and alter public opinion in favor of their products.
These ongoing challenges highlight the insufficiency of peer review as the primary guardian of scientific reliability. In response, efforts have sprung up to bolster the integrity of the scientific enterprise.
Retraction Watch actively tracks withdrawn papers and other academic misconduct. Academic sleuths and initiatives such as Data Colada identify manipulated data and figures.
Investigative journalists expose corporate influence. A new field of meta-science (science of science) attempts to measure the processes of science and to uncover biases and flaws.
Not all bad science has a major impact, but some certainly does. It doesn’t just stay within academia; it often seeps into public understanding and policy.
In a recent investigation, we examined a widely cited safety review of the herbicide glyphosate, which appeared to be independent and comprehensive. In reality, documents produced during legal proceedings against Monsanto revealed that the paper had been ghostwritten by Monsanto employees and published in a journal with ties to the tobacco industry.
Even after this was exposed, the paper continued to shape citations, policy documents and Wikipedia pages worldwide.
When problems like this are uncovered, they can make their way into public conversations, where they are not necessarily perceived as triumphant acts of self-correction. Rather, they may be taken as proof that something is rotten in the state of science. This “science is broken” narrative undermines public trust.
AI is already helping police the literature
Until recently, technological assistance in self-correction was mostly limited to plagiarism detectors. But things are changing. Machine-learning services such as ImageTwin and Proofig now scan millions of figures for signs of duplication, manipulation and AI generation.
Natural language processing tools flag “tortured phrases”—the telltale word salads of paper mills. Bibliometric dashboards such as Semantic Scholar’s trace whether papers are cited in support or in contradiction.
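To illustrate the idea (a toy sketch, not how ImageTwin, Proofig or any production screening tool actually works), a basic tortured-phrase check can be as simple as matching a curated list of known paraphrases against an article's text. The phrases below are a handful of examples reported in studies of the phenomenon; real screens use far larger curated lists and additional linguistic signals.

```python
# Minimal sketch of a "tortured phrase" screen over plain text.
# The phrase list is a small sample of paraphrases reported in the literature;
# production tools rely on much larger lists and further NLP checks.
import re

TORTURED_PHRASES = {
    "counterfeit consciousness": "artificial intelligence",
    "profound learning": "deep learning",
    "colossal information": "big data",
    "bosom peril": "breast cancer",
}

def flag_tortured_phrases(text: str) -> list[tuple[str, str]]:
    """Return (tortured phrase, likely intended term) pairs found in the text."""
    lowered = text.lower()
    hits = []
    for phrase, expected in TORTURED_PHRASES.items():
        if re.search(r"\b" + re.escape(phrase) + r"\b", lowered):
            hits.append((phrase, expected))
    return hits

if __name__ == "__main__":
    sample = "We apply profound learning and counterfeit consciousness to colossal information."
    for phrase, expected in flag_tortured_phrases(sample):
        print(f"Suspicious: '{phrase}' (likely a paraphrase of '{expected}')")
```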
AI—especially agentic, reasoning-capable models increasingly proficient in mathematics and logic—will soon uncover more subtle flaws.
For example, the Black Spatula Project explores the ability of the latest AI models to check published mathematical proofs at scale, automatically identifying algebraic inconsistencies that eluded human reviewers. Our own investigation, mentioned above, also relies substantially on large language models to process large volumes of text.
Given full-text access and sufficient computing power, these systems could soon enable a global audit of the scholarly record. A comprehensive audit will likely find some outright fraud and a much larger mass of routine, journeyman work with garden-variety errors.
We do not yet know how prevalent fraud is, but we do know that a great deal of scientific work is inconsequential. Scientists know this; it is widely acknowledged that much published work is never, or only rarely, cited.
To outsiders, this revelation may be as jarring as uncovering fraud, because it collides with the image of dramatic, heroic scientific discovery that populates university press releases and trade press treatments.
What might give this audit added weight is its AI author, which may be seen as (and may in fact be) impartial and competent, and therefore reliable.
As a result, these findings will be vulnerable to exploitation in disinformation campaigns, particularly since AI is already being used to that end.
Reframing the scientific ideal
Safeguarding public trust requires redefining the scientist’s role in more transparent, realistic terms. Much of today’s research is incremental, career-sustaining work rooted in education, mentorship and public engagement.
If we are to be honest with ourselves and with the public, we must abandon the incentives that pressure universities and scientific publishers, as well as scientists themselves, to exaggerate the significance of their work. Truly ground-breaking work is rare. But that does not render the rest of scientific work useless.
A more humble and honest portrayal of the scientist as a contributor to a collective, evolving understanding will be more robust to AI-driven scrutiny than the myth of science as a parade of individual breakthroughs.
A sweeping, cross-disciplinary audit is on the horizon. It could come from a government watchdog, a think tank, an anti-science group or a corporation seeking to undermine public trust in science.
Scientists can already anticipate what it will reveal. If the scientific community prepares for the findings—or better still, takes the lead—the audit could inspire a disciplined renewal. But if we delay, the cracks it uncovers may be misinterpreted as fractures in the scientific enterprise itself.
Science has never derived its strength from infallibility. Its credibility lies in the willingness to correct and repair. We must now demonstrate that willingness publicly, before trust is broken.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
AI Research
Agencies and industry announce efforts to further Presidential AI Challenge

First Lady Melania Trump and multiple cabinet leaders on Thursday unveiled the next steps in the White House’s Presidential AI Challenge — a program mandated in an April executive order and launched Aug. 26 — and how the Trump administration is planning to keep the U.S. at the forefront of AI innovation and education.
The remarks were made at the second White House Task Force on Artificial Intelligence Education meeting and were accompanied by pledges from government agencies and the private sector to advance AI education, as mandated by the order.
“We are here today to talk about our future in the most real sense imaginable: how America’s children can be prepared to build our country tomorrow with the cutting-edge tools of today,” White House Office of Science and Technology Policy Director Michael Kratsios said during the meeting. “We are proud and grateful to announce new steps in fulfilling the mission of this task force and the president’s vision for this AI challenge.”
Those upcoming steps include the release of toolkits, webinars, classroom guides and more, as well as agency action items intended to help cultivate a strong American foundation in AI education within academia and the workforce. These include sector-specific, applied AI training materials and ways to incorporate AI in American classrooms.
“Our goal is to empower states and schools to begin exploring AI integration in a way that works best for their communities,” Education Secretary Linda McMahon said during the meeting. “Ed is fully aligned with the Presidential AI Challenge, and is encouraging students and educators to explore AI technologies with curiosity and with creativity. It’s not one of those things to be afraid of. Let’s embrace it.”
Secretary of Agriculture Brooke Rollins spotlighted the expansive partnerships between the agency and external entities to bring AI systems into agrarian workflows.
“Far too often, those living and working in the rural parts of our country are left behind and do not always have the same access to the most recent technological innovations that our urban counterparts across the country do,” Rollins said. “We cannot let that happen with AI.”
USDA will focus on bringing AI systems into agricultural workflows and education, particularly for predictive analyses based on existing agriculture knowledge and data. Sensor systems, robotics and automation are all areas that are slated to modernize the agricultural industry, with help from private sector partners like Microsoft and academia, including Iowa State University and Texas State University.
Secretary of Labor Lori Chavez-DeRemer said her agency is expanding AI access and literacy through several vehicles — notably via apprenticeship opportunities, part of Labor and Commerce’s joint Talent Strategy that was released earlier in August.
“On-the-job training programs will help fill the mortgage-paying jobs that AI will create, while also enhancing the unique skills required to succeed in various industries,” Chavez-DeRemer said. “Expanding these opportunities is a key component of our strategy to reach the president’s goal of 1 million new, active apprentices across the United States.”
Chavez-DeRemer also previewed pending partnerships to help disseminate AI education and training materials across the country, along with future best practices for effective AI literacy training.
Several private sector companies were also in attendance to explain their commitments to supporting the initiative, noting that developing and expanding AI education is necessary to keep up with the demands of the growing AI-centric labor market. Alphabet CEO Sundar Pichai and IBM CEO Arvind Krishna announced their companies’ individual billion- and million-dollar commitments, respectively, to bolster AI education within academia and the existing workforce.
“This is all in the service of helping the next generation to solve problems, fuel innovation and build an incredible future,” Pichai said. “These are all goals we all share. We are incredibly thankful for the partnership and the leadership from the first lady, the president and the administration, and for showing us the way.”
The updates to the Presidential AI Challenge reflect the Trump administration’s no-holds-barred approach to both incorporating AI and machine learning into the government and ensuring the U.S. will lead in new AI technologies at the global level.
AI Research
UWF receives $100,000 grant from Air Force to advance AI and robotics research

PENSACOLA, Fla. — The University of West Florida has been awarded a major grant to help develop cutting-edge artificial intelligence technology.
The US Air Force Research Laboratory awarded $100,000 to UWF’s Intelligent Systems and Robotics doctorate program.
The grant supports research in artificial intelligence and robotics while training PhD students.
The funding was awarded to explore how these systems can support military operations, but also how they can be applied to issues we could face locally, like disasters.
Unlike generative AI in apps like ChatGPT, this research focuses on “reinforcement learning.”
“It’s action-driven. It’s designed to produce strategies versus content and text or visual content,” said Dr. Kristen “Brent” Venable with UWF.
Dr. Venable is leading the research.
Her team is designing simulations that teach autonomous systems such as robots and drones how to adapt to the environment around them without human help — enabling the drones to make decisions on their own.
“So if we deployed them and let them go autonomously, sometimes far away, they should be able to decide whether to communicate, whether to go in a certain direction,” she said.
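To make that distinction concrete, here is a minimal, generic reinforcement-learning sketch (tabular Q-learning on a toy corridor). It is a textbook-style illustration, not the UWF/AFRL research code: the output of training is a policy, a mapping from situations to actions, rather than generated text or images.

```python
# Minimal sketch of reinforcement learning (tabular Q-learning) on a toy
# 1-D corridor: the agent learns which *actions* (move left/right) reach a
# goal, i.e. it produces a strategy rather than content.
import random

N_STATES = 6          # positions 0..5; position 5 is the goal
ACTIONS = (-1, +1)    # move left or move right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Environment dynamics: move within bounds, reward 1.0 on reaching the goal."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

def greedy(state):
    """Best-known action in a state, breaking ties randomly."""
    best = max(Q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(state, a)] == best])

for _ in range(2000):                      # training episodes
    state = 0
    for _ in range(100):                   # step cap per episode
        # epsilon-greedy: mostly exploit the current strategy, sometimes explore
        action = random.choice(ACTIONS) if random.random() < EPSILON else greedy(state)
        nxt, reward, done = step(state, action)
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt
        if done:
            break

# The learned strategy: the preferred action in each non-terminal state.
policy = {s: greedy(s) for s in range(N_STATES - 1)}
print(policy)  # typically {0: 1, 1: 1, 2: 1, 3: 1, 4: 1}: always move right toward the goal
```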
The initial goal of the grant is to help the US military leverage machine learning.
But Dr. Venable says the technology has potential to help systems like local emergency management during a disaster.
“You can see how this could be applied for disaster response,” she said. “Think about having some drones that have to fly over a zone and find people to be rescued or assets that need to be restored.”
Dr. Venable says UWF is poised to deliver on its promises to advance the technology.
The doctorate program was created with Pensacola’s Institute for Human and Machine Cognition, giving students access to world-class AI and robotics research.
Over the last five years, the program has expanded to more than 30 students.
“We are very well positioned because the way we are, in some sense, lean and mean is attractive to funding agencies,” Dr. Venable said. “Because we can deliver results while training the next generation.”
The local investment by the Air Force comes as artificial intelligence takes center stage nationally.
On Thursday, First Lady Melania Trump announced a presidential AI challenge for students and educators.
President Trump has also signed an executive order to expand AI education.
Dr. Venable says she’s confident the administration’s push for research will benefit the university’s efforts, as the one-year grant will only go so far.
“I think the administration is correctly identifying [this] as a key factor in having the US lead on the research,” she said. “It’s a good seedling to start the conversation for one year.”
The research conducted at UWF and the IHMC is helping put the area on the map as an intelligence hub.
Dr. Venable says they’re actively discussing how to apply for more grants to help with this ongoing research.
AI Research
NSF Seeks to Advance AI Research Via New Operations Center