AI Research
AI Tool Flags Predatory Journals, Building a Firewall for Science

Summary: A new AI system developed by computer scientists automatically screens open-access journals to identify potentially predatory publications. These journals often charge high fees to publish without proper peer review, undermining scientific credibility.
The AI analyzed over 15,000 journals and flagged more than 1,000 as questionable, offering researchers a scalable way to spot risks. While the system isn’t perfect, it serves as a crucial first filter, with human experts making the final calls.
Key Facts
- Predatory Publishing: Journals exploit researchers by charging fees without quality peer review.
- AI Screening: The system flagged over 1,000 suspicious journals out of 15,200 analyzed.
- Firewall for Science: Helps preserve trust in research by protecting against bad data.
Source: University of Colorado
A team of computer scientists led by the University of Colorado Boulder has developed a new artificial intelligence platform that automatically seeks out “questionable” scientific journals.
The study, published Aug. 27 in the journal “Science Advances,” tackles an alarming trend in the world of research.
Daniel Acuña, lead author of the study and associate professor in the Department of Computer Science, gets a reminder of that several times a week in his email inbox: These spam messages come from people who purport to be editors at scientific journals, usually ones Acuña has never heard of, and offer to publish his papers—for a hefty fee.
Such publications are sometimes referred to as “predatory” journals. They target scientists, convincing them to pay hundreds or even thousands of dollars to publish their research without proper vetting.
“There has been a growing effort among scientists and organizations to vet these journals,” Acuña said. “But it’s like whack-a-mole. You catch one, and then another appears, usually from the same company. They just create a new website and come up with a new name.”
His group’s new AI tool automatically screens scientific journals, evaluating their websites and other online data for certain criteria: Do the journals have an editorial board featuring established researchers? Do their websites contain a lot of grammatical errors?
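For readers curious what such screening can look like in practice, here is a minimal, hypothetical sketch of two of the website signals mentioned above (an editorial-board page and sloppy copy). It is not the team's actual system; the function name, the error heuristic, and the thresholds are invented purely for illustration.

```python
# Hypothetical sketch of two website signals described in the article:
# whether a journal site mentions an editorial board, and a crude proxy
# for garbled or careless copy. Not the CU Boulder system; the feature
# names and heuristics are invented for illustration only.
import re
import requests


def screen_journal_site(url: str) -> dict:
    """Fetch a journal's homepage and extract two toy screening signals."""
    html = requests.get(url, timeout=10).text.lower()

    has_editorial_board = "editorial board" in html

    # Very rough proxy for sloppy text: the share of alphabetic tokens
    # that are implausibly long (often concatenation or typesetting errors).
    tokens = re.findall(r"[a-z']+", html)
    long_tokens = sum(1 for t in tokens if len(t) > 25)
    garbled_rate = long_tokens / max(len(tokens), 1)

    return {
        "has_editorial_board": has_editorial_board,
        "garbled_rate": round(garbled_rate, 4),
    }


# In a real system, many such features would be fed to a trained classifier
# rather than judged by hand-written thresholds.
```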
Acuña emphasizes that the tool isn’t perfect. Ultimately, he thinks human experts, not machines, should make the final call on whether a journal is reputable.
But in an era when prominent figures are questioning the legitimacy of science, stopping the spread of questionable publications has become more important than ever before, he said.
“In science, you don’t start from scratch. You build on top of the research of others,” Acuña said. “So if the foundation of that tower crumbles, then the entire thing collapses.”
The shakedown
When scientists submit a new study to a reputable publication, that study usually undergoes a practice called peer review. Outside experts read the study and evaluate it for quality—or, at least, that’s the goal.
A growing number of companies have sought to circumvent that process to turn a profit. In 2009, Jeffrey Beall, a librarian at CU Denver, coined the phrase “predatory” journals to describe these publications.
Often, they target researchers outside of the United States and Europe, such as in China, India and Iran—countries where scientific institutions may be young, and the pressure and incentives for researchers to publish are high.
“They will say, ‘If you pay $500 or $1,000, we will review your paper,’” Acuña said. “In reality, they don’t provide any service. They just take the PDF and post it on their website.”
A few different groups have sought to curb the practice. Among them is a nonprofit organization called the Directory of Open Access Journals (DOAJ).
Since 2003, volunteers at the DOAJ have flagged thousands of journals as suspicious based on six criteria. (Reputable publications, for example, tend to include a detailed description of their peer review policies on their websites.)
But keeping pace with the spread of those publications has been daunting for humans.
To speed up the process, Acuña and his colleagues turned to AI. The team trained its system using the DOAJ’s data, then asked the AI to sift through a list of nearly 15,200 open-access journals on the internet.
Among those journals, the AI initially flagged more than 1,400 as potentially problematic.
Acuña and his colleagues asked human experts to review a subset of the suspicious journals. The AI made mistakes, according to the humans, flagging an estimated 350 publications as questionable when they were likely legitimate. That still left more than 1,000 journals that the researchers identified as questionable.
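Taken at face value, the rounded figures above imply that roughly three in four of the AI's initial flags survived human review. A quick back-of-envelope check (approximate, since the article rounds its numbers):

```python
# Back-of-envelope check using the rounded numbers quoted above; the
# article gives only approximate figures, so the result is indicative.
flagged_initially = 1400   # journals the AI first flagged
false_positives = 350      # flags human reviewers judged likely legitimate

still_questionable = flagged_initially - false_positives   # ~1,050
share_upheld = still_questionable / flagged_initially      # ~0.75

print(still_questionable, round(share_upheld, 2))
```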
“I think this should be used as a helper to prescreen large numbers of journals,” he said. “But human professionals should do the final analysis.”
A firewall for science
Acuña added that the researchers didn’t want their system to be a “black box” like some other AI platforms.
“With ChatGPT, for example, you often don’t understand why it’s suggesting something,” Acuña said. “We tried to make ours as interpretable as possible.”
The team discovered, for example, that questionable journals published an unusually high number of articles. Their authors also tended to list more institutional affiliations than authors in legitimate journals, and they cited their own research, rather than the work of other scientists, at unusually high rates.
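As a rough illustration of how such interpretable signals could be computed for a single journal, the toy snippet below derives article volume, affiliations per author, and a self-citation rate from made-up metadata records. The record format is invented and does not reflect the study's data.

```python
# Toy metadata records for one journal; the fields are invented purely to
# show how the interpretable signals mentioned above could be computed.
articles = [
    {"author_affiliations": [3, 1], "references": 40, "self_citations": 18},
    {"author_affiliations": [5],    "references": 35, "self_citations": 20},
    {"author_affiliations": [4, 2], "references": 50, "self_citations": 22},
]

article_count = len(articles)
all_affiliations = [n for art in articles for n in art["author_affiliations"]]
avg_affiliations_per_author = sum(all_affiliations) / len(all_affiliations)
self_citation_rate = (sum(a["self_citations"] for a in articles)
                      / sum(a["references"] for a in articles))

# Unusually high values on any of these would contribute to a flag.
print(article_count,
      round(avg_affiliations_per_author, 2),
      round(self_citation_rate, 2))
```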
The new AI system isn’t publicly accessible, but the researchers hope to make it available to universities and publishing companies soon. Acuña sees the tool as one way that researchers can protect their fields from bad data—what he calls a “firewall for science.”
“As a computer scientist, I often give the example of when a new smartphone comes out,” he said.
“We know the phone’s software will have flaws, and we expect bug fixes to come in the future. We should probably do the same with science.”
About this AI and science research news
Author: Daniel Strain
Source: University of Colorado
Contact: Daniel Strain – University of Colorado
Image: The image is credited to Neuroscience News
Original Research: Open access.
“Estimating the predictability of questionable open-access journals” by Daniel Acuña et al. Science Advances
Abstract
Estimating the predictability of questionable open-access journals
Questionable journals threaten global research integrity, yet manual vetting can be slow and inflexible.
Here, we explore the potential of artificial intelligence (AI) to systematically identify such venues by analyzing website design, content, and publication metadata.
Evaluated against extensive human-annotated datasets, our method achieves practical accuracy and uncovers previously overlooked indicators of journal legitimacy.
By adjusting the decision threshold, our method can prioritize either comprehensive screening or precise, low-noise identification.
At a balanced threshold, we flag over 1000 suspect journals, which collectively publish hundreds of thousands of articles, receive millions of citations, acknowledge funding from major agencies, and attract authors from developing countries.
Error analysis reveals challenges involving discontinued titles, book series misclassified as journals, and small society outlets with limited online presence, which are issues addressable with improved data quality.
Our findings demonstrate AI’s potential for scalable integrity checks, while also highlighting the need to pair automated triage with expert review.
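The abstract's point about adjusting the decision threshold is a standard precision/recall tradeoff. The sketch below illustrates that mechanism on synthetic data with an ordinary logistic regression; it is not the authors' classifier, features, or data.

```python
# Generic illustration of the threshold tradeoff described in the abstract:
# a lower threshold screens more broadly (higher recall, more false alarms),
# while a higher threshold flags fewer journals but more precisely. Synthetic
# data and a plain logistic regression stand in for the paper's actual model.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)  # positives are rare
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

scores = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict_proba(X_te)[:, 1]

for threshold in (0.2, 0.5, 0.8):
    flags = scores >= threshold
    print(f"threshold {threshold}: flagged {flags.sum():4d}  "
          f"precision {precision_score(y_te, flags):.2f}  "
          f"recall {recall_score(y_te, flags):.2f}")
```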
AI Research
Chinese Startup DeepSeek Challenges Silicon Valley AI Dominance with Research Focus

In the rapidly evolving world of artificial intelligence, Chinese startup DeepSeek is emerging as a formidable player, prioritizing cutting-edge research over immediate commercial gains. Founded in 2023, the company has quickly gained attention for its innovative approaches to large language models, challenging the dominance of Silicon Valley giants. Unlike many U.S.-based firms that chase profitability through aggressive monetization, DeepSeek’s strategy emphasizes foundational advancements in AI architecture, drawing praise from industry observers for its long-term vision.
This focus on research has allowed DeepSeek to develop models that excel in efficiency and performance, particularly in training and inference processes. For instance, their proprietary techniques in sparse activation and optimized
AI Research
3 ways AI kiosks are rewriting the civic engagement playbook

Across the country, public agencies face a common challenge: how to deliver vital services equitably in the face of limited resources, rising expectations, and increasingly diverse populations.
Traditional government service models — centralized, bureaucratic, and often paper-based — struggle to keep pace with the needs of rural residents, multilingual communities and military families, whose mobility and time constraints demand flexibility.
But a new generation of civic infrastructure is beginning to take shape, one that blends artificial intelligence with physical access points in the communities that need them most. Intelligent self-service kiosks are emerging as a practical tool for expanding access to justice and other essential services, without adding administrative burden or requiring residents to navigate unfamiliar digital portals at home.
El Paso County, Texas, offers one compelling case study. In June 2024, the County launched a network of AI-enabled kiosks that allow residents to complete court-related tasks, from submitting forms and payments to accessing legal guidance, in both English and Spanish. The kiosks are placed in strategic community locations, including the Tigua Indian Reservation and Fort Bliss, enabling access where it’s needed most.
Three lessons from this rollout may prove instructive for government leaders elsewhere:
1. Meet People Where They Are…Literally
Too often, civic access depends on residents coming to centralized locations during limited hours. For working families, rural residents and military personnel, that model simply doesn’t work.
Placing kiosks in trusted, high-traffic locations like base welcome centers or community annexes removes that barrier and affirms a simple principle: access shouldn’t be an ordeal.
At Fort Bliss, for example, the kiosk allows service members to fulfill court-related obligations without taking leave or leaving the base at all. In just one month, nearly 500 military residents used the kiosk. Meanwhile, over 670 transactions have been completed on the Ysleta del Sur Pueblo (also known as the Tigua Indian Reservation), where access to public transportation is a challenge.
2. Design for Inclusion, Not Just Efficiency
While technology can streamline service delivery, it can also unintentionally exclude those with limited digital literacy or English proficiency. Multilingual AI interfaces and accessible user flows are both technical features and equity enablers.
In El Paso County, 20% of kiosk interactions have occurred in Spanish. This uptake highlights the importance of designing systems that reflect the communities they serve, rather than assuming one-size-fits-all access.
3. Think Beyond Digitization and Aim for Democratization
Many digital transformation efforts focus on moving services online, but that shift often leaves behind those without broadband, personal devices, or comfort with navigating complex websites. By embedding smart kiosks in the public realm, governments can provide digital tools without requiring digital privilege.
Moreover, these tools can reduce workload for front-line staff by automating routine transactions, freeing up human workers to focus on complex or high-touch cases. In that way, technology doesn’t replace the human element; it protects and supports it.
The El Paso County model is not the first of its kind, but its thoughtful implementation across geographically and demographically diverse communities offers a replicable roadmap. Other jurisdictions, from Miami to Ottawa County, Michigan, are piloting similar solutions tailored to local needs.
Ultimately, the path forward isn’t about flashy tech or buzzwords. It’s about pragmatism. It’s about recognizing that trust in government is built not through rhetoric but through responsiveness, and that sometimes, responsiveness looks like a kiosk in a community center that speaks your language and knows what you need.
For public officials considering a similar approach, the advice is simple: start with the barriers your residents face, then work backward. Let inclusion, not efficiency, guide your design. And remember that innovation in public service doesn’t always mean moving faster. Sometimes, it means stopping to ask who’s still being left behind.
Pritesh Bhavsar is the founding technology leader at Advanced Robot Solutions.
AI Research
GreetEat Corporation (OTC: GEAT) Announces Official Re-Launch of Wall Street Stats Mobile Applications with Advanced AI and Machine Learning Features

RENO, Nev., Sept. 02, 2025 (GLOBE NEWSWIRE) — GreetEat Corporation (OTC: GEAT), a forward-thinking technology company dedicated to building next-generation platforms, today announced the official re-launch of its subsidiary Wall Street Stats (WallStreetStats.io) applications on both iOS and Android. The updated apps deliver a powerful suite of new tools designed to empower investors with deeper insights, smarter analytics, and a cutting-edge user experience.
The new release introduces an upgraded platform driven by artificial intelligence and machine learning, providing users with:
- Detailed Quotes & Company Profiles – Comprehensive financial data with intuitive visualization.
- Summarized Market Intelligence – AI-powered data aggregation and automated summarization for faster decision-making.
- Sentiment Analysis via Reddit & Social Platforms – Machine learning models that detect, classify, and quantify investor sentiment in real time.
- Trending Stocks, Top Gainers, Top Losers, and Most Active Lists – AI-curated market movers updated dynamically throughout the day.
- Smart Watchlists – Personalized watchlists enhanced by predictive analytics and recommendation algorithms.
- AI-Driven Market Predictions – Leveraging natural language processing (NLP), deep learning, and behavioral pattern recognition to uncover emerging investment opportunities.
“Wall Street Stats was designed to go beyond traditional financial data and offer an AI-first experience that empowers both retail and professional investors,” said Victor Sima, CTO of GreetEat Corporation. “With this re-launch, we’ve combined the best of real-time market intelligence with machine learning-powered insights that make data more actionable, intuitive, and predictive. This is just the beginning of our vision to democratize Wall Street-level analytics for everyone.”
The platform’s enhanced features are aimed at giving investors a competitive edge by uncovering hidden patterns, predicting momentum, and providing smarter investment signals. With natural language processing, predictive modeling, and real-time data analytics, Wall Street Stats represents a new era in financial technology innovation.
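The release does not say which models power the sentiment features, so purely as an illustration of the general idea, the snippet below scores a couple of mock social-media posts with NLTK's off-the-shelf VADER analyzer. The posts and the choice of tool are assumptions, not details of the Wall Street Stats platform.

```python
# Illustrative only: scoring mock investor posts with NLTK's VADER sentiment
# analyzer. The actual Wall Street Stats models are not disclosed in the
# release; this just shows what "quantifying investor sentiment" can mean.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)   # one-time lexicon download
sia = SentimentIntensityAnalyzer()

posts = [
    "Great earnings beat, loading up on more shares",
    "Terrible guidance, selling my whole position tomorrow",
]
for post in posts:
    score = sia.polarity_scores(post)["compound"]   # -1 (bearish) .. +1 (bullish)
    print(f"{score:+.2f}  {post}")
```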
The applications are now available for download on both the Apple App Store and Google Play Store.
About GreetEat Corporation
GreetEat Corporation (OTC: GEAT) is a technology-driven platform designed to bring people together through virtual dining. Whether for business meetings, celebrations, or personal connections, GreetEat blends video conferencing with meal delivery to create meaningful, shared experiences anywhere in the world. In addition to GreetEat.com, the company also owns WallStreetStats.io, a cutting-edge fintech app that leverages AI and machine learning to analyze social sentiment, market trends, and trading signals in real time, available on both Android and iOS stores.
For Investor Relations or Media Inquiries:
GreetEat Corporation
Email: investors@GreetEat.com
Website: www.GreetEat.com
Connect with GreetEat Corporation
Website: www.GreetEat.com
Website: www.WallStreetStats.io
Follow us on social media:
Download the apps via the links below:
Apple App Store and Google Play Store.
Forward-Looking Statements: This press release contains forward-looking statements within the meaning of the Private Securities Litigation Reform Act of 1995. These forward-looking statements are based on current expectations, estimates, and projections about the company’s business and industry, management’s beliefs, and certain assumptions made by the management. Such statements involve risks and uncertainties that could cause actual results to differ materially from those in the forward-looking statements. The company undertakes no obligation to update or revise any forward-looking statements, whether as a result of new information, future events, or otherwise.