AI Research
Computer science research papers show fastest uptake of AI use in writing, analysis finds

Large language models, which are powered by artificial intelligence (AI), are trained on vast amounts of text and can therefore respond to human requests in natural language.
Researchers from Stanford University and other institutes in the US looked at 1,121,912 pre-print papers in the archives ‘arXiv’ and ‘bioRxiv’, and published papers across Nature journals from January 2020 to September 2024.
By tracking how often words commonly used by AI systems appeared in the papers, the team estimated the involvement of a large language model — ChatGPT in this study — in modifying their content.
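The full analysis is statistical, but the underlying idea, comparing how often LLM-favoured marker words appear against a pre-ChatGPT baseline, can be sketched in a few lines of Python. The marker-word list and the simple rate comparison below are illustrative assumptions, not the study's actual vocabulary or model.

```python
from collections import Counter
import re

# Illustrative marker words often associated with LLM-edited prose.
# These are assumptions for the sketch; the study fits a statistical
# model over word frequencies rather than using a fixed list.
MARKER_WORDS = {"delve", "pivotal", "intricate", "showcasing", "underscore"}

def marker_rate(text: str) -> float:
    """Fraction of word tokens in `text` that are marker words."""
    tokens = re.findall(r"[a-z]+", text.lower())
    if not tokens:
        return 0.0
    counts = Counter(tokens)
    return sum(counts[w] for w in MARKER_WORDS) / len(tokens)

def corpus_shift(recent_docs: list[str], baseline_docs: list[str]) -> float:
    """Difference in average marker rate between recent and pre-LLM papers;
    a sustained positive shift is (weak) evidence of LLM involvement."""
    avg = lambda docs: sum(marker_rate(d) for d in docs) / max(len(docs), 1)
    return avg(recent_docs) - avg(baseline_docs)
```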
Results published in the journal Nature Human Behaviour “suggest a steady increase in LLM (large language model) usage, with the largest and fastest growth estimated for computer science papers (up to 22%).”
The researchers also estimated greater reliance on AI systems among pre-print papers in the archive ‘bioRxiv’ written by authors from regions with fewer native English speakers, such as China and continental Europe.
However, papers related to mathematics and those published across Nature journals showed less evidence of AI use in modifying content, according to the analysis.
The study team said that shorter papers, and papers whose authors post pre-prints more frequently, showed higher rates of AI use in writing, suggesting that researchers trying to produce a greater quantity of writing are more likely to rely on LLMs.
“These results may be an indicator of the competitive nature of certain research areas and the pressure to publish quickly,” the team said.
The researchers also looked at a smaller number of papers to understand how scholars disclose use of AI in their writing.
An inspection of 200 randomly selected computer science papers uploaded to the pre-print archive ‘arXiv’ in February 2024 revealed that “only two out of the 200 papers explicitly disclosed the use of LLMs during paper writing”.
Future studies looking at disclosure statements might help to understand researchers’ motivation for using AI in writing papers.
For example, policies around disclosing LLM usage in academic writing may still be unclear, or scholars may have other reasons for intentionally choosing not to disclose their use of AI, the authors said.
A recent study, published in the journal Science, estimated that at least 13% of research abstracts published in 2024 may have been written with the help of a large language model, as they included more of the ‘style’ words favoured by these AI systems.
Researchers from the University of Tübingen, Germany, who analysed more than 15 million biomedical papers published from 2010 to 2024, said that AI models have caused a drastic shift in the vocabulary used in academic writing.
AI Research
UWF receives $100,000 grant from Air Force to advance AI and robotics research

PENSACOLA, Fla. — The University of West Florida was just awarded a major grant to help develop cutting-edge artificial intelligence technology.
The US Air Force Research Laboratory awarded $100,000 to UWF’s Intelligent Systems and Robotics doctoral program.
The grant supports research in Artificial Intelligence and robotics while training PhD students.
The funding was awarded to explore how these systems can support military operations, but also how they can be applied to issues we could face locally, such as disasters.
Unlike generative AI in apps like ChatGPT, this research focuses on “reinforcement learning.”
“It’s action-driven. It’s designed to produce strategies versus content and text or visual content,” said Dr. Kristen “Brent” Venable with UWF.
Dr. Venable is leading the research.
Her team is designing simulations that teach autonomous systems like robots and drones how to adapt to the environment around them without human help — enabling them to make decisions on their own.
“So if we deployed them and let them go autonomously, sometimes far away, they should be able to decide whether to communicate, whether to go in a certain direction,” she said.
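Reinforcement learning of this kind is usually framed as an agent trying actions and learning which ones earn reward over time. The sketch below is a generic, textbook tabular Q-learning example in Python, with a “drone” learning to reach a target cell on a small grid; the grid, actions, and rewards are assumptions for illustration, not the UWF team’s simulation code.

```python
import random

# Toy grid world: the agent ("drone") starts at (0, 0) and earns +1 for
# reaching the target cell; everything else gives no reward.
ACTIONS = ["up", "down", "left", "right"]
MOVES = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}
SIZE, TARGET = 4, (3, 3)

def step(state, action):
    dx, dy = MOVES[action]
    x = min(max(state[0] + dx, 0), SIZE - 1)
    y = min(max(state[1] + dy, 0), SIZE - 1)
    nxt = (x, y)
    return nxt, (1.0 if nxt == TARGET else 0.0), nxt == TARGET

Q = {}                          # (state, action) -> learned value
alpha, gamma, eps = 0.1, 0.9, 0.2

for _ in range(500):            # learn purely from trial and error
    state, done = (0, 0), False
    while not done:
        if random.random() < eps:          # explore occasionally
            action = random.choice(ACTIONS)
        else:                              # otherwise follow the learned strategy
            action = max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))
        nxt, reward, done = step(state, action)
        best_next = max(Q.get((nxt, a), 0.0) for a in ACTIONS)
        old = Q.get((state, action), 0.0)
        Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
        state = nxt

# After training, the greedy policy encodes a strategy: at each cell the
# agent picks the action with the highest learned value.
```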
The initial goal of the grant is to help the US military leverage machine learning.
But Dr. Venable says the technology has potential to help systems like local emergency management during a disaster.
“You can see how this could be applied for disaster response,” she said. “Think about having some drones that have to fly over a zone and find people to be rescued or assets that need to be restored.”
Dr. Venable says UWF is poised to deliver on its promise to innovate the technology.
The doctoral program was created with Pensacola’s Institute for Human and Machine Cognition, giving students access to world-class AI and robotics research.
Over the last five years, the program has expanded to more than 30 students.
“We are very well positioned because the way we are, in some sense, lean and mean is attractive to funding agencies,” Dr. Venable said. “Because we can deliver results while training the next generation.”
The local investment by the Air Force comes as artificial intelligence takes center stage nationally.
On Thursday, First Lady Melania Trump announced a presidential AI challenge for students and educators.
President Trump has also signed an executive order to expand AI education.
Dr. Venable says she’s confident the administration’s push for research will benefit the university’s efforts, as the one-year grant will only go so far.
“I think the administration is correctly identifying [AI] as a key factor in having the US lead on the research,” she said. “It’s a good seedling to start the conversation for one year.”
The research conducted at UWF and the IHMC is helping put the area on the map as an artificial intelligence hub.
Dr. Venable says they’re actively discussing how to apply for more grants to help with this ongoing research.
AI Research
NSF Seeks to Advance AI Research Via New Operations Center
AI Research
UCR Researchers Strengthen AI Defenses Against Malicious Rewiring

As generative artificial intelligence (AI) technologies evolve and establish their presence in devices as commonplace as smartphones and automobiles, a significant concern arises. These powerful models, built on intricate architectures that run on robust cloud servers, often undergo significant reductions in capacity when adapted for lower-powered devices, and one of the most alarming consequences is that critical safety mechanisms can be lost in the transition. Researchers from the University of California, Riverside (UCR) have identified this issue and developed a solution aimed at preserving AI safety even as the models are slimmed down for practical use.
Reducing a generative AI model typically means removing some of its internal processing layers, and some of those layers turn out to be vital for maintaining safety standards. While smaller models are favored for their speed and efficiency, this trimming can inadvertently strip away the mechanisms that prevent harmful outputs such as hate speech or instructions for illicit activities. It is a double-edged sword: the very modifications aimed at optimizing performance may leave the models susceptible to misuse.
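In transformer-based models this kind of compression often amounts to skipping some of the stacked layers at inference time. The PyTorch sketch below is only a toy illustration of that idea on a generic layer stack; the dimensions, the layers kept, and the model itself are assumptions, not the UCR team’s code or the actual architecture they studied.

```python
import torch
import torch.nn as nn

class TinyLayerStack(nn.Module):
    """Stand-in for a stack of transformer blocks (toy dimensions)."""
    def __init__(self, n_layers: int = 12, d_model: int = 64):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
            for _ in range(n_layers)
        )

    def forward(self, x: torch.Tensor, keep: list[int] | None = None) -> torch.Tensor:
        # keep=None runs the full stack; otherwise only the listed layers run,
        # roughly what layer pruning for low-powered devices does.
        for i, layer in enumerate(self.layers):
            if keep is None or i in keep:
                x = layer(x)
        return x

model = TinyLayerStack()
x = torch.randn(1, 16, 64)                   # (batch, sequence, hidden)
full_output = model(x)                       # full-depth model
pruned_output = model(x, keep=[0, 2, 4, 6])  # compressed variant skips later layers
# If refusal behaviour depends mostly on the skipped layers, the pruned
# variant can lose it, which is the failure mode described above.
```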
The challenge lies not only in the effectiveness of the AI systems but also in the very nature of open-source models, which are inherently different from proprietary systems. Open-source AI models can be easily accessed, modified, and deployed by anyone, significantly enhancing transparency and encouraging academic growth. However, this openness also invites a plethora of risks, as oversight becomes difficult when these models deviate from their original design. In situations devoid of continuous monitoring and moderation, the potential misuse of these technologies grows exponentially.
In their research, the UCR team concentrated on the degradation of safety features that occurs when AI models are downsized. Amit Roy-Chowdhury, the senior author of the study and a professor at UCR, puts the concern plainly: “Some of the skipped layers turn out to be essential for preventing unsafe outputs.” The statement highlights the danger of a seemingly innocuous tweak made to improve efficiency: removal of layers may lead a model to generate dangerous outputs—including inappropriate content or even detailed instructions for harmful activities like bomb-making—when it encounters complex prompts.
The researchers’ strategy involved retraining the model’s internal structure rather than relying on external filters or software patches, which are often circumvented or simply ineffective. By reassessing how the model identifies and interprets dangerous content, they were able to instill a level of intrinsic safety, ensuring that even after layers were removed the model retained its ability to refuse harmful queries.
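One plausible way to read that description is to fine-tune the model so that refusal behaviour does not live in any single layer, for example by training under randomly pruned sub-networks. The sketch below extends the toy layer stack from the earlier snippet in that spirit; the refusal head, the loss, and the `safety_batches` data source are hypothetical placeholders, not the method actually reported by the UCR team.

```python
import random
import torch

def random_keep(n_layers: int, keep_prob: float = 0.7) -> list[int]:
    """Sample a random subset of layers, mimicking unknown downstream pruning."""
    kept = [i for i in range(n_layers) if random.random() < keep_prob]
    return kept or [0]                       # never drop every layer

def safety_finetune(model, safety_batches, steps: int = 100, lr: float = 1e-4):
    """Hypothetical loop: `safety_batches` yields (inputs, refusal_labels) pairs
    built from harmful prompts the model should refuse."""
    head = torch.nn.Linear(64, 1)            # toy refusal classifier over pooled states
    params = list(model.parameters()) + list(head.parameters())
    opt = torch.optim.Adam(params, lr=lr)
    loss_fn = torch.nn.BCEWithLogitsLoss()
    for _, (x, refuse) in zip(range(steps), safety_batches):
        keep = random_keep(len(model.layers))            # a different pruned variant each step
        hidden = model(x, keep=keep)                     # forward pass through the sub-network
        logits = head(hidden.mean(dim=1)).squeeze(-1)    # pooled refusal score per example
        loss = loss_fn(logits, refuse.float())           # refusal must survive any layer subset
        opt.zero_grad()
        loss.backward()
        opt.step()
```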
The core of their testing utilized LLaVA 1.5, a sophisticated vision-language model that integrates both textual and visual data. The researchers discovered that certain combinations of innocuous images with malicious inquiries could effectively bypass initial safety measures. Their findings were alarming; in a particular instance, the modified model furnished dangerously specific instructions for illicit activities. This critical incident underscored the pressing need for an effective method to safeguard against such vulnerabilities in AI systems.
Nevertheless, after implementing their retraining methodology, the researchers noted a significant improvement in the model’s safety metrics. The retrained AI consistently refused to engage with perilous queries, even when its architecture was substantially reduced. This marks a meaningful step forward in AI safety, where the model’s internal conditioning ensures protective behavior from the outset.
Saketh Bachu, one of the graduate students and co-lead authors, describes this focus as a form of “benevolent hacking”: by proactively reinforcing AI models, the risk of their vulnerabilities being exploited diminishes. The long-term ambition behind the research is to establish methodologies that guarantee safety across every internal layer of the AI architecture, yielding a more resilient framework capable of operating securely in varied real-world conditions.
The implications of this research span beyond the technical realm; they touch upon ethical considerations and societal impacts as AI continues to infiltrate daily life. As generative AI becomes ubiquitous in our gadgets and tools, ensuring that these technologies do not propagate harm is not only a technological challenge but a moral imperative. There exists a delicate balance between innovation and responsibility, and pioneering research such as that undertaken at UCR is pivotal in traversing this complex landscape.
Roy-Chowdhury encapsulates the team’s vision by asserting, “There’s still more work to do. But this is a concrete step toward developing AI in a way that’s both open and responsible.” His words resonate deeply within the ongoing discourse surrounding generative AI, as the conversation evolves from mere implementation to a collaborative effort aimed at securing the future of AI development. The landscape of AI technologies is ever-shifting, and through continued research and exploration, academic institutions such as UCR signal the emergence of a new era where safety and openness coalesce. Their commitment to fostering a responsible and transparent AI ecosystem offers a bright prospect for future developments in the field.
The research drew not only on faculty expertise but also on a dedicated team of graduate students, underscoring the value of interdisciplinary collaboration in tackling the challenges posed by emerging technologies. The team, consisting of Amit Roy-Chowdhury, Saketh Bachu, Erfan Shayegani, and additional doctoral students, worked together to create a robust framework for rethinking AI safety in dynamic environments.
Through their contributions, the University of California, Riverside stands at the forefront of AI research, championing methodologies that underline the importance of safety amid innovation. Their work serves as a blueprint for future endeavors that prioritize responsible AI development, inspiring other researchers and institutions to pursue similar paths. As generative AI continues to evolve, the principles established by this research will likely have a lasting impact, shaping the fundamental understanding of safety in AI technologies for generations to come.
Ultimately, as society navigates this unfolding narrative in artificial intelligence, the collaboration between academia and industry will be vital. The insights gained from UCR’s research can guide policies and frameworks that ensure the safe and ethical deployment of AI across various sectors. By embedding safety within the core design of AI models, we can work towards a future where these powerful tools enhance our lives without compromising our values or security.
While the journey towards achieving comprehensive safety in generative AI is far from complete, advancements like those achieved by the UCR team illuminate the pathway forward. As they continue to refine their methodologies and explore new horizons, the research serves as a clarion call for vigilance and innovation in equal measure. As we embrace a future that increasingly intertwines with artificial intelligence, let us collectively advocate for an ecosystem that nurtures creativity and safeguards humanity.
Subject of Research: Preserving AI Safeguards in Reduced Models
Article Title: UCR’s Groundbreaking Approach to Enhancing AI Safety
News Publication Date: October 2023
Web References: arXiv paper
References: International Conference on Machine Learning (ICML)
Image Credits: Stan Lim/UCR
Tags: AI safety mechanisms, generative AI technology concerns, innovations in AI safety standards, internal processing layers in AI, malicious rewiring in AI models, open-source AI model vulnerabilities, operational capacity reduction in AI, optimizing functional performance in AI, preserving safety in low-powered devices, risks of smaller AI models, safeguarding against harmful AI outputs, UCR research on AI defenses