Is it OK for AI to write science papers? Nature survey shows researchers are split

How much is the artificial intelligence (AI) revolution altering the process of communicating science? With generative AI tools such as ChatGPT improving so rapidly, attitudes about using them to write research papers are also evolving. The number of papers with signs of AI use is rising rapidly (D. Kobak et al. Preprint at arXiv https://doi.org/pkhp; 2024), raising questions around plagiarism and other ethical concerns.

To capture a sense of researchers’ thinking on this topic, Nature posed a variety of scenarios to some 5,000 academics around the world, to understand which uses of AI are considered ethically acceptable.

The survey results suggest that researchers are sharply divided on what they feel are appropriate practices. Whereas academics generally feel it’s acceptable to use AI chatbots to help to prepare manuscripts, relatively few report actually using AI for this purpose — and those who did often say they didn’t disclose it.

Past surveys reveal that researchers also use generative AI tools to help them with coding, to brainstorm research ideas and for a host of other tasks. In some cases, most in the academic community already agree that such applications are either appropriate or, as in the case of generating AI images, unacceptable. Nature’s latest poll focused on writing and reviewing manuscripts — areas in which the ethics aren’t as clear-cut.

A divided landscape

Nature’s survey laid out several scenarios in which a fictional academic, named Dr Bloggs, had used AI without disclosing it — such as to generate the first draft of a paper, to edit their own draft, to craft specific sections of the paper and to translate a paper. Other scenarios involved using AI to write a peer review or to provide suggestions about a manuscript Dr Bloggs was reviewing (see Supplementary information for the full survey, data and methodology; you can also test yourself against some of the survey questions).

Survey participants were asked what they thought was acceptable and whether they had used AI in these situations, or would be willing to. They were not informed about journal policies, because the intent was to reveal researchers’ underlying opinions. The survey was anonymous.

The 5,229 respondents were contacted in March, through e-mails sent to randomly chosen authors of research papers recently published worldwide and to some participants in Springer Nature’s market-research panel of authors and reviewers, or through an invitation from Nature’s daily briefing newsletter. They do not necessarily represent the views of researchers in general, because of inevitable response bias. However, they were drawn from all around the world — of those who stated a country, 21% were from the United States, 10% from India and 8% from Germany, for instance — and represent various career stages and fields. (Authors in China are under-represented, mainly because many didn’t respond to e-mail invitations.)

The survey suggests that current opinions on AI use vary among academics — sometimes widely. Most respondents (more than 90%) think it is acceptable to use generative AI to edit one’s research paper or to translate it. But they differ on whether the AI use needs to be disclosed, and in what format: for instance, through a simple disclosure, or by giving details about the prompts given to an AI tool.

Editing and translating with AI. Bar chart. 35% of respondents think it is appropriate to use an AI tool for help with editing a research article. 14% think it is appropriate to use an AI tool to translate their article into another language.

When it comes to generating text with AI — for instance, to write all or part of one’s paper — views are more divided. In general, a majority (65%) think it is ethically acceptable, but about one-third are against it.

Drafting or summarizing with AI. Bar chart. 13% of respondents think it is appropriate to use an AI tool for help with the first draft of a research article. 16% think it is appropriate to use an AI tool to generate a summary of a research paper.

Asked about using AI to draft specific sections of a paper, most researchers felt it was acceptable to do this for the paper’s abstract, but more were opposed to doing so for other sections.

Opinions about using AI to draft different sections of a paper. Bar chart. Nature’s survey asked people to consider the following scenario: “Dr Bloggs uses an AI tool to write a section of a research paper, then edits the results. They don’t disclose that they used AI.” Respondents showed most concern about using AI for the results section.

Although publishers generally agree that substantive AI use in academic writing should be declared, the response from Nature’s survey suggests that not all researchers have the same opinion, says Alex Glynn, a research literacy and communications instructor at the University of Louisville in Kentucky. “Does the disconnect reflect a lack of familiarity with the issue or a principled disagreement with the publishing community?”

Using AI to generate an initial peer-review report was more frowned upon — with more than 60% of respondents saying it was not appropriate (about one-quarter of these cited privacy concerns). But the majority (57%) felt it was acceptable to use AI to assist in peer review by answering questions about a manuscript.

Opinions on AI in peer review. Bar chart. 5% of respondents think it is appropriate to use an AI tool to conduct an initial peer review of a manuscript. 17% think it is appropriate to ask an AI tool specific questions about a paper they are reviewing.

“I’m glad to see people seem to think using AI to draft a peer-review report is not acceptable, but I’m more surprised by the number of people who seem to think AI assistance for human reviewers is also out of bounds,” says Chris Leonard, a scholarly-communications consultant who writes about developments in AI and peer review in his newsletter, Scalene. (Leonard also works as a director of product solutions at Cactus Communications, a multinational firm in Mumbai, India.) “That hybrid approach is perfect to catch things reviewers may have missed.”

AI still used only by a minority

In general, few academics said they had actually used AI for the scenarios Nature posed. The most popular category was using AI to edit one’s research paper, but only around 28% said they had done this (another 43%, however, said they’d be willing to). Those numbers dropped to around 8% for writing a first draft, making summaries of other articles for use in one’s own paper, translating a paper and supporting peer review.

Experiences with using AI to write papers. Bar chart. 18% of researchers said they had actually used generative AI to edit a paper, 4% to translate a paper, 4% to make summaries and 4% to write a first draft.

A mere 4% of respondents said they’d used AI to conduct an initial peer review.


Overall, about 65% reported that they had never used AI in any of the scenarios given, with people earlier in their careers more likely to have used AI in at least one of them. But when respondents did say they had used AI, they more often than not said they hadn’t disclosed it at the time.

“These results validate what we have also heard from researchers — that there’s great enthusiasm but low adoption of AI to support the research process,” says Josh Jarrett, a senior vice-president at Wiley, the multinational scholarly publisher, which has also surveyed researchers about use of AI.

Split opinions

When given the opportunity to comment on their views, researchers’ opinions varied drastically. On the one hand, some said that the broad adoption of generative AI tools made disclosure unnecessary. “AI will be, if not already is, a norm just like using a calculator,” says Aisawan Petchlorlian, a biomedical researcher at Chulalongkorn University in Bangkok. “‘Disclosure’ will not be an important issue.”

On the other hand, some said that AI use would always be unacceptable. “I will never condone using generative AI for writing or reviewing papers, it is pathetic cheating and fraud,” said an Earth-sciences researcher in Canada.

Others were more ambivalent. Daniel Egan, who studies infectious diseases at the University of Cambridge, UK, says that although AI is a time-saver and excellent at synthesizing complex information from multiple sources, relying on it too heavily can feel like cheating oneself. “By using it, we rob ourselves of the opportunities to learn through engaging with these sometimes laborious processes.”

Respondents also raised a variety of concerns, from ethical questions around plagiarism and breaching trust and accountability in the publishing and peer-review process to worries about AI’s environmental impact.

Some said that although they generally accepted that the use of these tools could be ethical, their own experience revealed that AI often produced sub-par results — false citations, inaccurate statements and, as one person described it, “well-formulated crap”. Respondents also noted that the quality of an AI response could vary widely depending on the specific tool that was used.

There were also some positives: many respondents pointed out that AI could help to level the playing field for academics for whom English was not a first language.

Several also explained why they supported certain uses, but found others unacceptable. “I use AI to self-translate from Spanish to English and vice versa, complemented with intensive editing of the text, but I would never use AI to generate work from scratch because I enjoy the process of writing, editing and reviewing,” says a humanities researcher from Spain. “And I would never use AI to review because I would be horrified to be reviewed by AI.”

Career stage and location

Perhaps surprisingly, academics’ opinions didn’t generally seem to differ widely by their geographical location, research field or career stage. However, respondents’ self-reported experience with AI for writing or reviewing papers did correlate strongly with having favourable opinions of the scenarios, as might be expected.

Career stage did seem to matter when it came to the most popular use of AI — to edit papers. Here, younger researchers were both more likely to think the practice acceptable, and more likely to say they had done it.

Impact of career stage for editing with AI. Bar chart. In many scenarios, respondents’ career stage made little difference to reported AI use, or opinions on its appropriateness. However, younger respondents were more in favour of using AI to edit articles, and more likely to say they had done it.

And respondents from countries where English is not a first language were generally more likely than those in English-speaking nations to have used AI in the scenarios. Their underlying opinions on the ethics of AI use, however, did not seem to differ greatly.

AI use: Impact of country. Bar chart. Few clear trends were discernible by country or world region. However, for some scenarios, respondents based in countries where English was not a first language were more likely to say that they had used AI (although there was less difference in opinions about the ethics of using AI).

Related surveys

Various researchers and publishers have conducted surveys of AI use in the academic community, looking broadly at how AI might be used in the scientific process. In January, Jeremy Ng, a health researcher at the Ottawa Hospital Research Institute in Canada, and his colleagues published a survey of more than 2,000 medical researchers, in which 45% of respondents said they had previously used AI chatbots (J. Y. Ng et al. Lancet Digit. Health 7, e94–e102; 2025). Of those, more than two-thirds said they had used it for writing or editing manuscripts — meaning that, overall, around 31% of the people surveyed had used AI for this purpose. That is slightly more than in Nature’s survey.

“Our findings revealed enthusiasm, but also hesitation,” Ng says. “They really reinforced the idea that there’s not a lot of consensus around how, where or for what these chatbots should be used for scientific research.”

In February, Wiley published a survey examining AI use in academia by nearly 5,000 researchers around the world (see go.nature.com/438yngu). Among other findings, this revealed that researchers felt most uses of AI (such as writing up documentation and increasing the speed and ease of peer review) would be commonly accepted in the next few years. But less than half of the respondents said they had actually used AI for work, with 40% saying they’d used it for translation and 38% for proofreading or editing of papers.





I asked ChatGPT to help me pack for my vacation – try this awesome AI prompt that makes planning your travel checklist stress-free


It’s that time of year again, when those of us in the northern hemisphere pack our sunscreen and get ready to venture to hotter climates in search of some much-needed Vitamin D.

Every year, I book a vacation, and every year I get stressed as the big day gets closer, usually forgetting to pack something essential, like a charger for my Nintendo Switch 2, or dare I say it, my passport.





Denodo Announces Plans to Further Support AI Innovation by Releasing Denodo DeepQuery, a Deep Research Capability


PALO ALTO, Calif., July 07, 2025 (GLOBE NEWSWIRE) — Denodo, a leader in data management, announced the Denodo DeepQuery capability, now available as a private preview and soon to be generally available, enabling generative AI (GenAI) to go beyond retrieving facts to investigating, synthesizing, and explaining its reasoning. Denodo also announced the availability of Model Context Protocol (MCP) support as part of the Denodo AI SDK.

Built to address complex, open-ended business questions, DeepQuery will leverage live access to a wide spectrum of governed enterprise data across systems, departments, and formats. Unlike traditional GenAI solutions, which rephrase existing content, DeepQuery, a deep research capability, will analyze complex, open questions and search across multiple systems and sources to deliver well-structured, explainable answers rooted in real-time information. To help users operate this new capability to better understand complex current events and situations, DeepQuery will also leverage external data sources to extend and enrich enterprise data with publicly available data, external applications, and data from trading partners.

DeepQuery, beyond what’s possible using traditional generative AI (GenAI) chat or retrieval augmented generation (RAG), will enable users to ask complex, cross-functional questions that would typically take analysts days to answer — questions like, “Why did fund outflows spike last quarter?” or “What’s driving changes in customer retention across regions?” Rather than piecing together reports and data exports, DeepQuery will connect to live, governed data across different systems, apply expert-level reasoning, and deliver answers in minutes.

Slated to be packaged with the Denodo AI SDK, which streamlines AI application development with pre-built APIs, DeepQuery is being developed as a fully extensible component of the Denodo Platform, enabling developers and AI teams to build, experiment with, and integrate deep research capabilities into their own agents, copilots, or domain-specific applications.

“With DeepQuery, Denodo is demonstrating forward-thinking in advancing the capabilities of AI,” said Stewart Bond, Research VP, Data Intelligence and Integration Software at IDC. “DeepQuery, driven by deep research advances, will deliver more accurate AI responses that will also be fully explainable.”

Large language models (LLMs), business intelligence tools, and other applications are beginning to offer deep research capabilities based on public Web data; pre-indexed, data-lakehouse-specific data; or document-based retrieval. Only Denodo, however, is developing deep research capabilities, in the form of DeepQuery, that are grounded in enterprise data across all systems, delivered in real time, structured, and governed. These capabilities are enabled by the Denodo Platform’s logical approach to data management, supported by a strong data virtualization foundation.

Denodo DeepQuery is currently available in private preview. Denodo is inviting select organizations to join its AI Accelerator Program, which offers early access to DeepQuery capabilities, as well as the opportunity to collaborate with Denodo’s product team to shape the future of enterprise GenAI.

“As a Denodo partner, we’re always looking for ways to provide our clients with a competitive edge,” said Nagaraj Sastry, Senior Vice President, Data and Analytics at Encora. “Denodo DeepQuery gives us exactly that. Its ability to leverage real-time, governed enterprise data for deep, contextualized insights sets it apart. This means we can help our customers move beyond general AI queries to truly intelligent analysis, empowering them to make faster, more informed decisions and accelerating their AI journey.”

Denodo also announced support for the Model Context Protocol (MCP), and an MCP Server implementation is now included in the latest version of the Denodo AI SDK. As a result, all AI agents and apps based on the Denodo AI SDK can be integrated with any MCP-compliant client, providing customers with a trusted data foundation for their agentic AI ecosystems based on open standards.
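MCP is an open, JSON-RPC-based protocol for connecting AI agents to tools and data sources. As a rough, hypothetical sketch of what talking to an MCP-compliant server can look like at the wire level, the Python snippet below issues a "tools/call" request over an HTTP transport; the endpoint URL, tool name and argument schema are illustrative placeholders and are not taken from the Denodo AI SDK documentation.

    import json
    import urllib.request

    def call_mcp_tool(endpoint, tool_name, arguments):
        """Send a JSON-RPC 2.0 'tools/call' request to an MCP server over HTTP."""
        payload = {
            "jsonrpc": "2.0",
            "id": 1,
            "method": "tools/call",
            "params": {"name": tool_name, "arguments": arguments},
        }
        request = urllib.request.Request(
            endpoint,
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(request) as response:
            return json.loads(response.read().decode("utf-8"))

    # Hypothetical usage; the URL, tool name and arguments are placeholders.
    # answer = call_mcp_tool("http://localhost:8080/mcp", "answer_question",
    #                        {"question": "Why did fund outflows spike last quarter?"})

Because the request and response formats are standardized, any agent framework that speaks MCP could, in principle, call such a server without custom integration code.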

“AI’s true potential in the enterprise lies not just in generating responses, but in understanding the full context behind them,” said Angel Viña, CEO and Founder of Denodo. “With DeepQuery, we’re unlocking that potential by combining generative AI with real-time, governed access to the entire corporate data ecosystem, no matter where that data resides. Unlike siloed solutions tied to a single store, DeepQuery leverages enriched, unified semantics across distributed sources, allowing AI to reason, explain, and act on data with unprecedented depth and accuracy.”

Additional Information

  • Denodo Platform: What’s New
  • Blog Post: Smarter AI Starts Here: Why DeepQuery Is the Next Step in GenAI Maturity
  • Demo: Watch a short video of this capability in action.

About Denodo

Denodo is a leader in data management. The award-winning Denodo Platform is the leading logical data management platform for transforming data into trustworthy insights and outcomes for all data-related initiatives across the enterprise, including AI and self-service. Denodo’s customers in all industries all over the world have delivered trusted AI-ready and business-ready data in a third of the time and with 10x better performance than with lakehouses and other mainstream data platforms alone. For more information, visit denodo.com.

Media Contacts

pr@denodo.com





Sakana AI: Think LLM dream teams, not single models


Enterprises may want to start thinking of large language models (LLMs) as ensemble casts that can combine knowledge and reasoning to complete tasks, according to Japanese AI lab Sakana AI.

In a research paper, Sakana AI outlined a method called Multi-LLM AB-MCTS (Adaptive Branching Monte Carlo Tree Search) that uses a collection of LLMs to cooperate, perform trial-and-error and leverage their individual strengths to solve complex problems.

In a post, Sakana AI said:

“Frontier AI models like ChatGPT, Gemini, Grok, and DeepSeek are evolving at a breathtaking pace amidst fierce competition. However, no matter how advanced they become, each model retains its own individuality stemming from its unique training data and methods. We see these biases and varied aptitudes not as limitations, but as precious resources for creating collective intelligence. Just as a dream team of diverse human experts tackles complex problems, AIs should also collaborate by bringing their unique strengths to the table.”

Sakana AI said AB-MCTS is a method for inference-time scaling that enables frontier AIs to cooperate and to revisit problems and solutions. Sakana AI released the algorithm as an open source framework called TreeQuest, which has a flexible API that allows users to apply AB-MCTS to tasks with multiple LLMs and custom scoring.
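TreeQuest's actual API is not reproduced here, but the toy Python sketch below illustrates the general idea of inference-time search across several models: each candidate answer is scored, and the most promising one is fed back to the pool of models as a hint for the next round. It is a greedy, best-first simplification of adaptive branching search, with placeholder model callables and a placeholder scoring function; it is not Sakana AI's implementation.

    import heapq

    def multi_llm_search(question, models, score, iterations=10):
        # models: dict mapping a model name to a generate(prompt) -> str callable.
        # score: maps an answer string to a float, higher is better (e.g. test results).
        counter = 0
        frontier = [(0.0, counter, question)]      # entries are (-score, tiebreaker, text)
        best_answer, best_score = None, float("-inf")

        for _ in range(iterations):
            _, _, hint = heapq.heappop(frontier)   # most promising node found so far
            for generate in models.values():
                # Every model gets a chance to refine the current best attempt.
                answer = generate(f"Question: {question}\nPrevious attempt: {hint}")
                s = score(answer)
                if s > best_score:
                    best_answer, best_score = answer, s
                counter += 1
                heapq.heappush(frontier, (-s, counter, answer))
        return best_answer

With, say, a test-suite-based scoring function, a loop like this reproduces the trial-and-error, hint-passing behaviour described in the quote above, while leaving open which models to combine.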

What’s interesting is that Sakana AI gets out of that zero-sum LLM argument. The companies behind LLM training would like you to think there’s one model to rule them all. And you’d do the same if you were spending so much on training models and wanted to lock in customers for scale and returns.

Sakana AI’s deceptively simple solution can only come from a company that’s not trying to play LLM leapfrog every few minutes. The power of AI is in the ability to maximize the potential of each LLM. Sakana AI said:

“We saw examples where problems that were unsolvable by any single LLM were solved by combining multiple LLMs. This went beyond simply assigning the best LLM to each problem. In (an) example, even though the solution initially generated by o4-mini was incorrect, DeepSeek-R1-0528 and Gemini-2.5-Pro were able to use it as a hint to arrive at the correct solution in the next step. This demonstrates that Multi-LLM AB-MCTS can flexibly combine frontier models to solve previously unsolvable problems, pushing the limits of what is achievable by using LLMs as a collective intelligence.”

A few thoughts:

  • Sakana AI’s research and its move to emphasize collective intelligence over one LLM and stack is critical for enterprises that need to create architectures that don’t lock them into one provider.
  • AB-MCTS could play into what agentic AI needs to become to be effective and complement emerging standards such as Model Context Protocol (MCP) and Agent2Agent.
  • If combining multiple models to solve problems becomes frictionless, the costs will plunge. Will you need to pay up for OpenAI when you can leverage LLMs like DeepSeek combined with Gemini and a few others? 
  • Enterprises may want to start thinking about how to build decision engines instead of an overall AI stack. 
  • We could see a scenario where a collective of LLMs achieves superintelligence before any one model or provider. If that scenario plays out, can LLM giants maintain valuations?
  • The value in AI may not be in the infrastructure or foundational models in the long run, but in the architecture and approaches.

