AI Research
Nanomaterials research in the age of AI-generated images

With simple prompts it is possible to generate fake microscopy images of nanomaterials that are virtually indistinguishable from real images. Should we worry?
In a sobering Comment article published in this issue, several academics raise concerns about the misuse of generative artificial intelligence (AI), specifically in nanomaterials synthesis papers. Using simple prompts and just a few hours of training, the authors show that an AI tool can produce atomic force microscopy and electron microscopy images of nanomaterials that are indistinguishable from the real ones. They also show AI-generated images of ‘fantasy nanomaterials’ (for example, ‘nanocheetos’). Readers are encouraged to test whether they can distinguish between the real and the fake images.
Whilst its findings are unsurprising, this Comment serves as a stark reminder of how easily fake microscopy images can now be produced. The pressing question for the scientific community is whether researchers will use AI to generate fake images in papers. What can be done against this unethical use of generative AI?
The best place to start is education. The formal training of a professional scientist begins during the PhD, but bachelor’s and master’s degree students already acquire behaviours from their surroundings. A healthy lab culture that emphasizes scientific rigour, attention to detail and good practice, such as careful data handling and curation, goes a long way towards forging generations of scientists who understand what is and is not acceptable in science. Research integrity courses should be mandatory in all PhD programmes worldwide. Whether there are enough qualified instructors to deliver them is another matter.
As a global endeavour that feeds on the exchange of ideas among international collaborators, scientific research has developed a shared set of ethical behaviours1,2. Misconduct centres on three main practices: plagiarism, falsification and fabrication. AI-generated microscopy images, like those shown in the Comment, would constitute image fabrication.
Whilst it is concerning that not even a highly trained human can recognize fake AI-generated images, we should also note that AI tools can be used to identify them. Indeed, many publishers, including Springer Nature, use AI tools to detect image fabrication, falsification and plagiarism3. In Nature and the Nature Portfolio journals, life-science papers are routinely screened with a commercial AI tool (Proofig) prior to acceptance. If potential image manipulation is detected, authors are asked to resolve the identified problems. A similar process is in place in the Science journal family4.
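How such screening works is proprietary in tools like Proofig, but one widely used building block is perceptual hashing, which flags image panels that reappear across figures or papers even after rescaling or recompression. The Python sketch below is a minimal illustration of that one technique, assuming the open-source Pillow and imagehash packages and a hypothetical folder of manuscript figures; it is not the method any publisher is confirmed to use.

```python
# Minimal sketch of duplicate-image screening via perceptual hashing.
# Assumes the third-party 'imagehash' and 'Pillow' packages; illustrative
# only, not the pipeline Proofig or any publisher actually runs.
from itertools import combinations
from pathlib import Path

import imagehash
from PIL import Image

HAMMING_THRESHOLD = 6  # max bit difference to flag a pair as suspiciously similar


def screen_for_duplicates(image_dir: str) -> list[tuple[str, str, int]]:
    """Return pairs of images whose perceptual hashes nearly collide."""
    hashes = {}
    for path in Path(image_dir).glob("*.png"):
        # pHash is robust to rescaling and mild compression, so re-used
        # panels survive the edits that defeat byte-level comparison.
        hashes[path.name] = imagehash.phash(Image.open(path))

    flagged = []
    for (name_a, hash_a), (name_b, hash_b) in combinations(hashes.items(), 2):
        distance = hash_a - hash_b  # Hamming distance between the two hashes
        if distance <= HAMMING_THRESHOLD:
            flagged.append((name_a, name_b, distance))
    return flagged


if __name__ == "__main__":
    # 'manuscript_figures' is a hypothetical folder of submitted figures.
    for a, b, d in screen_for_duplicates("manuscript_figures"):
        print(f"Review needed: {a} vs {b} (hash distance {d})")
```

Spotting wholly AI-generated images is a harder problem than spotting re-use: it typically relies on forensic classifiers trained to pick up the statistical fingerprints of generative models, and it remains an arms race rather than a solved task.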
Importantly, peer review, in which fellow researchers evaluate research for validity, ethical design and merit, was never designed to catch fraudsters. We do not ask our reviewers to examine data for possible manipulation or to repeat experiments, because science is based on trust. And it should remain that way. Retaining trust in science is a collective responsibility and requires contributions from researchers, publishers, universities, research-based businesses, and government and non-government bodies alike. Stronger collaboration between AI-tool developers and research-integrity experts needs to be fostered.
Publishers are being called on to check that what is published is reproducible, trustworthy science. In Nature Portfolio journals, the submission-to-publication journey of a manuscript includes reporting summaries, checklists for specific topics (for example, lasers or solar cells), enabled or mandated data deposition, quality checks and careful editing to moderate conclusions, all with no or minimal reviewer involvement. For post-publication concerns, Springer Nature has a dedicated research integrity team that oversees policies and procedures in accordance with the guidelines of COPE (Committee on Publication Ethics) and investigates such cases.
The sophistication of images produced using AI tools means that copying and pasting noise traces or cropping out unwanted parts of an image is now obsolete. But in the age of AI too, the words of Richard Feynman loom large5: “We’ve learned from experience that the truth will come out. Other experimenters will repeat your experiment and find out whether you were wrong or right. Nature’s phenomena will agree or they’ll disagree with your theory. And, although you may gain some temporary fame and excitement, you will not gain a good reputation as a scientist if you haven’t tried to be very careful in this kind of work.”
AI arrives at a fertile time in the history of science, when high-throughput experiments generate big datasets that the human brain struggles to process, and science-driven policies are needed to address pressing and complex societal issues. The potential of AI tools is still to be fully appreciated by researchers, but every field will be profoundly transformed by their use6. Researchers should become adept at using AI tools to increase their creativity and productivity, rather than generate fake results.
AI Research
Pentagon research official wants to have AI on every desktop in 6 to 9 months

The Pentagon is angling to introduce artificial intelligence across its workforce within nine months following the reorganization of its key AI office.
Emil Michael, under secretary of defense for research and engineering at the Department of Defense, talked about the agency’s plans for introducing AI to its operations as it continues its modernization journey.
“We want to have an AI capability on every desktop — 3 million desktops — in six or nine months,” Michael said during a Politico event on Tuesday. “We want to have it focus on applications for corporate use cases like efficiency, like you would use in your own company … for intelligence and for warfighting.”
This announcement follows the recent shakeups and restructuring of the Pentagon’s main artificial intelligence office. A senior defense official said the Chief Digital and Artificial Intelligence Office will serve as a new addition to the department’s research portfolio.
Michael also said he is “excited” about the restructured CDAO, adding that its new role will pivot to a research focus similar to that of the Defense Advanced Research Projects Agency and the Missile Defense Agency. The change is intended to advance research and engineering on AI for the armed forces without taking the agency’s focus away from AI deployment and innovation.
“To add AI to that portfolio means it gets a lot of muscle to it,” he said. “So I’m spending at least a third of my time — maybe half — rethinking how the AI deployment strategy is going to be at DOD.”
According to Michael, applications coming out of the CDAO and related agencies will be tailored to corporate workloads, such as efficiency-related work, as well as to intelligence and warfighting needs.
The Pentagon first stood up the CDAO and brought on its first chief digital and artificial intelligence officer in 2022 to advance the agency’s AI efforts.
The restructuring of the CDAO this year garnered attention because of the office’s pivotal role in investigating defense applications of emerging technologies and in defense acquisition activities. Job cuts within the office added another layer of concern, with reports estimating a 60% reduction in the CDAO workforce.
AI Research
Pentagon CTO wants AI on every desktop in 6 to 9 months

The Pentagon aims to get AI tools to its entire workforce next year, the department’s chief technology officer said one month after being given control of its main AI office.
“We want to have an AI capability on every desktop — 3 million desktops — in six or nine months,” Emil Michael, defense undersecretary for research and engineering, said at a Politico event on Tuesday. “We want to have it focus on applications for corporate use cases like efficiency, like you would use in your own company…for intelligence and for warfighting.”
Four weeks ago, the Chief Digital and Artificial Intelligence Office was demoted: rather than reporting to Deputy Defense Secretary Stephen Feinberg, it now reports to Michael, one of Feinberg’s subordinates.
Michael said the CDAO will become a research body like the Defense Advanced Research Projects Agency and the Missile Defense Agency. He said the change is meant to boost research and engineering on AI for the military, not to reduce its efforts to deploy AI and innovate.
“To add AI to that portfolio means it gets a lot of muscle to it,” he said. “So I’m spending at least a third of my time—maybe half—rethinking how the AI-deployment strategy is going to be at DOD.”
He said applications would emerge from the CDAO and related agencies that will be tailored to corporate workloads.
The Pentagon created the CDAO in 2022 to advance the agency’s AI efforts and look into defense applications for emerging technologies. The office’s restructuring earlier this year garnered attention. Job cuts within the office added another layer of concern, with reports estimating a 60% reduction in the CDAO workforce.
AI Research
Panelists Will Question Who Controls AI | ACS CC News
Artificial intelligence (AI) has become one of the fastest-growing technologies in the world today. In many industries, individuals and organizations are racing to better understand AI and incorporate it into their work. Surgery is no exception, and that is why Clinical Congress 2025 has made AI one of the six themes of its Opening Day Thematic Sessions.
The first full day of the conference, Sunday, October 5, will include two back-to-back Panel Sessions on AI. The first session, “Using ChatGPT and AI for Beginners” (PS104), offers a foundation for surgeons not yet well versed in AI. The second, “AI: Who Is In Control?” (PS110), will offer insights into the potential upsides and drawbacks of AI use, as well as its limitations and possible future applications, so that surgeons can incorporate this technology into their clinical care safely and effectively.
“AI: Who Is In Control?” will be moderated by Anna N. Miller, MD, FACS, an orthopaedic surgeon at Dartmouth Hitchcock Medical Center in Lebanon, New Hampshire, and Gabriel Brat, MD, MPH, MSc, FACS, a trauma and acute care surgeon at Beth Israel Deaconess Medical Center and an assistant professor at Harvard Medical School, both in Boston, Massachusetts.
In an interview, Dr. Brat shared his view that the use of AI is not likely to replace surgeons or decrease the need for surgical skills or decision-making. “It’s not an algorithm that’s going to be throwing the stitch. It’s still the surgeon.”
Nonetheless, he said that the starting presumption of the session is that AI is likely to be highly transformative to the profession over time.
“Once it has significant uptake, it’ll really change elements of how we think about surgery,” he said, including creating meaningful opportunities for improvements.
The key question of the session, therefore, is not whether to engage with AI, but how to do so in ways that ensure the best outcomes: “We as surgeons need to have a role in defining how to do so safely and effectively. Otherwise, people will start to use these tools, and we will be swept along with a movement as opposed to controlling it.”
To that end, Dr. Brat explained that the session will offer “a really strong translational focus by people who have been in the trenches working with these technologies.” He and Dr. Miller have specifically chosen an “all-star panel” designed to represent academia, healthcare associations, and industry.
The panelists include Rachael A. Callcut, MD, MSPH, FACS, who is the division chief of trauma, acute care surgery and surgical critical care as well as associate dean of data science and innovation at the University of California-Davis Health in Sacramento, California. She will share the perspective on AI from academic surgery.
Genevieve Melton-Meaux, MD, PhD, FACS, FACMI, the inaugural ACS Chief Health Informatics Officer, will present on AI usage in healthcare associations. She also is a colorectal surgeon and the senior associate dean for health informatics and data science at the University of Minnesota and chief health informatics and AI officer for Fairview Health Services, both in Minneapolis.
Finally, Khan Siddiqui, MD, a radiologist and serial entrepreneur who is the cofounder, chairman, and CEO of HOPPR AI, will present the view from industry. HOPPR AI is a for-profit company focused on building AI apps for medical imaging. As a radiologist, Dr. Siddiqui represents a medical specialty that is expected to undergo sweeping change as AI is incorporated into image reading and diagnosis. His comments will focus on professional insights relevant to surgeons.
Their presentations will provide insights on general usage of AI at present, as well as predictions on what the landscape for AI in healthcare will look like in approximately 5 years. The session will include advice on what approaches to AI may be most effective for surgeons interested in ensuring positive outcomes and avoiding negative ones.
AI is a recurring theme throughout Clinical Congress 2025. In addition to the various sessions that will address AI across the 4 days of the conference, researchers will present studies that involve AI in their methods, starting presumptions, and/or potential applications to practice.
Access the Interactive Program Planner for more details about Clinical Congress 2025 sessions.