AI Research
DeepMind’s latest research at NeurIPS 2022

Advancing best-in-class large models, compute-optimal RL agents, and more transparent, ethical, and fair AI systems
The Thirty-sixth Conference on Neural Information Processing Systems (NeurIPS 2022) is taking place from 28 November to 9 December 2022 as a hybrid event based in New Orleans, USA.
NeurIPS is the world’s largest conference in artificial intelligence (AI) and machine learning (ML), and we’re proud to support the event as Diamond sponsors, helping foster the exchange of research advances in the AI and ML community.
Teams from across DeepMind are presenting 47 papers, 35 of them external collaborations, in virtual panels and poster sessions. Here’s a brief introduction to some of the research we’re presenting:
Best-in-class large models
Large models (LMs) – generative AI systems trained on huge amounts of data – have delivered remarkable performance in areas including language, text, audio, and image generation. Part of their success is down to their sheer scale.
However, with Chinchilla, we created a 70 billion parameter language model that outperforms many larger models, including Gopher. We updated the scaling laws of large models, showing that previously trained models were too large for the amount of training data they received. This work has already shaped other models built to these updated rules – leaner, better models – and won an Outstanding Main Track Paper award at the conference.
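As a rough illustration of the updated scaling laws, the popular takeaway from the Chinchilla work is that a compute-optimal model should see training tokens in proportion to its parameter count – commonly approximated as about 20 tokens per parameter. A minimal sketch, assuming only that rule of thumb (the constant is an approximation, not the paper’s exact fitted law):

```python
# Sketch of the "Chinchilla" compute-optimal rule of thumb: training
# tokens should scale linearly with parameters, roughly 20 tokens per
# parameter. The constant is an approximation, not the paper's exact fit.
def compute_optimal_tokens(n_params: float, tokens_per_param: float = 20.0) -> float:
    """Approximate number of training tokens for a compute-optimal model."""
    return n_params * tokens_per_param

# Chinchilla's 70 billion parameters imply roughly 1.4 trillion
# training tokens -- far more than earlier models of similar size saw.
print(f"{compute_optimal_tokens(70e9):.2e}")
```

Under this heuristic, a smaller model trained on more data can match or beat a much larger, under-trained one, which is exactly the Chinchilla-versus-Gopher result described above.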
Building upon Chinchilla and our multimodal models NFNets and Perceiver, we also present Flamingo, a family of few-shot learning visual language models. Handling images, videos and textual data, Flamingo represents a bridge between vision-only and language-only models. A single Flamingo model sets a new state of the art in few-shot learning on a wide range of open-ended multimodal tasks.
And yet, scale and architecture aren’t the only factors that are important for the power of transformer-based models. Data properties also play a significant role, which we discuss in a presentation on data properties that promote in-context learning in transformer models.
Optimising reinforcement learning
Reinforcement learning (RL) has shown great promise as an approach to creating generalised AI systems that can address a wide range of complex tasks. It has led to breakthroughs in many domains from Go to mathematics, and we’re always looking for ways to make RL agents smarter and leaner.
We introduce a new approach that boosts the decision-making abilities of RL agents in a compute-efficient way by drastically expanding the scale of information available for their retrieval.
We’ll also showcase a conceptually simple yet general approach for curiosity-driven exploration in visually complex environments – an RL agent called BYOL-Explore. It achieves superhuman performance while being robust to noise and being much simpler than prior work.
Algorithmic advances
From compressing data to running simulations for predicting the weather, algorithms are a fundamental part of modern computing. And so, incremental improvements can have an enormous impact when working at scale, helping save energy, time, and money.
We share a radically new and highly scalable method for the automatic configuration of computer networks, based on neural algorithmic reasoning, showing that our highly flexible approach is up to 490 times faster than the current state of the art, while satisfying the majority of the input constraints.
During the same session, we also present a rigorous exploration of the previously theoretical notion of “algorithmic alignment”, highlighting the nuanced relationship between graph neural networks and dynamic programming, and how best to combine them for optimising out-of-distribution performance.
Pioneering responsibly
At the heart of DeepMind’s mission is our commitment to act as responsible pioneers in the field of AI, developing systems that are transparent, ethical, and fair.
Explaining and understanding the behaviour of complex AI systems is an essential part of creating fair, transparent, and accurate systems. We offer a set of desiderata that capture those ambitions, and describe a practical way to meet them, which involves training an AI system to build a causal model of itself, enabling it to explain its own behaviour in a meaningful way.
To act safely and ethically in the world, AI agents must be able to reason about harm and avoid harmful actions. We’ll introduce collaborative work on a novel statistical measure called counterfactual harm, and demonstrate how it overcomes problems with standard approaches, helping agents avoid pursuing harmful policies.
Finally, we’re presenting our new paper which proposes ways to diagnose and mitigate failures in model fairness caused by distribution shifts, showing how important these issues are for the deployment of safe ML technologies in healthcare settings.
See the full range of our work at NeurIPS 2022 here.
Dogs and drones join forest battle against eight-toothed beetle

Esme Stallard and Justin Rowlatt, Climate and science team

It is smaller than your fingernail, but this hairy beetle is one of the biggest single threats to the UK’s forests.
The bark beetle has been the scourge of Europe, killing millions of spruce trees, yet the government thought it could halt its spread to the UK by checking imported wood products at ports.
But ports were not the beetles’ main entry route – they were being carried on winds straight over the English Channel.
Now, UK government scientists have been fighting back, with an unusual arsenal including sniffer dogs, drones and nuclear waste models.
They claim the UK has eradicated the beetle from at-risk areas in the east and south-east. But climate change could make the job even harder in the future.
The spruce bark beetle, or Ips typographus, has been munching its way through the conifer trees of Europe for decades, leaving behind a trail of destruction.
The beetles rear and feed their young under the bark of spruce trees in complex webs of interweaving tunnels called galleries.
When trees are infested with a few thousand beetles, they can cope, using resin to flush the beetles out.
But a stressed tree’s natural defences are reduced, and the beetles start to multiply.
“Their populations can build to a point where they can overcome the tree defences – there are millions, billions of beetles,” explained Dr Max Blake, head of tree health at the UK government-funded Forestry Research.
“There are so many the tree cannot deal with them, particularly when it is dry, they don’t have the resin pressure to flush the galleries.”
Since the beetle took hold in Norway over a decade ago, it has wiped out 100 million cubic metres of spruce, according to Rothamsted Research.
‘Public enemy number one’
As Sitka spruce is the main tree used for timber in the UK, Dr Blake and his colleagues watched developments on continental Europe with some serious concern.
“We have 725,000 hectares of spruce alone, if this beetle was allowed to get hold of that, the destructive potential means a vast amount of that is at risk,” said Andrea Deol at Forestry Research. “We valued it – and it’s a partial valuation at £2.9bn per year in Great Britain.”
There are more than 1,400 pests and diseases on the government’s plant health risk register, but Ips has been labelled “public enemy number one”.
The number of those pests and diseases has been rising rapidly, according to Nick Phillips at the charity The Woodland Trust.
“Predominantly, the reason for that is global trade, we’re importing wood products, trees for planting, which does sometimes bring ‘hitchhikers’ in terms of pests and disease,” he said.
Forestry Research had been working with border control for years to check such products for Ips, but in 2018 made a shocking discovery in a wood in Kent.
“We found a breeding population that had been there for a few years,” explained Ms Deol.
“Later we started to pick up larger volumes of beetles in [our] traps which seemed to suggest they were arriving by other means. All of the research we have done now has indicated they are being blown over from the continent on the wind,” she added.

The team knew it had to act quickly, and has been deploying a mixture of techniques that wouldn’t look out of place in a military operation.
Drones are sent up to survey hundreds of hectares of forest, looking for signs of infestation from the sky – as the beetle takes hold, the upper canopy of the tree cannot be fed nutrients and water, and begins to die off.
But next is the painstaking work of entomologists going on foot to inspect the trees themselves.
“They are looking for a needle in a haystack, sometimes looking for single beetles – to get hold of the pioneer species before they are allowed to establish,” Andrea Deol said.
In a single year her team have inspected 4,500 hectares of spruce on the public estate – just shy of 7,000 football pitches.
Such physically demanding work is difficult to sustain, and the team has been looking for assistance from the natural and tech worlds alike.

When pioneer spruce bark beetles find a suitable host tree, they release pheromones – chemical signals that attract fellow beetles and establish a colony.
But it is this strong smell, as well as the smell associated with their insect poo – frass – that makes them ideal to be found by sniffer dogs.
Early trials have been successful. The dogs are particularly useful for checking large timber stacks, which can be difficult to inspect visually.
The team is also deploying cameras on their bug traps, which are now able to scan daily for the beetles and identify them in real time.
“We have [created] our own algorithm to identify the insects. We have taken about 20,000 images of Ips, other beetles and debris, which have been formally identified by entomologists, and fed it into the model,” said Dr Blake.
Some of the traps are in difficult-to-access areas, and had previously been checked only weekly by entomologists working on the ground.
The result of this work is that the UK has been confirmed as the first country to eradicate Ips typographus from its controlled areas – the parts of south-east and east England deemed at risk of infestation.
“What we are doing is having a positive impact and it is vital that we continue to maintain that effort, if we let our guard down we know we have got those incursion risks year on year,” said Ms Deol.

And those risks are rising. Europe has seen populations of Ips increase as they take advantage of trees stressed by the changing climate.
Europe is experiencing more extreme winter rainfall and milder temperatures, meaning less freezing and leaving trees in waterlogged conditions.
Coupled with drier summers, this leaves them stressed and susceptible to falling in stormy weather – and that is when Ips can take hold.
With larger populations in Europe the risk of Ips colonies being carried to the UK goes up.
The team at Forestry Research has been working hard to accurately predict when these incursions may occur.
“We have been doing modelling with colleagues at the University of Cambridge and the Met Office which have adapted a nuclear atmospheric dispersion model to Ips,” explained Dr Blake. “So, [the model] was originally used to look at nuclear fallout and where the winds take it, instead we are using the model to look at how far Ips goes.”
Nick Phillips at The Woodland Trust is strongly supportive of the government’s work but worries about the loss of ancient woodland – the oldest and most biologically-rich areas of forest.
Commercial spruce has long been planted next to such woods, and every time a tree hosting the spruce bark beetle is found, it and neighbouring trees, sometimes ancient ones, have to be removed.
“We really want the government to maintain as much of the trees as they can, particularly the ones that aren’t affected, and then also when the trees are removed, supporting landowners to take steps to restore what’s there,” he said. “So that they’re given grants, for example, to be able to recover the woodland sites.”
The government has increased funding for woodlands in recent years but this has been focused on planting new trees.
“If we only have funding and support for the first few years of a tree’s life, but not for those woodlands that are a century or more old, then we’re not going to be able to deliver nature recovery and capture carbon,” he said.
Additional reporting Miho Tanaka

AI replaces excuses for innovation, not jobs
AI isn’t here to replace jobs; it’s here to eliminate outdated practices and empower entrepreneurs to innovate faster and smarter than ever before.
AI Tool Flags Predatory Journals, Building a Firewall for Science

Summary: A new AI system developed by computer scientists automatically screens open-access journals to identify potentially predatory publications. These journals often charge high fees to publish without proper peer review, undermining scientific credibility.
The AI analyzed over 15,000 journals and flagged more than 1,000 as questionable, offering researchers a scalable way to spot risks. While the system isn’t perfect, it serves as a crucial first filter, with human experts making the final calls.
Key Facts
- Predatory Publishing: Journals exploit researchers by charging fees without quality peer review.
- AI Screening: The system flagged over 1,000 suspicious journals out of 15,200 analyzed.
- Firewall for Science: Helps preserve trust in research by protecting against bad data.
Source: University of Colorado
A team of computer scientists led by the University of Colorado Boulder has developed a new artificial intelligence platform that automatically seeks out “questionable” scientific journals.
The study, published Aug. 27 in the journal Science Advances, tackles an alarming trend in the world of research.
Daniel Acuña, lead author of the study and associate professor in the Department of Computer Science, gets a reminder of that several times a week in his email inbox: These spam messages come from people who purport to be editors at scientific journals, usually ones Acuña has never heard of, and offer to publish his papers—for a hefty fee.
Such publications are sometimes referred to as “predatory” journals. They target scientists, convincing them to pay hundreds or even thousands of dollars to publish their research without proper vetting.
“There has been a growing effort among scientists and organizations to vet these journals,” Acuña said. “But it’s like whack-a-mole. You catch one, and then another appears, usually from the same company. They just create a new website and come up with a new name.”
His group’s new AI tool automatically screens scientific journals, evaluating their websites and other online data for certain criteria: Do the journals have an editorial board featuring established researchers? Do their websites contain a lot of grammatical errors?
Acuña emphasizes that the tool isn’t perfect. Ultimately, he thinks human experts, not machines, should make the final call on whether a journal is reputable.
But in an era when prominent figures are questioning the legitimacy of science, stopping the spread of questionable publications has become more important than ever before, he said.
“In science, you don’t start from scratch. You build on top of the research of others,” Acuña said. “So if the foundation of that tower crumbles, then the entire thing collapses.”
The shakedown
When scientists submit a new study to a reputable publication, that study usually undergoes a practice called peer review. Outside experts read the study and evaluate it for quality—or, at least, that’s the goal.
A growing number of companies have sought to circumvent that process to turn a profit. In 2009, Jeffrey Beall, a librarian at CU Denver, coined the phrase “predatory” journals to describe these publications.
Often, they target researchers outside of the United States and Europe, such as in China, India and Iran—countries where scientific institutions may be young, and the pressure and incentives for researchers to publish are high.
“They will say, ‘If you pay $500 or $1,000, we will review your paper,’” Acuña said. “In reality, they don’t provide any service. They just take the PDF and post it on their website.”
A few different groups have sought to curb the practice. Among them is a nonprofit organization called the Directory of Open Access Journals (DOAJ).
Since 2003, volunteers at the DOAJ have flagged thousands of journals as suspicious based on six criteria. (Reputable publications, for example, tend to include a detailed description of their peer review policies on their websites.)
But keeping pace with the spread of those publications has been daunting for humans.
To speed up the process, Acuña and his colleagues turned to AI. The team trained its system using the DOAJ’s data, then asked the AI to sift through a list of nearly 15,200 open-access journals on the internet.
Among those journals, the AI initially flagged more than 1,400 as potentially problematic.
Acuña and his colleagues asked human experts to review a subset of the suspicious journals. The AI made mistakes, according to the humans, flagging an estimated 350 publications as questionable when they were likely legitimate. That still left more than 1,000 journals that the researchers identified as questionable.
“I think this should be used as a helper to prescreen large numbers of journals,” he said. “But human professionals should do the final analysis.”
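To make the prescreening idea concrete, here is a hypothetical sketch of a threshold-based filter over website signals like those the article describes (editorial board, site quality, citation patterns). The feature names, weights, and threshold are illustrative inventions, not the study’s actual model:

```python
# Hypothetical prescreening sketch. Feature names, weights, and the
# threshold below are illustrative inventions, not the study's model.
from dataclasses import dataclass

@dataclass
class Journal:
    name: str
    has_editorial_board: bool       # established researchers listed on the site?
    grammar_errors_per_page: float  # rough proxy for website quality
    self_citation_rate: float       # fraction of citations to the journal's own authors

def risk_score(journal: Journal) -> float:
    """Higher score = more signals of a questionable journal."""
    score = 0.0
    if not journal.has_editorial_board:
        score += 0.5
    score += min(journal.grammar_errors_per_page / 20.0, 0.3)
    score += min(journal.self_citation_rate, 1.0) * 0.2
    return score

def prescreen(journals: list[Journal], threshold: float = 0.5) -> list[str]:
    """Flag journals at or above the threshold for human review."""
    return [j.name for j in journals if risk_score(j) >= threshold]
```

Lowering the threshold favours comprehensive screening (more flags, and more false positives for experts to weed out); raising it favours precise, low-noise identification – the same trade-off the researchers describe when passing AI-flagged journals to human reviewers.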
A firewall for science
Acuña added that the researchers didn’t want their system to be a “black box” like some other AI platforms.
“With ChatGPT, for example, you often don’t understand why it’s suggesting something,” Acuña said. “We tried to make ours as interpretable as possible.”
The team discovered, for example, that questionable journals published an unusually high number of articles. They also included authors with a larger number of affiliations than more legitimate journals, and authors who cited their own research, rather than the research of other scientists, to an unusually high level.
The new AI system isn’t publicly accessible, but the researchers hope to make it available to universities and publishing companies soon. Acuña sees the tool as one way that researchers can protect their fields from bad data—what he calls a “firewall for science.”
“As a computer scientist, I often give the example of when a new smartphone comes out,” he said.
“We know the phone’s software will have flaws, and we expect bug fixes to come in the future. We should probably do the same with science.”
About this AI and science research news
Author: Daniel Strain
Source: University of Colorado
Contact: Daniel Strain – University of Colorado
Original Research: Open access.
“Estimating the predictability of questionable open-access journals” by Daniel Acuña et al. Science Advances
Abstract
Estimating the predictability of questionable open-access journals
Questionable journals threaten global research integrity, yet manual vetting can be slow and inflexible.
Here, we explore the potential of artificial intelligence (AI) to systematically identify such venues by analyzing website design, content, and publication metadata.
Evaluated against extensive human-annotated datasets, our method achieves practical accuracy and uncovers previously overlooked indicators of journal legitimacy.
By adjusting the decision threshold, our method can prioritize either comprehensive screening or precise, low-noise identification.
At a balanced threshold, we flag over 1000 suspect journals, which collectively publish hundreds of thousands of articles, receive millions of citations, acknowledge funding from major agencies, and attract authors from developing countries.
Error analysis reveals challenges involving discontinued titles, book series misclassified as journals, and small society outlets with limited online presence, which are issues addressable with improved data quality.
Our findings demonstrate AI’s potential for scalable integrity checks, while also highlighting the need to pair automated triage with expert review.