
AI Insights

A Map of the Impossible: MICrONS Delivers AI and Neuroscience Advances


The mysteries of the brain have captivated scientists for more than 100 years, most notably illustrated in the detailed drawings of neuroanatomist Santiago Ramón y Cajal. These drawings and his findings related to neuronal organization pushed neuroscience into the modern era.1

Since Ramón y Cajal, researchers have developed new approaches to answer questions about the types of cells in the brain and their functions. Neuroscientists understand how calcium allows these cells to send messages and the role of dopamine in the reward system. They can spy on neuron activity using patch clamp electrophysiology and can even watch as someone uses a specific region of the brain with functional magnetic resonance imaging.

However, the factors that determine how neurons connect and interact following a stimulus remain elusive. The task seemed so enormous that some scientists considered it impossible. Francis Crick said as much in a 1979 article in Scientific American, calling a wiring diagram of the brain “asking for the impossible.”2

Clay Reid, today a neuroscientist at the Allen Institute, read this article with Crick’s comment in 1982 when he was a recent college graduate in physics and mathematics from Yale University. “I wish I could say…from the moment I read it, that that was what I wanted to solve. That’s not true, but I think it probably lit a fire,” Reid said.

Eventually, this burning interest led Reid and other researchers to create the most comprehensive wiring diagram of a mammalian brain to date. Fueled by emerging interest in expanding the power of artificial intelligence (AI), the Machine Intelligence from Cortical Networks (MICrONS) program combined anatomical information and functional activity of a neuronal circuit on the scale of hundreds of thousands of cells to provide insights into the brain’s processes. This resource can help researchers begin to understand what guides neuronal interactions and how these connections influence their functions.

Exploring Neuronal Connections Through Structure and Function

Although he didn’t immediately dive into creating a map of the brain, Reid wasn’t too far removed from neural circuitry. After transferring from physics to neuroscience in graduate school, he found a research home exploring the inner workings of the visual cortex. Originally, he used electrophysiology to study the function of these neurons, but a new technique using calcium-sensitive fluorescent probes was lighting up neuroscientists’ computer screens.3,4

“I got tenure and decided it’s time to be brave,” Reid said about switching to calcium imaging to study neurons in the visual cortex.5 “Rather than hearing a pop every time a neuron fired, we were able to see a flash every time a neuron fired.”

Santiago Ramón y Cajal illustrated some of the most detailed depictions of neurons in the brains of animals.

Santiago Ramón y Cajal/Public Domain

Around this time, Reid returned to the question about what neurons do in the living brain and how they do it. Answering this question, though, would require anatomical information about how neurons connect to each other, a field called connectomics, which Reid said was most accurately collected with electron microscopy. In 2004, physicist Winfried Denk at the Max Planck Institute applied electron microscopy to connectomics, demonstrating the ability to reconstruct three-dimensional features of tissues from serial sections—called volume electron microscopy—at micrometer scales using computer automation.6 “It was exactly the technique that we needed to answer the questions that we wanted to do,” Reid said.

Indeed, Reid and his team began combining volume electron microscopy with calcium imaging to explore neural circuitry in parallel with their functions.7 However, these studies looked at a few thousand neurons—barely a fraction of even a mouse brain. Scaling up to the size of a more comprehensive circuit, which includes hundreds of thousands of neurons, though, would require a far larger investment of time and resources. “When it got to that scale, it was a scale that required a much larger group and collaborators all over the country,” Reid said. “At that point, it’s definitely not an ‘I’, it’s a ‘we’.”

Luckily, around 2014, interest in just this type of project was coming online.

Advancing Neuroscience to Improve Artificial Intelligence

The draw to study the inner workings of the brain for neuroscientists stems from their interest in figuring out how creatures, including humans, learn and become individuals, as well as what goes wrong in neurological diseases. These incredible biological processing machines have also inspired developers of machine learning systems.

One funding agency, the Intelligence Advanced Research Projects Activity (IARPA), part of the Office of the Director of National Intelligence, sought to study brain circuitry to build better machine learning algorithms that could replicate the processes of neurons.

It’s like matching a fingerprint with itself. Once you see the match, you know it’s correct.

—Clay Reid, Allen Institute

Previous studies that explored neuronal connections and functions looked either at small scales of up to one thousand neurons or at large-scale neuronal interactions across the whole brain with functional magnetic resonance imaging. Focusing on the middle scale—neuronal circuits comprising tens to hundreds of thousands of neurons—could offer more insights into how neural circuits interpret information but would require advancements for managing petabytes of data and processing the results.

Seeking to expand the work done on parts of circuits by researchers like Reid, IARPA created the MICrONS program in 2014 to map a circuit on the millimeter scale. “It was, let’s say, an ambitious goal,” said Andreas Tolias, a neuroscientist who was at the Baylor College of Medicine at the time.

Today at Stanford University, Tolias explores the intersection of neuroscience and AI. “I want to understand intelligence,” he said. He’s also been interested in AI and, because of its similarities to the brain, the field of neuroAI, which he described as, “basically forming bridges between these two fields in a more active way and in particular, [an] experimental or data driven way. So, I found that very appealing.”


Using morphological features observed in their electron microscopy data, researchers identified connection patterns in inhibitory Martinotti cells in the mouse visual cortex. Pseudocolored synaptic outputs indicate whether the cells connect to excitatory cells in layers 2 and 3 of the cortex (red) or in layer 5 (cyan).

Clare Gamlin/Allen Institute

Previously, Tolias and his group developed new methods and approaches to study and interpret signals from neurons.8,9 They also designed new deep learning models to explore the function of the visual cortex.10,11

“In neuroscience, we’ve been data limited, and we still are in many domains. So, at the time, I was looking for opportunities where we could scale up large data collection,” Tolias said, adding that the chance to do exactly this is what attracted him to the MICrONS project. Tolias and his colleagues applied for funding through the MICrONS program to conduct functional imaging of neurons in the visual cortex and use machine learning to explore the mechanisms of this circuit.

“I always dreamed that there would be two main interactions: AI tools to help us understand the brain, and then eventually, as understanding the brain at some fundamental level should also be helpful to AI,” he said.

Automation and Algorithms Yield New Neuron Knowledge

Ultimately, IARPA awarded grants to Reid’s group, Tolias’s group, and a team led by Sebastian Seung, a neuroscientist at Princeton University, to carry out the goals of the MICrONS program. Because researchers had previously characterized the connectomics of the visual cortex, IARPA selected this circuit to focus on for the project.

The teams would use calcium imaging to collect functional data from neurons in this region while a mouse watched specific visual stimuli. Then, they planned to obtain anatomical information from the same area using volume electron microscopy. Finally, they would reconstruct the images and align them with the neuron activity data, while at the same time developing digital models from the functional data.

“It sounds like, and it is, a difficult problem to take a bunch of pictures of individual neurons in a living brain, slice them up into thousands of pieces, put it back together, and then say ‘this is neuron 711 here. Here’s neuron 711 in the electron microscopy,’” Reid said about the endeavor. Even so, he added, “It’s like matching a fingerprint with itself. Once you see the match, you know it’s correct.”

However, even before they had the data, the research pushed the teams to develop better technologies to accomplish their goals. “There’s a lot of engineering all the way from hardware to software to bringing in AI tools and scaling up the imaging so it could be done efficiently,” Tolias said.

For example, Reid’s team developed automated serial sectioning processes and pipelines for electron microscopy imaging.12,13 “The electron microscopy is this beautiful, amazing, three-dimensional data,” Reid said. “But back in the day, the way to make sense of that data was to have human beings wander through the three-dimensional data and essentially lay down breadcrumbs one by one to trace out the neurons.”

Seung, he said, pioneered several advancements in tools for outlining, reconstructing, and editing this data that overcame these analysis limitations to “do the equivalent of millions of years of human labor.”14-16 In fact, Reid said that the last four years of the project were really spearheaded by data scientists including those in Seung’s team and, at the Allen Institute, Forrest Collman and Nuno Maçarico da Costa.

Eventually, though, the teams began to realize the fruits of their labor. Reid recalled seeing some of the first reconstructed images. “It was extraordinary,” he said, adding that now, someone can explore the entire 3D structure of one of the processed neurons in the MICrONS dataset.

Beyond the wiring diagrams, the researchers also revealed new insights into the functions of the visual circuit. They identified an overarching mechanism guiding cell communication, showing that inhibitory neurons target specific neurons to block their activity, and that sometimes different types of these inhibitory neurons cooperate to target the same cells through distinct mechanisms.17 In another study, the researchers revealed that excitatory neurons’ structures exist on a continuum, that these forms related to the cells’ functions, and that the projections of some cells are geographically confined to specific regions.18

Using the functional data, Tolias’s group trained an artificial neural network to create a brain model, or digital twin, of the visual circuit.19 This model would try to replicate the neural activity from actual brain data and also solve novel problems using these same neural processes.

Unlike the Human Brain Project, in which scientists tried to recreate models of the brain architecture, Tolias’s team trained their digital twin on only the neural activity from visual stimuli. Subsequently, this model successfully predicted neuronal responses to novel stimuli and features of these cell types despite not receiving anatomical information.20 “Now, it forms a bridge, or sort of a Rosetta Stone, if you want, between artificial intelligence and real brains,” he said.
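The idea can be loosely illustrated in code. This is only a toy sketch of what a digital twin does—the actual MICrONS foundation model is a deep network trained on movies of visual stimuli, not the ridge regression on synthetic data shown here. All names and numbers below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

n_stimuli, n_pixels, n_neurons = 500, 64, 20

# Ground truth, unknown to the model: each simulated neuron has a
# linear receptive field mapping pixels to firing activity.
true_rf = rng.normal(size=(n_pixels, n_neurons))

# "Recorded" data: stimuli plus noisy neural responses.
stimuli = rng.normal(size=(n_stimuli, n_pixels))
responses = stimuli @ true_rf + 0.1 * rng.normal(size=(n_stimuli, n_neurons))

# Fit the twin on neural activity alone (no anatomical information),
# here with ridge regression as a stand-in for a deep network.
lam = 1.0
W = np.linalg.solve(stimuli.T @ stimuli + lam * np.eye(n_pixels),
                    stimuli.T @ responses)

# Ask the twin to predict responses to novel stimuli it has never seen.
novel = rng.normal(size=(100, n_pixels))
predicted = novel @ W
actual = novel @ true_rf

corr = np.corrcoef(predicted.ravel(), actual.ravel())[0, 1]
print(f"prediction correlation on novel stimuli: {corr:.3f}")
```

Once a twin like this is fit, experiments can be run against `W` in silico and in parallel, which is the property Tolias describes above.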

These digital twins, Tolias said, can allow researchers to perform experiments in silico that would be difficult or even impossible in real animal brains. “What would have taken, let’s say 10,000 or 100,000 years, we can run it very fast on [graphics processing units], because now we can parallelize. Instead of having one digital twin of a mouse, we can have 100,000.”

At the time of the MICrONS dataset publication, the scientists working on the project had only finalized reconstructions of a couple thousand of the tens of thousands of neurons collected in the study. Tolias said that, because of the current need for manual proofreading, the reconstructions take time, but new advances in machine learning could continue to simplify this process.

Even so, the team was excited to show that such a lofty goal was attainable. “It’s beyond our wildest dreams, frankly, that when we started in 2006 that less than 20 years later, at least the first draft of Francis Crick’s impossible experiment was done,” Reid said. Reflecting on the experiment’s completion, he said, “It’s extreme pleasure and a bit of disbelief.”

Scaling Up Brain Science of Mice and Men

The findings also stunned neuroscientists not involved with the project. Sandra Acosta, a neurodevelopmental biologist at the University of Barcelona and Barcelonaβeta Brain Research Center, referenced the drawings of Ramón y Cajal to highlight the advancement. “The level of complexity, although it was fantastic, it was 120-year-old microscopes drawing by hand by like incredible scientists with a very big mind, but that is, at some point, very subjective,” she said, contrasting it with the systematized and objective images from MICrONS.


Using machine learning models, researchers built a digital twin that learned how to respond to stimuli, such as visual information, in the same way that biological neurons do. On the left is an image showing activated neurons from the cubic millimeter of studied brain area, and on the right is a representation of a digital twin of this information.

Tyler Sloan, Quorumetrix

“For me, the most shocking [thing] was seeing the numbers,” Acosta continued. The researchers recorded calcium imaging data from more than 75,000 neurons and imaged more than 200,000 cells in the cubic millimeter of the visual cortex that they mapped. “That’s beautiful,” she added.

Cian O’Donnell, a computational neuroscientist at Ulster University, said that a major advantage of the MICrONS project over similar previous studies is that the data were both high throughput and high fidelity. “We had some information, but nowhere near the level of resolution as the MICrONS project has delivered.”

“It’s letting us ask questions, qualitatively different questions that we couldn’t address before,” O’Donnell continued. He and his team study learning using computer modeling, and he said that the paired recordings of brain activity during visual stimulation with the anatomical connectomics data would be helpful information to answer questions he’s interested in.

Similarly, Acosta is looking forward to seeing similar research that evaluates brains from animals with neurodegenerative conditions. “It will be nice to see the extension of this neurodegeneration at a very molecular level, or a synaptic level, as it is here,” she said.

Photograph of Clay Reid (front, right) and members of his team at the Allen Institute reviewing reconstructions of neurons.

Clay Reid led a team of researchers at the Allen Institute to process one cubic millimeter of a mouse brain in the visual cortex and then image this tissue using electron microscopy.

Jenny Burns/Allen Institute

Beyond the physical data and the findings themselves, the researchers developed a variety of tools and resources to facilitate data processing and use. One tool, NEURD (NEURal Decomposition), expedites the editing process by fixing errors introduced by automated data processing tools.21 Another, the Connectome Annotation Versioning Engine (CAVE), allows researchers to analyze information from one part of the dataset while another part is undergoing editing.22 This resource helped other researchers reconstruct one cubic millimeter of human cortex from electron microscopy data.23 Meanwhile, the reconstruction tools developed by Seung’s group aided the development of the first whole-brain wiring diagram of the fly.24
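The core idea behind that kind of versioning—analysts read a frozen snapshot while proofreaders keep editing—can be sketched in a few lines. This is not CAVE’s actual API; every class and name below is an illustrative assumption:

```python
class VersionedAnnotations:
    """Toy snapshot store: each edit creates a new immutable version."""

    def __init__(self):
        self._versions = [{}]  # version 0 is the empty starting state

    def edit(self, updates):
        """Proofreading step: produce a new version with changed annotations."""
        new = dict(self._versions[-1])
        new.update(updates)
        self._versions.append(new)

    def snapshot(self, version=None):
        """Analysis step: read a fixed version, unaffected by later edits."""
        idx = len(self._versions) - 1 if version is None else version
        return dict(self._versions[idx])


store = VersionedAnnotations()
store.edit({"neuron_711": "pyramidal"})
frozen = store.snapshot()                  # analyst pins the current version
store.edit({"neuron_711": "Martinotti"})   # proofreader corrects it later
print(frozen["neuron_711"])                # pinned analysis still sees "pyramidal"
print(store.snapshot()["neuron_711"])      # latest version sees "Martinotti"
```

Pinning a version this way is what lets analyses stay reproducible while the underlying reconstruction is still being proofread.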

“So, yes, we found out some things about the visual cortical circuit, but I think the influence is far stronger than that,” Reid said.

Additionally, a subsequent project, Brain CONNECTS, is underway, using data and resources developed in the MICrONS study to scale up to the whole mouse brain. “It’s so unimaginable that Francis Crick wouldn’t have said this is impossible, because it’s absurd,” Reid said. MICrONS researchers Maçarico da Costa and Collman are leading one of the Brain CONNECTS projects, in which they are using volume electron microscopy to map another region of the mouse brain and combining this with existing gene expression data to create a cell atlas.

“It’s not just going to be like, if we scale it up 10 times, that doesn’t mean nine more of the same things. It means different brain regions being connected to each other,” O’Donnell said about expanding this area of research. Having a whole brain diagram, he said, “it’s going to change neuroscience forever.”

He added that this could eventually lead to studying the brains of multiple mice, allowing for exploration into variability between brains that could help researchers, like O’Donnell, study differences in brains with autism-like traits.

Eventually, researchers including Reid want to extend these advances into mapping the human brain at the same scale. “I want to be involved in [the whole mouse brain], but I really want to map the human brain, because it’s the human brain,” he said.

  1. Sotelo C. Viewing the brain through the master hand of Ramon y Cajal. Nat Rev Neurosci. 2003;4(1):71-77.
  2. Crick FHC. Thinking about the brain. Sci Am. 1979;241(3):219-233.
  3. Waters J, et al. Supralinear Ca2+ influx into dendritic tufts of layer 2/3 neocortical pyramidal neurons in vitro and in vivo. J Neurosci. 2003;23(24):8558-8567.
  4. Stosiek C, et al. In vivo two-photon calcium imaging of neuronal networks. Proc Natl Acad Sci USA. 2003;100(12):7319-7324.
  5. Ohki K, et al. Functional imaging with cellular resolution reveals precise micro-architecture in visual cortex. Nature. 2005;433(7026):597-603.
  6. Denk W, Horstmann H. Serial block-face scanning electron microscopy to reconstruct three-dimensional tissue nanostructure. PLoS Biol. 2004;2(11):e329.
  7. Ko H, et al. Functional specificity of local synaptic connections in neocortical networks. Nature. 2011;473(7345):87-91.
  8. Tolias AS, et al. Recording chronically from the same neurons in awake, behaving primates. J Neurophysiol. 2007;98(6):3780-3790.
  9. Berens P, et al. Reassessing optimal neural population codes with neurometric functions. Proc Natl Acad Sci USA. 2011;108(11):4423-4428.
  10. Cadena SA, et al. Deep convolutional models improve predictions of macaque V1 responses to natural images. PLoS Comput Biol. 2019;15(4):e1006897.
  11. Walker EY, et al. Inception loops discover what excites neurons most using deep predictive models. Nat Neurosci. 2019;22(12):2060-2065.
  12. Lee TJ, et al. Large-scale neuroanatomy using LASSO: Loop-based Automated Serial Sectioning Operation. PLoS ONE. 2018;13(10):e0206172.
  13. Yin W, et al. A petascale automated imaging pipeline for mapping neuronal circuits with high-throughput transmission electron microscopy. Nat Commun. 2020;11(1):4949.
  14. Turaga SC, et al. Convolutional networks can learn to generate affinity graphs for image segmentation. Neural Comput. 2010;22(2):511-538.
  15. Helmstaedter M, et al. Connectomic reconstruction of the inner plexiform layer in the mouse retina. Nature. 2013;500(7461):168-174.
  16. Berger DR, et al. VAST (Volume Annotation and Segmentation Tool): Efficient manual and semi-automatic labeling of large 3D image stacks. Front Neural Circuits. 2018;12:88.
  17. Schneider-Mizell CM, et al. Inhibitory specificity from a connectomic census of mouse visual cortex. Nature. 2025;640(8058):448-458.
  18. Weiss MA, et al. An unsupervised map of excitatory neuron dendritic morphology in the mouse visual cortex. Nat Commun. 2025;16(1):3361.
  19. Wang EY, et al. Foundation model of neural activity predicts response to new stimulus types. Nature. 2025;640(8058):470-477.
  20. Ding Z, et al. Functional connectomics reveals general wiring rule in mouse visual cortex. Nature. 2025;640(8058):459-469.
  21. Celii B, et al. NEURD offers automated proofreading and feature extraction for connectomics. Nature. 2025;640(8058):487-496.
  22. Dorkenwald S, et al. CAVE: Connectome Annotation Versioning Engine. Nat Methods. 2025;22:1112-1120.
  23. Shapson-Coe A, et al. A petavoxel fragment of human cerebral cortex reconstructed at nanoscale resolution. Science. 2024;384(6696):eadk4858.
  24. Schlegel P, et al. Whole-brain annotation and multi-connectome cell typing of Drosophila. Nature. 2024;634(8032):139-152.





AI accurately identifies questionable open-access journals by analysing websites and content, matching expert human assessment


Artificial intelligence (AI) could be a useful tool to find ‘questionable’ open-access journals, by analysing features such as website design and content, new research has found.

The researchers set out to evaluate the extent to which AI techniques could replicate the expertise of human reviewers in identifying questionable journals and determining key predictive factors. ‘Questionable’ journals were defined as journals violating the best practices outlined in the Directory of Open Access Journals (DOAJ) – an index of open access journals managed by a foundation based in Denmark – and showing indicators of low editorial standards. Legitimate journals were those that followed DOAJ best practice standards and were classed as ‘whitelisted’.

The AI model was designed to transform journal websites into machine-readable information reflecting DOAJ criteria, such as editorial board expertise and publication ethics. To train the questionable-journal classifier, the researchers compiled a list of around 12,800 whitelisted journals and 2500 unwhitelisted ones, then extracted three kinds of features to help distinguish them: website content, website design, and bibliometrics.
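As a hedged sketch of the scoring step only—the study’s real features, model, and weights are not reproduced here, and every feature name and weight below is an illustrative assumption—the classification might look something like:

```python
def extract_features(journal):
    """Map a journal record to numeric features (all names hypothetical)."""
    return {
        "has_editorial_board": 1.0 if journal.get("editorial_board") else 0.0,
        "states_ethics_policy": 1.0 if journal.get("ethics_policy") else 0.0,
        # Cap citation counts so one feature cannot dominate the score.
        "indexed_citations": min(journal.get("citations", 0) / 1000.0, 1.0),
    }


def questionable_score(journal):
    """Higher score = more indicators of low editorial standards."""
    f = extract_features(journal)
    # Illustrative hand-set weights; a real classifier would learn these
    # from the whitelisted/unwhitelisted training sets described above.
    weights = {
        "has_editorial_board": -0.5,
        "states_ethics_policy": -0.3,
        "indexed_citations": -0.2,
    }
    return 1.0 + sum(weights[k] * v for k, v in f.items())


legit = {"editorial_board": True, "ethics_policy": True, "citations": 5000}
suspect = {"editorial_board": False, "ethics_policy": False, "citations": 10}

print(questionable_score(legit), questionable_score(suspect))
```

The real pipeline additionally had to crawl the websites themselves and handle the false-positive patterns discussed below, which a fixed-weight score like this cannot.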

The model was then used to predict questionable journals from a list of just over 15,000 open-access journals housed in the open database Unpaywall. Overall, it flagged 1437 suspect journals, of which about 1092 were expected to be genuinely questionable. The researchers said these journals had published hundreds of thousands of articles, received millions of citations, acknowledged funding from major agencies, and attracted authors from developing countries.

There were around 345 false positives among those identified, which the researchers said shared a few patterns: for example, their sites were unreachable or had been formally discontinued, or they referred to a book series or conference with a title similar to that of a journal. The researchers also estimated that around 1780 problematic journals remained undetected.

Overall, they concluded that AI could accurately discern questionable journals with high agreement with expert human assessments, although they pointed out that existing AI models would need to be continuously updated to track evolving trends.

‘Future work should explore ways to incorporate real-time web crawling and community feedback into AI-driven screening tools to create a dynamic and adaptable system for monitoring research integrity,’ they said.



Should You Forget BigBear.ai and Buy 3 Artificial Intelligence (AI) Stocks Right Now?


BigBear.ai has big problems scaling its AI business.

There’s little doubt that Palantir Technologies (PLTR -0.19%) is one of the most significant stock market stories of the decade so far. The data mining company unveiled its Artificial Intelligence Platform (AIP) in 2023 and has been climbing fast ever since.

Palantir jumped 340% in 2024, making it the best-performing stock in the S&P 500, and its 118% gain so far this year puts it at a close second to Seagate Technology for 2025. An investment in Palantir of just $1,000 three years ago would have given you $21,000 today.

PLTR data by YCharts

Undoubtedly, people are looking for the next Palantir, and for many, BigBear.ai (BBAI 0.59%) is a contender. Like Palantir, BigBear.ai is a government contractor that is using artificial intelligence (AI) to develop solutions for defense and intelligence agencies.

A robot hand under a graphic for AI.

Image source: Getty Images.

But if you’re hoping BigBear.ai can match Palantir, I think you’ll be disappointed. There are three other names you should consider instead to play the AI space.

BigBear.ai isn’t another Palantir

Palantir is growing so fast because it’s reeling in contracts hand over fist. It closed $2.27 billion in total contract value sales in the second quarter, up 140% from last year. Its customer count grew 43% for the quarter. That’s why the company’s revenue growth is so steep — it’s gone from about $460 million per quarter to $1 billion a quarter in just three years.

BigBear.ai, however, had revenue of just $32.4 million in the second quarter, down 18% from a year ago. Management said the drop was because of lower volume of U.S. Army programs, but that also shines a spotlight on the company’s biggest problem. BigBear.ai’s biggest contract is with the Army, a $165 million deal to modernize and incorporate AI into its platforms. If the Army slows down its work for any reason, then BigBear.ai and its stock suffer.

So, what AI companies are a better play than BigBear.ai now?

Palantir Technologies

I completely understand wanting to get in on the next Palantir, but I also see a lot of value in investing in the original. While BigBear.ai has to create new platforms and new products for each of its clients, Palantir’s AIP is designed to work with multiple government agencies and commercial businesses.

Palantir rolls out AIP in boot camps so potential customers can try it out, and the results speak for themselves — the company closed 157 deals in the second quarter that were valued at $1 million or more. Sixty-six of those were more than $5 million in value and 42 were more than $10 million. BigBear.ai can’t do that.

International Business Machines

International Business Machines (IBM 1.15%) wins my vote in the AI space because of a bet that Big Blue made six years ago. The venerable computing company that was perhaps best known for its work in personal computing spent $34 billion in 2019 to purchase Red Hat, an open-source enterprise software company, in order to develop its hybrid cloud offerings. The hybrid cloud combines public cloud, private cloud, and on-premises infrastructure, which gives customers flexibility to keep parts of their data secure while utilizing cloud services.

IBM layers its hybrid cloud with Watsonx, its portfolio of artificial intelligence products, which includes a studio for building AI solutions, virtual agents, and code assistants powered by generative AI.

IBM saw software revenue of $7.4 billion in its second quarter, with the hybrid cloud revenue up 16% from a year ago.

“Our strategy remains focused: hybrid cloud and artificial intelligence,” CEO Arvind Krishna said on the Q2 earnings call. “This strategy is built on five reinforcing elements — client trust, flexible and open platforms, sustained innovation, deep domain expertise, and a broad ecosystem.”

Amazon

I love Amazon (AMZN 1.44%) — not because I get packages delivered to my house every week (its e-commerce division makes shopping incredibly convenient), but because of Amazon Web Services (AWS).

AWS holds first place in global market share for cloud computing, with a 30% share. Its Amazon Bedrock platform allows customers to use generative AI to build and experiment with AI-powered products. And because it operates on Amazon’s powerful cloud, users don’t need to invest in expensive graphics processing units (GPUs) or data centers of their own.

AWS was responsible for $30.87 billion in revenue and $10.16 billion in operating income. That profit margin is hugely important: Amazon’s net income for the quarter was $18.16 billion, so AWS accounts for more than half of the company’s profit despite being responsible for just 18% of the company’s revenue.

In addition, Amazon’s advertising business is growing in importance. It’s using machine learning to deliver targeted product ads, making it one of Amazon’s most profitable efforts. Advertising services revenue jumped to $15.6 billion in the second quarter, up 22% from a year ago.

E-commerce is where Amazon made its mark, but AI is where Amazon will carve its future.

The bottom line

AI is going to shape our future for years to come. While BigBear.ai is making efforts, not everyone can be a winner. Pass on BigBear.ai for now and focus on established companies that are not only proven winners, but also have a broad runway for growth.





Indigenous peoples and Artificial Intelligence: Youth perspectives on rights and a liveable future


On August 9, 2025, the world marked the International Day of the World’s Indigenous Peoples under the theme: “Indigenous Peoples and Artificial Intelligence: Defending Rights, Sustaining the Future.” It’s a powerful invitation to ask how emerging tools like AI can empower Indigenous Peoples, rather than marginalise them.

Before we answer how, we need to be clear on who we are talking about and what they face in Cameroon and across the Congo Basin.

Who are Indigenous Peoples in Cameroon?

Cameroon is home to several Indigenous Peoples and communities, including groups often called forest peoples (such as the Baka, Bagyeli, Bedzang) as well as the Mbororo pastoralists and communities commonly referred to as Kirdi. There is no single universal definition of “Indigenous Peoples,” but the UN Declaration on the Rights of Indigenous Peoples (2007) places self-determination at the centre of identification.

The realities: living on the margins

  • Land grabbing and loss of forests. Forests are the supermarket, pharmacy, culture and identity of Indigenous communities in the Congo Basin. Yet illegal and abusive logging, land acquisitions and agroforestry projects without proper consultation put their well-being at risk.
  • Chiefdoms without recognition. The lack of official recognition of Indigenous chiefdoms weakens participation in decision-making and jeopardises their future.
  • No specific national law. Cameroon still lacks a specific legal instrument on Indigenous rights. Reliance on international norms alone doesn’t reflect the local context and leaves gaps in protection.
  • Limited access to education and health. Many Indigenous children lack birth certificates, which blocks school enrolment and access to basic services.

I believe the future can be different: one where Indigenous autonomy is respected, traditional knowledge is valued, and well-being is guaranteed.

So where does AI fit in, and what can youth do?

AI isn’t a silver bullet; however, in the hands of informed, organised youth it can accelerate participatory advocacy, surface evidence, and protect community rights. 

First, AI-assisted mapping, with consent, can document traditional territories, sacred sites, and resource use, turning them into community-owned evidence for authorities and companies. 

Moreover, small AI models can preserve language and knowledge (oral histories, songs, medicinal plants, place names) under community data sovereignty, with Indigenous Peoples retaining exclusive rights. 

Meanwhile, simple chatbots or workflows offer legal triage (from birth-certificate requests to land-grievance tracking and administrative appeals). 

Likewise, crowdsourced reports plus AI enable early-warning and accountability on suspicious logging, new roads, or fires, which young monitors can visualise and escalate to community leaders, media, and allies. 

Finally, youth pre-bunk/de-bunk teams can counter misinformation with community-approved information. Above all, use of AI must follow Free, Prior and Informed Consent (FPIC), strong privacy safeguards, and real community control of data.

My commitment as a young activist

As an activist, and with a background in law, I want to keep building projects that put Indigenous Peoples at the centre of decisions. AI can help: it enables faster, structured, participatory advocacy and supports a community-owned database of solutions and traditional knowledge, with exclusive rights for Indigenous communities over any derivative products. My legal training helps me work at the intersection of Indigenous rights, AI, and forest/biodiversity protection.

A call to action

The 2025 theme is more than a slogan; it’s a call to act so that technology serves justice, not exclusion. In Cameroon, where Indigenous Peoples are still fighting for legal recognition, AI must be wielded as a tool of solidarity. With support from allies like Greenpeace Africa and the creativity of youth, a future rooted in dignity and sustainability is within reach.

MACHE NGASSING Darcise Dolorès, Climate activist


