
AI Insights

Hunger strikers in SF and London are calling for an end to AI. We FaceTimed two of them


Three men with deep concerns about the threat artificial intelligence could pose to human existence have embarked on hunger strikes this month in an attempt to get the attention of prominent companies working to develop super-powerful AI. 

On Sept. 1, activist Guido Reichstadter set up an A-frame sign and folding chair outside the Howard Street office building that’s home to the AI firm Anthropic, beginning a hunger strike during which he is consuming electrolytes and vitamins, but no calories. 

In an X post two days later, Reichstadter said he is “calling on Anthropic’s management, directors, and employees to immediately stop their reckless actions which are harming our society and to work to remediate the harm that has already been caused.” Reichstadter made the stakes as he saw them plain: “We are in an emergency. Let us act as if this emergency is real.”

Days later, a second activist and former AI researcher, Michaël Trazzi, set up a similar protest outside the London headquarters of Alphabet’s AI research lab, Google DeepMind, in Pancras Square.

Guido Reichstadter updates a sign with the number of days he’s been on a hunger strike in front of Anthropic headquarters. | Source: Amanda Andrade-Rhoades/The Standard

In an X post announcing his strike, Trazzi expressed support for Reichstadter’s action and argued that the stakes of AI companies’ frontier models are too high to proceed even with technical guardrails in place. Like Reichstadter, he is asking DeepMind to halt development if other companies similarly pause their efforts.

“Experts are warning us that this race to ever more powerful artificial general intelligence puts our lives and well-being at risk, as well as the lives and well-being of our loved ones,” Trazzi wrote in part.

On Friday afternoon, a Standard reporter went to Howard Street to meet with Reichstadter, who was surrounded by signs splashed with dire warnings about AI risks and was expecting to have a call with Trazzi in London. Reichstadter, however, said Trazzi had suspended his strike. The reporter soon found an X post from Trazzi, saying he had experienced a fainting episode and stopped fasting at the recommendation of two doctors. 

Another protester, Denys Sheremet, planned to continue the hunger strike he’d begun with Trazzi outside DeepMind’s offices six days prior.

“I want to thank everyone who has supported me in this journey, both in person and online,” Trazzi said Thursday on X. “My hunger strike ends now, but the movement continues, with Guido and Denys still on strike in front of Anthropic and DeepMind.”

Protestors connect across continents 

A Standard reporter spoke with Sheremet Saturday morning, when he FaceTimed with Reichstadter from London. The two discussed the strategies they have been deploying to get the attention of AI executives, and consoled one another in their struggle.

In the middle of the call, Reichstadter took the time to coach Sheremet on his nutritional regimen. Since a meal of chicken and pasta for dinner on Aug. 31, Reichstadter, 45, has consumed only water, vitamins, and electrolytes. Then, he told Sheremet the story of Sodom and Gomorrah, in which Abraham negotiates with God to save the city of Sodom so long as Abraham can find ten righteous people.

“Let’s see if there are 10 people of integrity at Anthropic,” Reichstadter said. Sheremet nodded affirmatively.

“Keep it up,” Sheremet encouraged before they hung up.

Denys Sheremet and Guido Reichstadter are hunger striking in solidarity against AI companies. | Source: Ezra Wallach/The Standard

Reichstadter said he delivered a letter to Anthropic CEO Dario Amodei through the lobby’s security guard on the first day of the protest, requesting a meeting to discuss the technology’s potential dangers. The company has not publicly responded to the protest, and Reichstadter said he plans to continue the hunger strike indefinitely until Amodei agrees to meet with him.

“This company is building technology that their CEO acknowledges puts my life at risk, my children’s lives, everyone in this society,” Reichstadter said.

Reichstadter, who previously conducted a 14-day hunger strike in April 2022 demanding Miami-Dade Mayor Daniella Levine Cava declare a climate emergency, said he chose the extreme form of protest because AI companies are “running into a minefield” by racing to develop artificial general intelligence, or AGI.

‘Wake them up to the danger’

He was joined on Friday by Phoebe Thomas Sorgen, 71, a longtime activist who said she may begin her own hunger strike to support the cause. Thomas Sorgen said she became concerned about AI risks about three months ago after meeting Reichstadter and other activists at Travis Air Force Base in Fairfield during a protest against the Trump administration’s immigration policies.

“It’s a horrible threat that we’re facing,” Thomas Sorgen said, citing concerns about job displacement, data privacy, and environmental impacts from energy-intensive data centers.

Phoebe Thomas Sorgen, left, and Guido Reichstadter talk in front of Anthropic headquarters. | Source: Amanda Andrade-Rhoades/The Standard

Both referenced statements from AI researchers, including Geoffrey Hinton, who won a Nobel Prize last year and has warned about potential extinction risks from artificial superintelligence. A statement on AI risk signed by numerous researchers, including Anthropic CEO Dario Amodei, warns that “mitigating the risk of extinction from AI should be a global priority.”

Reichstadter said his three demands to Anthropic are simple: stop endangering lives through AI development, use the company’s resources to halt the global AI race through international negotiations, and have Amodei explain why he believes he has the right to risk human lives. 

Anthropic, founded in 2021 by former OpenAI executives including Amodei, says it has emphasized AI safety research alongside its commercial products. 

“I’m here to get the word out as long as I can,” he said. “Everyone that understands the situation we’re in has some responsibility to wake them up to the danger.”






Tech from China could take the ‘stealth’ out of stealth subs using Artificial Intelligence, magnetic wake detection


Submarines were once considered the stealthiest assets of navies. Not anymore. Studies from China suggest that new technology can strip away the stealth that makes submarines such powerful war machines. These innovations for detecting underwater vessels could change the face of naval warfare. Artificial intelligence and magnetic wake detection are among the methods being used to achieve this. Here is what you should know.

China is developing submarine detection technologies using AI. How it works

The studies from China suggest that subs could be highly vulnerable to artificial intelligence (AI) and magnetic field detection technologies, as reported by the South China Morning Post.


In a study published in August, a team led by Meng Hao from the China Helicopter Research and Development Institute revealed an AI-powered anti-submarine warfare (ASW) system. The technology is being touted as the first of its kind, enabling automated decision-making in submarine detection.

As per the study published in the journal Electronics Optics & Control, the ASW system mimics a smart battlefield commander, integrating real-time data from sonar buoys, radar, underwater sensors, and ocean conditions like temperature and salinity.

Powered by AI, the system can autonomously analyse and adapt, slashing a submarine’s escape chances to just 5 per cent.

This would mean only one in 20 submarines could evade detection and attack.

This would mark a significant shift in naval warfare, with researchers warning that the era of the “invisible” submarine is ending.

Stealth may soon be an impossible feat, Meng’s team said.

China can track US submarines via ‘magnetic wakes’

In December last year, scientists from Northwestern Polytechnical University (NPU) in Xi’an revealed a novel method for tracking submarines via ‘magnetic wakes’.

The study, led by Associate Professor Wang Honglei, models how submarines generate faint magnetic fields as they disturb seawater, creating ‘Kelvin wakes’.

These wakes leave “footprints in the ocean’s magnetic fabric” long after the vessel has passed, said the study, published in the Journal of Harbin Engineering University on December 4.

For example, a Seawolf-class submarine travelling at 24 knots at a depth of 30 metres generates a magnetic field of 10⁻¹² tesla, detectable by existing airborne magnetometers.

This method exploits a critical vulnerability of submarines: Kelvin wakes ‘cannot be silenced,’ Wang’s team said.

This contrasts with acoustic, or sound-based, detection, which submarines can counter with sound-dampening technologies.

Together, the studies suggest that AI and magnetic detection could soon make submarine stealth a thing of the past.


Rethinking the AI Race | The Regulatory Review


Openness in AI models is not the same as freedom.

In 2016, Noam Chomsky, the father of modern linguistics, published the book Who Rules the World?, referring to the United States’ dominance in global affairs. Today, policymakers, such as U.S. President Donald J. Trump, argue that whoever wins the artificial intelligence (AI) race will rule the world, driven by a relentless, borderless competition for technological supremacy. One strategy gaining traction is open-source AI. But is it advisable? The short answer, I believe, is no.

Closed-source and open-source represent the two main paradigms in software, and AI software is no exception. While closed-source refers to proprietary software with restricted use, open-source software typically involves making the underlying source code publicly available, allowing unrestricted use, including the ability to modify the code and develop new applications.

AI is impacting virtually every industry, and AI startups have proliferated nonstop in recent years. OpenAI secured a multi-billion-dollar investment from Microsoft, while Anthropic has attracted significant investments from Amazon and Google. These companies are currently leading the AI race with closed-source models, a strategy aimed at maintaining proprietary control and addressing safety concerns.

But open-source models have consistently driven innovation and competition in software. Linux, one of the most successful open-source operating systems ever, is pivotal in the computer industry: Google’s Android, used in approximately 70 percent of smartphones worldwide, runs on Linux, as do Amazon Web Services, Microsoft Azure, and all of the world’s top 500 supercomputers. The success story of open-source software naturally fuels enthusiasm for open-source AI. And behind the scenes, companies such as Meta are developing open-source AI initiatives to promote the democratization and growth of AI through joint effort.

Mark Zuckerberg, in promoting an open-source model for AI, recalled the story of Linux’s open-source operating system. Linux became “the industry standard foundation for both cloud computing and the operating systems that run most mobile devices—and we all benefit from superior products because of it.”

But the story of Linux is quite different from Meta’s “open-source” AI project, Llama. First and foremost, no universally accepted definition of open-source AI exists. Second, Linux had no “Big Tech” corporation behind it. Its success was made possible by the free software movement, led by American activist and programmer Richard Stallman, who created the GNU General Public License (GPL) to ensure software freedom. The GPL allowed for the free distribution and collaborative development of essential software, most notably the Linux open source operating system, developed by Finnish programmer Linus Torvalds. Linux has become the foundation for numerous open-source operating systems, developed by a global community that has fostered a culture of openness, decentralization, and user control. Llama is not distributed under a GPL.

Under the Llama 4 licensing agreement, entities with more than 700 million monthly active users in the preceding calendar month must obtain a license from Meta, “which Meta may grant to you in its sole discretion” before using the model. Moreover, algorithms powering large AI models rely on vast amounts of data to function effectively. Meta, however, does not make its training data publicly available.

Thus, can we really call it open source?

Most importantly, AI presents fundamentally different and more complex challenges than traditional software, with the primary concern being safety. Traditional algorithms are predictable; we know the inputs and outputs. Consider the Euclidean algorithm, which provides an efficient way for computing the greatest common divisor of two integers. Conversely, AI algorithms are typically unpredictable because they leverage a large amount of data to build models, which are becoming increasingly sophisticated.
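The contrast the author draws can be made concrete in a few lines of Python (an illustrative sketch, not from the article): Euclid’s algorithm is fully deterministic, so the same inputs always trace the same steps to the same output.

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b)."""
    while b:
        a, b = b, a % b
    return abs(a)

# Predictable: for any given inputs, the steps and the result never vary.
print(gcd(252, 105))  # → 21
```

A trained neural network, by contrast, is a function whose behavior emerges from millions of fitted parameters, which is why its outputs resist this kind of step-by-step inspection.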

Deep learning algorithms, which underlie large language models such as ChatGPT and other well-known AI applications, rely on increasingly complex structures that make AI outputs virtually impossible to interpret or explain. Large language models are performing increasingly well, but would you trust something that you cannot fully interpret and understand? Open-source AI, rather than offering a solution, may be amplifying the problem. Although it is often seen as a tool to promote democratization and technological progress, open source in AI increasingly resembles a Ferrari engine with no brakes.

Like cars, computers and software are powerful technologies—but as with any technology, AI can harm if misused or deployed without a proper understanding of the risks. Currently, we do not know what AI can and cannot do. Competition is important, and open-source software has been a key driver of technological progress, providing the foundation for widely used technologies such as Android smartphones and web infrastructure. It has been, and continues to be, a key paradigm for competition, especially in a digital framework.

Is AI different because we do not know how to stop this technology if required? Free speech, free society, and free software are all appealing concepts, but let us do better than that. In the 18th century, French philosopher Baron de Montesquieu argued that “Liberty is the right to do everything the law permits.” Rather than promoting openness and competition at any cost to rule the world, liberty in AI seems to require a calibrated legal framework that balances innovation and safety.




A Map of the Impossible: MICrONS Delivers AI and Neuroscience Advances


The mysteries of the brain have captivated scientists for more than 100 years, most notably illustrated in the detailed drawings of neuroanatomist Santiago Ramón y Cajal. These drawings and his findings related to neuronal organization pushed neuroscience into the modern era.1

Since Ramón y Cajal, researchers have developed new approaches to answer questions about the types of cells in the brain and their functions. Neuroscientists understand how calcium allows these cells to send messages and the role of dopamine in the reward system. They can spy on neuron activity using patch clamp electrophysiology and can even watch as someone uses a specific region of the brain with functional magnetic resonance imaging.

However, the factors that determine how neurons connect and interact following a stimulus remain elusive. The task seemed so enormous that some scientists considered it impossible. Francis Crick said as much in a 1979 article in Scientific American, calling a wiring diagram of the brain “asking for the impossible.”2

Clay Reid, today a neuroscientist at the Allen Institute, read this article with Crick’s comment in 1982 when he was a recent college graduate in physics and mathematics from Yale University. “I wish I could say…from the moment I read it, that that was what I wanted to solve. That’s not true, but I think it probably lit a fire,” Reid said.

Eventually, this burning interest led Reid and other researchers to create the most comprehensive wiring diagram of a mammalian brain to date. Fueled by emerging interest in expanding the power of artificial intelligence (AI), the Machine Intelligence from Cortical Networks (MICrONS) program combined anatomical information and functional activity of a neuronal circuit on the scale of hundreds of thousands of cells to provide insights into the brain’s processes. This resource can help researchers begin to understand what guides neuronal interactions and how these connections influence their functions.

Exploring Neuronal Connections Through Structure and Function

Although he didn’t immediately dive into creating a map of the brain, Reid wasn’t too far removed from neural circuitry. After transferring from physics to neuroscience in graduate school, he found a research home exploring the inner workings of the visual cortex. Originally, he used electrophysiology to study the function of these neurons, but a new technique using calcium-sensitive fluorescent probes was lighting up neuroscientists’ computer screens.3,4

“I got tenure and decided it’s time to be brave,” Reid said about switching to calcium imaging to study neurons in the eye.5 “Rather than hearing a pop every time a neuron fired, we were able to see a flash every time a neuron fired.”

Santiago Ramón y Cajal illustrated some of the most detailed depictions of neurons in the brains of animals.

Santiago Ramón y Cajal/Public Domain

Around this time, Reid returned to the question about what neurons do in the living brain and how they do it. Answering this question, though, would require anatomical information about how neurons connect to each other, a field called connectomics, which Reid said was most accurately collected with electron microscopy. In 2004, physicist Winfried Denk at the Max Planck Institute applied electron microscopy to connectomics, demonstrating the ability to reconstruct three-dimensional features of tissues from serial sections—called volume electron microscopy—at micrometer scales using computer automation.6 “It was exactly the technique that we needed to answer the questions that we wanted to do,” Reid said.

Indeed, Reid and his team began combining volume electron microscopy with calcium imaging to explore neural circuitry in parallel with their functions.7 However, these studies looked at a few thousand neurons—barely a fraction of even a mouse brain. Scaling up to the size of a more comprehensive circuit, which includes hundreds of thousands of neurons, though, would require a far larger investment of time and resources. “When it got to that scale, it was a scale that required a much larger group and collaborators all over the country,” Reid said. “At that point, it’s definitely not an ‘I’, it’s a ‘we’.”

Luckily, around 2014, interest in just this type of project was coming online.

Advancing Neuroscience to Improve Artificial Intelligence

The draw to study the inner workings of the brain for neuroscientists stems from their interest in figuring out how creatures, including humans, learn and become individuals, as well as what goes wrong in neurological diseases. These incredible biological processing machines have also inspired developers of machine learning systems.

One funding agency, the Intelligence Advanced Research Projects Activity (IARPA), through the Office of the Director of National Intelligence, sought to study brain circuitry to build better machine learning algorithms that could replicate the processes of neurons.


Previous studies that explored neuronal connections and functions looked at either small scales of up to one thousand neurons or at large-scale neuronal interactions within the whole brain with functional magnetic resonance. Focusing on the middle-scale—neuronal circuits comprising tens to hundreds of thousands of neurons—could offer more insights into how neural circuits work to interpret information but would require advancements for managing the petabyte-levels of data and processing the results.

Seeking to expand the work done on parts of circuits by researchers like Reid, IARPA created the MICrONS program in 2014 to map a circuit on the millimeter scale. “It was, let’s say, an ambitious goal,” said Andreas Tolias, a neuroscientist who was at the Baylor College of Medicine at the time.

Today at Stanford University, Tolias explores the intersection of neuroscience and AI. “I want to understand intelligence,” he said. He’s also been interested in AI and, because of its similarities to the brain, the field of neuroAI, which he described as, “basically forming bridges between these two fields in a more active way and in particular, [an] experimental or data driven way. So, I found that very appealing.”


Using morphological features observed in their electron microscopy data, researchers identified connection patterns in inhibitory Martinotti cells in the mouse visual cortex. Pseudocolored synaptic outputs indicate whether the cells connect to excitatory cells in layers 2 and 3 of the cortex (red) or in layer 5 (cyan).

Clare Gamlin/Allen Institute

Previously, Tolias and his group developed new methods and approaches to study and interpret signals from neurons.8,9 They also designed new deep learning models to explore the function of the visual cortex.10,11

“In neuroscience, we’ve been data limited, and we still are in many domains. So, at the time, I was looking for opportunities where we could scale up large data collection,” Tolias said, adding that the chance to do exactly this is what attracted him to the MICrONS project. Tolias and his colleagues applied for funding through the MICrONS program to conduct functional imaging of neurons in the visual cortex and use machine learning to explore the mechanisms of this circuit.

“I always dreamed that there would be two main interactions: AI tools to help us understand the brain, and then eventually, as understanding the brain at some fundamental level should also be helpful to AI,” he said.

Automation and Algorithms Yield New Neuron Knowledge

Ultimately, IARPA awarded grants to Reid’s group, Tolias’s group, and a team led by Sebastian Seung, a neuroscientist at Princeton University, to carry out the goals of the MICrONS program. Because researchers had previously characterized the connectomics of the visual cortex, IARPA selected this circuit to focus on for the project.

The teams would use calcium imaging to collect functional data from neurons in this region while a mouse watched specific visual stimuli. Then, they planned to obtain anatomical information from this same area using volume electron microscopy. Finally, they would reconstruct the images and align these with the neuron activity data, while at the same time developing digital models from this functional data.

“It sounds like, and it is, a difficult problem to take a bunch of pictures of individual neurons in a living brain, slice them up into thousands of pieces, put it back together, and then say ‘this is neuron 711 here. Here’s neuron 711 in the electron microscopy,’” Reid said about the endeavor. Even so, he added, “It’s like matching a fingerprint with itself. Once you see the match, you know it’s correct.”

However, even before they had the data, the research pushed the teams to develop better technologies to accomplish their goals. “There’s a lot of engineering all the way from hardware to software to bringing in AI tools and scaling up the imaging so it could be done efficiently,” Tolias said.

For example, Reid’s team developed automated serial sectioning processes and pipelines for electron microscopy imaging.12,13 “The electron microscopy is this beautiful, amazing, three-dimensional data,” Reid said. “But back in the day, the way to make sense of that data was to have human beings wander through the three-dimensional data and essentially lay down breadcrumbs one by one to trace out the neurons.”

Seung, he said, pioneered several advancements in tools for outlining, reconstructing, and editing this data that overcame these analysis limitations to “do the equivalent of millions of years of human labor.”14-16 In fact, Reid said that the last four years of the project were really spearheaded by data scientists including those in Seung’s team and, at the Allen Institute, Forrest Collman and Nuno Maçarico da Costa.

Eventually, though, the teams began to realize the fruits of their labor. Reid recalled seeing some of the first reconstructed images. “It was extraordinary,” he said, adding that now, someone can explore the entire 3D structure of one of the processed neurons in the MICrONS dataset.

Beyond the wiring diagrams, the researchers also revealed new insights into the functions of the visual circuit. They identified an overarching mechanism guiding cell communication, showing that inhibitory neurons target specific neurons to block their activity, and that sometimes different types of these inhibitory neurons cooperate to target the same cells through distinct mechanisms.17 In another study, the researchers revealed that excitatory neurons’ structures exist on a continuum, that these forms related to the cells’ functions, and that the projections of some cells are geographically confined to specific regions.18

Using the functional data, Tolias’s group trained an artificial neural network to create a brain model, or digital twin, of the visual circuit.19 This model would try to replicate the neural activity from actual brain data and also solve novel problems using these same neural processes.

Unlike the Human Brain Project, in which scientists tried to recreate models of the brain architecture, Tolias’s team trained their digital twin on only the neural activity from visual stimuli. Subsequently, this model successfully predicted neuronal responses to novel stimuli and features of these cell types despite not receiving anatomical information.20 “Now, it forms a bridge, or sort of a Rosetta Stone, if you want, between artificial intelligence and real brains,” he said.
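The MICrONS digital twins are deep networks trained on large-scale recordings, but the underlying workflow of fitting a predictive model to recorded responses and then querying it on stimuli the animal never saw can be sketched with a toy linear model. Everything below (the dimensions, noise level, and ridge penalty) is invented for illustration and stands in for the real calcium-imaging data and deep architectures:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "recordings": responses of 50 neurons to 200 stimuli, each stimulus
# summarized by 20 features (stand-ins for movie frames and calcium traces).
stimuli = rng.normal(size=(200, 20))
true_weights = rng.normal(size=(20, 50))  # the unknown "real circuit"
responses = stimuli @ true_weights + 0.1 * rng.normal(size=(200, 50))

# Fit the "digital twin": ridge regression from stimulus to response.
lam = 1.0
W = np.linalg.solve(
    stimuli.T @ stimuli + lam * np.eye(20),
    stimuli.T @ responses,
)

# Query the twin in silico on novel stimuli it never saw during fitting.
novel = rng.normal(size=(10, 20))
predicted = novel @ W  # predicted activity of all 50 neurons
print(predicted.shape)  # (10, 50)
```

In the actual studies this role is played by deep networks that also generalize to anatomical properties; the point here is only the in-silico loop: fit on recorded activity, then run experiments on the model instead of the animal.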

These digital twins, Tolias said, can allow researchers to perform experiments in silico that would be difficult or even impossible in real animal brains. “What would have taken, let’s say 10,000 or 100,000 years, we can run it very fast on [graphics processing units], because now we can parallelize. Instead of having one digital twin of a mouse, we can have 100,000.”

At the time of the MICrONS dataset publication, the scientists working on the project had only finalized reconstructions of a couple thousand of the tens of thousands of neurons collected in the study. Tolias said that, because of the current need for manual proofreading, the reconstructions take time, but new advances in machine learning could continue to simplify this process.

Even so, the team was excited to show that such a lofty goal was attainable. “It’s beyond our wildest dreams, frankly, that when we started in 2006 that less than 20 years later, at least the first draft of Francis Crick’s impossible experiment was done,” Reid said. Reflecting on the experiment’s completion, he said, “It’s extreme pleasure and a bit of disbelief.”

Scaling Up Brain Science of Mice and Men

The findings also stunned neuroscientists not involved with the project. Sandra Acosta, a neurodevelopmental biologist at the University of Barcelona and Barcelonaβeta Brain Research Center, referenced the drawings of Ramón y Cajal to highlight the advancement. “The level of complexity, although it was fantastic, it was 120-year-old microscopes drawing by hand by like incredible scientists with a very big mind, but that is, at some point, very subjective,” she said, contrasting it with the systematized and objective images from MICrONS.


Using machine learning models, researchers built a digital twin that learned how to respond to stimuli, such as visual information, in the same way that biological neurons do. On the left is an image showing activated neurons from the cubic millimeter of studied brain area, and on the right is a representation of a digital twin of this information.

Tyler Sloan, Quorumetrix

“For me, the most shocking [thing] was seeing the numbers,” Acosta continued. The researchers recorded calcium imaging data from more than 75,000 neurons and imaged more than 200,000 cells in the cubic millimeter of the visual cortex that they mapped. “That’s beautiful,” she added.

Cian O’Donnell, a computational neuroscientist at Ulster University, said that a major advantage of the MICrONS project over similar previous studies is that the data were both high throughput and high fidelity. “We had some information, but nowhere near the level of resolution as the MICrONS project has delivered.”

“It’s letting us ask questions, qualitatively different questions that we couldn’t address before,” O’Donnell continued. He and his team study learning using computer modeling, and he said that the paired recordings of brain activity during visual stimulation with the anatomical connectomics data would be helpful information to answer questions he’s interested in.

Similarly, Acosta is looking forward to seeing similar research that evaluates brains from animals with neurodegenerative conditions. “It will be nice to see the extension of this neurodegeneration at a very molecular level, or a synaptic level, as it is here,” she said.


Clay Reid led a team of researchers at the Allen Institute to process one cubic millimeter of a mouse brain in the visual cortex and then image this tissue using electron microscopy.

Jenny Burns/Allen Institute

Beyond the physical data and the findings themselves, the researchers developed a variety of tools and resources to facilitate data processing and use. One tool, Neural Decomposition, expedites the editing process by fixing errors introduced from automated data processing tools.21 Another tool, Connectome Annotation Versioning Engine, allows researchers to analyze information from one part of the dataset while another is undergoing editing.22 This resource helped other researchers reconstruct one cubic millimeter of human cortex from electron microscopy data.23 Meanwhile, the reconstruction tools developed by Seung’s group aided the development of the first whole-brain wiring diagram of the fly brain.24

“So, yes, we found out some things about the visual cortical circuit, but I think the influence is far stronger than that,” Reid said.

Additionally, a subsequent project, BRAIN CONNECTS, is underway, using data and resources developed in MICrONS to scale the approach up to the whole mouse brain. “It’s so unimaginable that Francis Crick wouldn’t have said this is impossible, because it’s absurd,” Reid said. MICrONS researchers Maçarico da Costa and Collman are leading one of the BRAIN CONNECTS projects, in which they are using volume electron microscopy to map another region of the mouse brain and combining this with existing gene expression data to create a cell atlas.
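The cell-atlas idea amounts to a join: each EM-derived connection is annotated with the molecular identity of the cells involved. A hypothetical miniature (all cell names, types, and marker values below are made up for illustration):

```python
# Hypothetical miniature of combining an EM connectivity table with
# gene-expression profiles keyed by cell type. All data are invented.
connections = [  # (presynaptic cell, postsynaptic cell, synapse count)
    ("n1", "n2", 5),
    ("n1", "n3", 2),
]
cell_types = {"n1": "L2/3 pyramidal", "n2": "basket", "n3": "Martinotti"}
expression = {  # marker genes per cell type (made-up values)
    "L2/3 pyramidal": {"Slc17a7": "high"},
    "basket": {"Pvalb": "high"},
    "Martinotti": {"Sst": "high"},
}

# Annotate each connection with both anatomy and molecular identity:
atlas_rows = [
    {
        "pre": pre,
        "post": post,
        "synapses": n,
        "post_type": cell_types[post],
        "post_markers": expression[cell_types[post]],
    }
    for pre, post, n in connections
]
for row in atlas_rows:
    print(row)
```

At real scale the same join runs over millions of synapses and transcriptomically defined cell types, but the shape of the operation is unchanged.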

“It’s not just going to be like, if we scale it up 10 times, that doesn’t mean nine more of the same things. It means different brain regions being connected to each other,” O’Donnell said about expanding this area of research. Having a whole brain diagram, he said, “it’s going to change neuroscience forever.”

He added that this could eventually lead to studying the brains of multiple mice, allowing for exploration into variability between brains that could help researchers, like O’Donnell, study differences in brains with autism-like traits.

Eventually, researchers including Reid want to extend these advances into mapping the human brain at the same scale. “I want to be involved in [the whole mouse brain], but I really want to map the human brain, because it’s the human brain,” he said.

  1. Sotelo C. Viewing the brain through the master hand of Ramon y Cajal. Nat Rev Neurosci. 2003;4(1):71-77.
  2. Crick FHC. Thinking about the brain. Sci Am. 1979;241(3):219-233.
  3. Waters J, et al. Supralinear Ca2+ influx into dendritic tufts of layer 2/3 neocortical pyramidal neurons in vitro and in vivo. J Neurosci. 2003;23(24):8558-8567.
  4. Stosiek C, et al. In vivo two-photon calcium imaging of neuronal networks. Proc Natl Acad Sci USA. 2003;100(12):7319-7324.
  5. Ohki K, et al. Functional imaging with cellular resolution reveals precise micro-architecture in visual cortex. Nature. 2005;433(7026):597-603.
  6. Denk W, Horstmann H. Serial block-face scanning electron microscopy to reconstruct three-dimensional tissue nanostructure. PLoS Biol. 2004;2(11):e329.
  7. Ko H, et al. Functional specificity of local synaptic connections in neocortical networks. Nature. 2011;473(7345):87-91.
  8. Tolias AS, et al. Recording chronically from the same neurons in awake, behaving primates. J Neurophysiol. 2007;98(6):3780-3790.
  9. Berens P, et al. Reassessing optimal neural population codes with neurometric functions. Proc Natl Acad Sci USA. 2011;108(11):4423-4428.
  10. Cadena SA, et al. Deep convolutional models improve predictions of macaque V1 responses to natural images. PLoS Comput Biol. 2019;15(4):e1006897.
  11. Walker EY, et al. Inception loops discover what excites neurons most using deep predictive models. Nat Neurosci. 2019;22(12):2060-2065.
  12. Lee TJ, et al. Large-scale neuroanatomy using LASSO: Loop-based Automated Serial Sectioning Operation. PLoS ONE. 2018;13(10):e0206172.
  13. Yin W, et al. A petascale automated imaging pipeline for mapping neuronal circuits with high-throughput transmission electron microscopy. Nat Commun. 2020;11(1):4949.
  14. Turaga SC, et al. Convolutional networks can learn to generate affinity graphs for image segmentation. Neural Comput. 2010;22(2):511-538.
  15. Helmstaedter M, et al. Connectomic reconstruction of the inner plexiform layer in the mouse retina. Nature. 2013;500(7461):168-174.
  16. Berger DR, et al. VAST (Volume Annotation and Segmentation Tool): Efficient manual and semi-automatic labeling of large 3D image stacks. Front Neural Circuits. 2018;12:88.
  17. Schneider-Mizell CM, et al. Inhibitory specificity from a connectomic census of mouse visual cortex. Nature. 2025;640(8058):448-458.
  18. Weiss MA, et al. An unsupervised map of excitatory neuron dendritic morphology in the mouse visual cortex. Nat Commun. 2025;16(1):3361.
  19. Wang EY, et al. Foundation model of neural activity predicts response to new stimulus types. Nature. 2025;640(8058):470-477.
  20. Ding Z, et al. Functional connectomics reveals general wiring rule in mouse visual cortex. Nature. 2025;640(8058):459-469.
  21. Celii B, et al. NEURD offers automated proofreading and feature extraction for connectomics. Nature. 2025;640(8058):487-496.
  22. Dorkenwald S, et al. CAVE: Connectome Annotation Versioning Engine. Nat Methods. 2025;22:1112-1120.
  23. Shapson-Coe A, et al. A petavoxel fragment of human cerebral cortex reconstructed at nanoscale resolution. Science. 2024;384(6696):eadk4858.
  24. Schlegel P, et al. Whole-brain annotation and multi-connectome cell typing of Drosophila. Nature. 2024;634(8032):139-152.


