AI Research
Indian American scientist developing computers mimicking human brain
Researchers led by an Indian American scientist are taking inspiration from the human brain to develop computing architectures that can support the growing energy demands of artificial intelligence.
“There’s nothing in the world that’s as efficient as our brain — it’s evolved to maximize the storage and processing of information and minimize energy usage,” says Sambandamurthy Ganapathy, professor in the University at Buffalo Department of Physics and associate dean for research in the UB College of Arts and Sciences.
“While the brain is far too complex to actually recreate, we can mimic how it stores and processes information to create more energy-efficient computers, and thus, more energy-efficient AI.”
While an AI model is estimated to consume over 6,000 joules of energy to generate a single text response, the human brain needs just 20 joules per second to keep a person alive and thinking.
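To put those two figures on the same footing, a quick back-of-the-envelope calculation (using only the numbers quoted above) shows how long the brain could run on the energy budget of a single AI response:

```python
# Rough energy comparison using the figures quoted above (illustrative only).
AI_RESPONSE_J = 6000   # estimated energy for one AI text response, in joules
BRAIN_W = 20           # brain power draw: ~20 joules per second (i.e., 20 watts)

# How long could the brain operate on the energy of a single AI response?
seconds = AI_RESPONSE_J / BRAIN_W
print(f"One AI response = {seconds:.0f} s ({seconds / 60:.0f} min) of brain operation")
# -> One AI response = 300 s (5 min) of brain operation
```

In other words, one generated text response costs as much energy as roughly five minutes of full-brain activity.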
This brain-inspired approach is known as neuromorphic computing. Its origins date back to the 1980s, but it has taken on new relevance in recent years as computing tasks, especially those involving AI, have become more complex and energy-intensive, according to a university release.
While neuromorphic computing can relate to both brain-inspired hardware and software, Ganapathy’s team is focused on hardware.
Their research, funded by the National Science Foundation, is a blend of quantum science and engineering that involves probing the unique electrical properties of materials that can be used to build neuromorphic computer chips.
The team’s goal is to ultimately develop chips and devices that are not only more energy efficient, but also just better at completing tasks — perhaps even in a more human-like way.
“The computers of today were built for simple and repetitive tasks, but with the rise of AI, we don’t want to just solve simple problems anymore,” Ganapathy says. “We want computers to solve complex problems, like human beings do every day. Neuromorphic computing may provide the structure to allow computers to do this.”
“Neuromorphic computing simply aims to move beyond the binary framework and closer to the far more complex system given to us by nature,” says Nitin Kumar, a graduate student in Ganapathy’s lab.
One of the ways the brain is more complex — and energy efficient — than a computer is that information is stored and processed in the same place.
“It’s not as if the left side of the brain holds all the memories and the right is where all learning happens,” Ganapathy says. “It’s intertwined.”
Information storage and processing are separated in traditional computers, and thus, a lot of energy is used simply transporting data along tiny circuits between its memory unit and its processing unit. This can become even more energy-intensive when the computing architecture is supporting an AI model.
“Of course, the question then becomes how close we can place memory and processing together within a computer chip,” Ganapathy says. “This is known as in-memory computing and it’s a major advantage of neuromorphic computing.”
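One common illustration of in-memory computing is the memristor crossbar, where a matrix-vector product happens "in place": conductances stored in the array are the memory, and applying input voltages computes the output currents in a single physical step, with no data shuttled between separate memory and compute units. The article doesn't specify the team's device design, so the sketch below is a generic numerical analogy with made-up values:

```python
import numpy as np

# Toy analogy for in-memory computing: in a memristor crossbar, the stored
# conductances G are the "memory" and applying voltages V performs the
# computation I = G @ V directly in the array (Ohm's and Kirchhoff's laws),
# so storage and processing happen in the same place. Values are hypothetical.
G = np.array([[0.2, 0.5, 0.1],
              [0.7, 0.3, 0.4]])   # stored conductances (the weights/memory)
V = np.array([1.0, 0.5, 2.0])     # input voltages applied to the word lines

I = G @ V                         # output currents: the computed result
print(I)                          # -> [0.65 1.65]
```

In a conventional machine, the weights would first have to be fetched from memory into the processor; in the crossbar analogy, that transport step, and its energy cost, disappears.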
Memory and processing are intertwined in the brain thanks to an intricate system of neurons. So Ganapathy’s team is developing artificial neurons and synapses designed to mimic their biological counterparts’ electrical signaling of information.
“We essentially want to recreate those rhythmic and synchronized electrical oscillations you may see in a brain scan,” Kumar says. “To do this, we need to create our neurons and synapses out of advanced materials whose electrical conductivity can controllably be switched on and off with precision.”
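A standard software caricature of this fire-and-reset behavior is the leaky integrate-and-fire neuron: charge builds up under a constant input, leaks away slowly, and resets when a threshold is crossed, producing the kind of rhythmic electrical activity described above. The parameters below are illustrative, not measured device values:

```python
# Minimal leaky integrate-and-fire neuron (pure Python): under constant input,
# the membrane potential v charges up, leaks, and resets at threshold,
# emitting evenly spaced spikes -- a rhythmic oscillation. Illustrative only.
def lif_spike_times(i_in=1.5, v_thresh=1.0, leak=0.1, dt=0.01, t_max=5.0):
    v, t, spikes = 0.0, 0.0, []
    while t < t_max:
        v += dt * (i_in - leak * v)   # integrate the input, leak some charge
        if v >= v_thresh:             # threshold crossed: emit a spike...
            spikes.append(round(t, 2))
            v = 0.0                   # ...and reset the membrane potential
        t += dt
    return spikes

print(lif_spike_times())  # spikes at regular intervals -> rhythmic firing
```

The hardware analogue is a material whose conductivity can be switched on and off with similar precision, so the "spike" is a real electrical event rather than a software update.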
“Our next goal,” Ganapathy adds, “is to synchronize the oscillations of multiple devices to construct an oscillatory neural network capable of emulating complex brain functions such as pattern recognition, motor control and other rhythmic behaviors.”
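The synchronization goal Ganapathy describes is often modeled abstractly with coupled phase oscillators (the Kuramoto model): oscillators with slightly different natural frequencies drift independently when uncoupled, but lock into a shared rhythm once the coupling is strong enough. This is a generic textbook abstraction, not the team's actual device model, and all parameters are made up:

```python
import math
import random

# Toy Kuramoto model: n phase oscillators with slightly different natural
# frequencies synchronize as the coupling K grows. The order parameter r
# measures synchrony: r near 1 means locked, r near 0 means incoherent.
def order_parameter(thetas):
    n = len(thetas)
    re = sum(math.cos(t) for t in thetas) / n
    im = sum(math.sin(t) for t in thetas) / n
    return math.hypot(re, im)

def simulate(K, n=20, steps=4000, dt=0.01, seed=0):
    rng = random.Random(seed)
    omega = [1.0 + 0.1 * rng.gauss(0, 1) for _ in range(n)]   # natural freqs
    theta = [rng.uniform(0, 2 * math.pi) for _ in range(n)]   # random phases
    for _ in range(steps):
        coupling = [sum(math.sin(tj - ti) for tj in theta) for ti in theta]
        theta = [ti + dt * (w + K / n * c)
                 for ti, w, c in zip(theta, omega, coupling)]
    return order_parameter(theta)

print(f"uncoupled r = {simulate(K=0.0):.2f}")   # stays incoherent
print(f"coupled   r = {simulate(K=2.0):.2f}")   # locks into a shared rhythm
```

The engineering challenge is to realize that same phase-locking behavior between physical devices rather than simulated phases.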
Ganapathy stresses that neuromorphic computers mimic the brain on a purely phenomenological level. Neuromorphic computing aims to recreate the brain’s functional behaviors and benefits — not consciousness.
However, it’s possible that neuromorphic computers will solve problems less like computers and more like human beings.
Researchers think this could be especially helpful in applications like self-driving cars, where AI does well in most road situations but still underperforms humans when it comes to more complex scenarios with no easy solution.
“Neuromorphic chips may not be in your smartphone anytime soon, but I do think we will see them in highly specific applications, like self-driving cars. Perhaps even one chip to respond to the road and another to find the best possible route,” Ganapathy says.
Enterprises will strengthen networks to take on AI, survey finds
According to the survey, enterprises are running AI workloads across a mix of hosting environments:
- Private data centers: 29.5%
- Traditional public cloud: 35.4%
- GPU-as-a-service specialists: 18.5%
- Edge compute: 16.6%
“There is little variation from training to inference, but the general pattern is workloads are concentrated a bit in traditional public cloud and then hyperscalers have significant presence in private data centers,” McGillicuddy explained. “There is emerging interest around deploying AI workloads at the corporate edge and edge compute environments as well, which allows them to have workloads residing closer to edge data in the enterprise, which helps them combat latency issues and things like that. The big key takeaway here is that the typical enterprise is going to need to make sure that its data center network is ready to support AI workloads.”
AI networking challenges
The popularity of AI doesn’t remove some of the business and technical concerns that the technology brings to enterprise leaders.
According to the EMA survey, business concerns include security risk (39%), cost/budget (33%), rapid technology evolution (33%), and networking team skills gaps (29%). Respondents also indicated several concerns around both data center networking issues and WAN issues. Concerns related to data center networking included:
- Integration between AI network and legacy networks: 43%
- Bandwidth demand: 41%
- Coordinating traffic flows of synchronized AI workloads: 38%
- Latency: 36%
WAN issues respondents shared included:
- Complexity of workload distribution across sites: 42%
- Latency between workloads and data at WAN edge: 39%
- Complexity of traffic prioritization: 36%
- Network congestion: 33%
“It’s really not cheap to make your network AI ready,” McGillicuddy stated. “You might need to invest in a lot of new switches and you might need to upgrade your WAN or switch vendors. You might need to make some changes to your underlay around what kind of connectivity your AI traffic is going over.”
Enterprise leaders intend to invest in infrastructure to support their AI workloads and strategies. According to EMA, planned infrastructure investments include high-speed Ethernet (800 GbE) for 75% of respondents, hyperconverged infrastructure for 56% of those polled, and SmartNICs/DPUs for 45% of surveyed network professionals.
Amazon Web Services builds heat exchanger to cool Nvidia GPUs for AI
Amazon said Wednesday that its cloud division has developed hardware to cool down next-generation Nvidia graphics processing units that are used for artificial intelligence workloads.
Nvidia’s GPUs, which have powered the generative AI boom, require massive amounts of energy. That means companies using the processors need additional equipment to cool them down.
Amazon considered erecting data centers that could accommodate widespread liquid cooling to make the most of these power-hungry Nvidia GPUs. But that process would have taken too long, and commercially available equipment wouldn’t have worked, Dave Brown, vice president of compute and machine learning services at Amazon Web Services, said in a video posted to YouTube.
“They would take up too much data center floor space or increase water usage substantially,” Brown said. “And while some of these solutions could work for lower volumes at other providers, there simply wouldn’t be enough liquid-cooling capacity to support our scale.”
Instead, Amazon engineers conceived of the In-Row Heat Exchanger, or IRHX, which can be plugged into both existing and new data centers. More traditional air cooling had been sufficient for previous generations of Nvidia chips.
Customers can now access the AWS service as computing instances that go by the name P6e, Brown wrote in a blog post. The new systems accompany Nvidia’s design for dense computing power. Nvidia’s GB200 NVL72 packs a single rack with 72 Nvidia Blackwell GPUs that are wired together to train and run large AI models.
Computing clusters based on Nvidia’s GB200 NVL72 have previously been available through Microsoft or CoreWeave. AWS is the world’s largest supplier of cloud infrastructure.
Amazon has rolled out its own infrastructure hardware in the past. The company has custom chips for general-purpose computing and for AI, and designed its own storage servers and networking routers. In running homegrown hardware, Amazon depends less on third-party suppliers, which can benefit the company’s bottom line. In the first quarter, AWS delivered the widest operating margin since at least 2014, and the unit is responsible for most of Amazon’s net income.
Microsoft, the second largest cloud provider, has followed Amazon’s lead and made strides in chip development. In 2023, the company designed its own systems called Sidekicks to cool the Maia AI chips it developed.
Materials scientist Daniel Schwalbe-Koda wins second collaborative AI innovation award
For two years in a row, Daniel Schwalbe-Koda, an assistant professor of materials science and engineering at the UCLA Samueli School of Engineering, has received an award from the Scialog program for collaborative research into AI-supported and partially automated synthetic chemistry.
Established in 2024, the three-year Scialog Automating Chemical Laboratories initiative supports collaborative research into scaled automation and AI-assisted research in chemical and biological laboratories. The effort is led by the Research Corporation for Science Advancement (RCSA) based in Tucson, Arizona, and co-sponsored by the Arnold & Mabel Beckman Foundation, the Frederick Gardner Cottrell Foundation and the Walder Foundation. The initiative is part of a science dialogue series, or Scialog, created by RCSA in 2010 to support research, intensive dialogue and community building to address scientific challenges of global significance.
Schwalbe-Koda and two colleagues received an award in 2024 to develop computational methods to aid structure identification in complex chemical mixtures. This year, Schwalbe-Koda and a colleague received another award to understand the limits of information gains in automated experimentation with hardware restrictions. Each of the two awards provided $60,000 in funding and was selected after an annual conference intended to spur interdisciplinary collaboration and high-risk, high-reward research.
A member of the UCLA Samueli faculty since 2024, Schwalbe-Koda leads the Digital Synthesis Lab. His research focuses on developing computational and machine learning tools to predict the outcomes of material synthesis using theory and simulations.
To read more about Schwalbe-Koda’s honor, visit the UCLA Samueli website.