
AI Research

Microsoft CEO Satya Nadella Hails Research Team Which Built Analog Optical Computer Using Smartphone Parts To Tackle AI Workloads – Microsoft (NASDAQ:MSFT)

On Wednesday, Microsoft Corp. (NASDAQ: MSFT) CEO Satya Nadella praised a breakthrough by the company’s research division, highlighting a prototype analog optical computer built with smartphone components that could reshape how industries solve complex problems.

Nadella Calls It A Breakthrough

Nadella took to X, formerly Twitter, saying Microsoft’s “breakthrough work on an analog optical computer points to new ways to solve complex real-world problems with much greater efficiency.”

He added that he was “super” excited to see the research published in the scientific journal Nature.


Built With Everyday Tech, Designed For Speed And Efficiency

According to a Microsoft blog, the research team spent four years developing the analog optical computer, or AOC, using readily available parts such as micro-LED lights, smartphone camera sensors and optical lenses.

Unlike traditional digital computers, which process binary data, the AOC uses light as a medium for computation.

Microsoft said the prototype could be up to 100 times faster and 100 times more energy efficient at solving certain optimization problems compared to today’s digital systems, with the potential to run AI workloads at a fraction of the energy used by GPUs.
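Microsoft is open-sourcing its actual solver algorithm alongside a digital twin of the device. Purely as an illustration of the kind of computation involved (not Microsoft’s algorithm), an analog optimization solver can be thought of as a physical system relaxing toward a low-energy state. The toy sketch below simulates that digitally for a small QUBO-style problem, with a damped gradient flow and values clamped to the device’s physical limits:

```python
import numpy as np

# Toy illustration (not Microsoft's published solver): an analog optimizer
# can be simulated as a continuous relaxation of a QUBO problem,
#   minimize  x^T Q x   for x in {0, 1}^n,
# iterated as a damped gradient flow with the state clamped to [0, 1],
# mimicking the saturation limits of a physical (optical) system.

def simulated_analog_qubo(Q, steps=2000, lr=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n = Q.shape[0]
    x = rng.uniform(0.2, 0.8, size=n)         # analog state (e.g. light intensities)
    for _ in range(steps):
        grad = (Q + Q.T) @ x                  # gradient of x^T Q x
        x = np.clip(x - lr * grad, 0.0, 1.0)  # clamp to physical range
    return np.round(x).astype(int)            # read out a binary solution

# Small example: Q rewards picking variable 0 and penalizes picking both.
Q = np.array([[-2.0, 3.0],
              [0.0, -1.0]])
print(simulated_analog_qubo(Q))  # -> [1 0], the minimum-energy assignment
```

The appeal of the analog approach is that the physics performs this relaxation in parallel at the speed of light, rather than one arithmetic step at a time.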

Tackling Finance, Healthcare And AI

The research team tested the AOC on optimization problems in both banking and healthcare.

In collaboration with Barclays, the system helped simulate complex transaction settlements involving thousands of parties and tens of thousands of trades.

In healthcare, researchers used the AOC’s digital twin to reconstruct MRI scans, suggesting future versions of the device could cut scanning times from 30 minutes to five.

The team also mapped early machine learning tasks onto the system, showing potential for the AOC to one day run large language models with lower costs and energy consumption.

Francesca Parmigiani, Microsoft principal research manager leading the project, said that the device is not yet a general-purpose computer but could solve a wide range of practical problems.

Microsoft also said that it is sharing its optimization solver algorithm and a digital twin of the AOC so other organizations can test the system virtually and propose new applications.

Price Action: Over the past month, Microsoft’s shares have declined by 5.65%, yet they are still up 20.73% year-to-date, according to Benzinga Pro.

Benzinga’s Edge Stock Rankings suggest that while MSFT is experiencing some short-term weakness, it continues to maintain a solid upward trend in the medium and long term. Additional performance insights can be found here.


Disclaimer: This content was partially produced with the help of AI tools and was reviewed and published by Benzinga editors.

Photo courtesy: Shutterstock





100x Faster Brain-Inspired AI Model

In the rapidly evolving field of artificial intelligence, a new contender has emerged from China’s research labs, promising to reshape how we think about energy-efficient computing. The SpikingBrain-7B model, developed by the Brain-Inspired Computing Lab (BICLab) at the Chinese Academy of Sciences, represents a bold departure from traditional large language models. Drawing inspiration from the human brain’s neural firing patterns, this system employs spiking neural networks to achieve remarkable efficiency gains. Unlike conventional transformers that guzzle power, SpikingBrain-7B mimics biological neurons, firing only when necessary, which slashes energy consumption dramatically.
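SpikingBrain’s precise neuron model is specified in the team’s technical report; as a generic illustration of the “fire only when necessary” principle, a leaky integrate-and-fire neuron works as follows: input current accumulates in a membrane potential that slowly leaks away, and the neuron emits a spike (and spends energy) only when that potential crosses a threshold.

```python
# Generic leaky integrate-and-fire (LIF) neuron -- an illustration of the
# event-driven principle behind spiking networks, not SpikingBrain's
# actual neuron model.

def lif_neuron(inputs, leak=0.9, threshold=1.0):
    """Emit a spike (1) only when the accumulated potential crosses threshold."""
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = leak * potential + current  # integrate input, with leak
        if potential >= threshold:
            spikes.append(1)                    # fire...
            potential = 0.0                     # ...and reset
        else:
            spikes.append(0)                    # stay silent: no energy spent
    return spikes

# Weak input rarely fires; stronger input fires more often.
print(lif_neuron([0.3, 0.3, 0.3, 0.3, 0.3]))  # [0, 0, 0, 1, 0]
print(lif_neuron([0.8, 0.8, 0.8, 0.8, 0.8]))  # [0, 1, 0, 1, 0]
```

Because most neurons stay silent at any given moment, a spiking network’s energy cost scales with activity rather than with model size, which is the source of the efficiency claims.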

At its core, the model integrates hybrid-linear attention mechanisms and conversion-based training techniques, allowing it to run on domestic MetaX chips without relying on NVIDIA hardware. This innovation addresses a critical bottleneck in AI deployment: the high energy demands of training and inference. According to a technical report published on arXiv, the SpikingBrain series, including the 7B and 76B variants, demonstrates over 100 times faster first-token generation at long sequence lengths, making it ideal for edge devices in industrial control and mobile applications.
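The long-sequence speedup comes largely from the linear-attention component. As a simplified sketch (SpikingBrain hybridizes this with other mechanisms): standard softmax attention compares each new token against all N previous ones, for O(N²) work overall, whereas linear attention maintains a small running summary so each step costs the same regardless of sequence length.

```python
import numpy as np

# Simplified causal linear attention: instead of attending over all past
# tokens at each step (O(N^2) total), keep a running d x d_v summary so
# each step is constant-cost in sequence length (O(N) total). This is a
# generic sketch of the idea, not SpikingBrain's exact formulation.

def phi(x):
    # Positive feature map (elu + 1), a common choice for linear attention.
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention(Q, K, V):
    N, d = Q.shape
    S = np.zeros((d, V.shape[1]))   # running sum of phi(k_i) v_i^T
    z = np.zeros(d)                 # running sum of phi(k_i), for normalization
    out = np.empty_like(V)
    for t in range(N):              # causal: only past tokens enter the state
        S += np.outer(phi(K[t]), V[t])
        z += phi(K[t])
        out[t] = phi(Q[t]) @ S / (phi(Q[t]) @ z)
    return out

# Example: one left-to-right pass over a 6-token sequence.
rng = np.random.default_rng(0)
Q = rng.normal(size=(6, 4)); K = rng.normal(size=(6, 4)); V = rng.normal(size=(6, 3))
print(linear_attention(Q, K, V).shape)  # (6, 3)
```

The same recurrent state also explains the fast first-token generation: at inference time the model carries the summary forward instead of re-reading the whole prompt.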

Breaking Away from Transformer Dominance

The genesis of SpikingBrain-7B can be traced to BICLab’s GitHub repository, where the open-source code reveals a sophisticated architecture blending spiking neurons with large-scale model training. Researchers at the lab, led by figures like Guoqi Li and Bo Xu, have optimized for non-NVIDIA clusters, overcoming challenges in parallel training and communication overhead. This approach not only enhances stability but also paves the way for neuromorphic hardware that prioritizes energy optimization over raw compute power.

Recent coverage in Xinhua News highlights how SpikingBrain-1.0, the foundational system, breaks from mainstream models like ChatGPT by using spiking networks instead of dense computations. This brain-inspired paradigm allows the model to train on just a fraction of the data typically required—reports suggest as little as 2%—while matching or exceeding transformer performance in benchmarks.

Efficiency Gains and Real-World Applications

Delving deeper, the model’s spiking mechanism enables asynchronous processing, akin to how the brain handles information dynamically. This is detailed in the arXiv report, which outlines a roadmap for next-generation hardware that could integrate seamlessly into sectors like healthcare and transportation. For instance, in robotics, SpikingBrain’s low-power profile supports real-time decision-making without the need for massive data centers.

Posts on X (formerly Twitter) from AI enthusiasts, such as those praising its 100x speedups, reflect growing excitement. Users have noted how the model’s hierarchical processing mirrors neuroscience findings, with emergent brain-like patterns in its structure. This sentiment aligns with broader neuromorphic computing trends, as seen in a Nature Communications Engineering article on advances in robotic vision, where spiking networks enable efficient AI in constrained environments.

Challenges and Future Prospects

Despite its promise, deploying SpikingBrain-7B isn’t without hurdles. The arXiv paper candidly discusses adaptations needed for CUDA and Triton operators in hybrid attention setups, underscoring the technical feats involved. Moreover, training on MetaX clusters required custom optimizations to handle long-sequence topologies, a feat that positions China at the forefront of independent AI innovation amid global chip restrictions.

In industry circles, this development is seen as a catalyst for shifting AI paradigms. A NotebookCheck report emphasizes its potential for up to 100x performance boosts over conventional systems, fueling discussions on sustainable AI. As neuromorphic computing gains traction, SpikingBrain-7B could inspire a wave of brain-mimicking models, reducing the environmental footprint of AI while expanding its reach to everyday devices.

Implications for Global AI Research

Beyond technical specs, the open-sourcing of SpikingBrain-7B via GitHub invites global collaboration, with the repository already garnering attention for its spike-driven transformer implementations. This mirrors earlier BICLab projects like Spike-Driven-Transformer-V2, building a continuum of research toward energy-efficient intelligence.

Looking ahead, experts anticipate integrations with emerging hardware, as outlined in PMC’s coverage of spike-based dynamic computing. With SpikingBrain’s bilingual capabilities and industry validations, it stands as a testament to how bio-inspired designs can democratize AI, challenging Western dominance and fostering a more inclusive technological future.





Exclusive | Cyberport may use Chinese GPUs at Hong Kong supercomputing hub to cut reliance on Nvidia

Cyberport may add some graphics processing units (GPUs) made in China to its Artificial Intelligence Supercomputing Centre in Hong Kong, as the government-run incubator seeks to reduce its reliance on Nvidia chips amid worsening China-US relations, its chief executive said.

Cyberport has bought four GPUs made by four different mainland Chinese chipmakers and has been testing them at its AI lab to gauge which ones to adopt in the expanding facilities, Rocky Cheng Chung-ngam said in an interview with the Post on Friday. The park has been weighing the use of Chinese GPUs since it first began installing Nvidia chips last year, he said.

“At that time, China-US relations were already quite strained, so relying solely on [Nvidia] was no longer an option,” Cheng said. “That is why we felt that for any new procurement, we should in any case include some from the mainland.”

Cyberport’s AI supercomputing centre, established in December with a first phase offering 1,300 petaflops of computing power, will deliver another 1,700 petaflops by the end of this year, with the full 3,000 petaflops running on Nvidia’s H800 chips, he added.

Cyberport CEO Rocky Cheng Chung-ngam on September 12, 2025. Photo: Jonathan Wong

As all four Chinese solutions offer similar performance, Cyberport would take cost into account when deciding which ones to order, said Cheng, who declined to name the suppliers.





Why do AI chatbots use so much energy?

In recent years, ChatGPT has exploded in popularity, with nearly 200 million users submitting a combined total of more than a billion prompts to the app every day. Those prompts may appear to be answered out of thin air.

But behind the scenes, artificial intelligence (AI) chatbots are using a massive amount of energy. In 2023, data centers, which are used to train and process AI, were responsible for 4.4% of electricity use in the United States. Across the world, these centers make up around 1.5% of global energy consumption. These numbers are expected to skyrocket, at least doubling by 2030 as the demand for AI grows.


