AI Research

Japan’s New Supercomputer Poised to Transform Scientific Research and AI Worldwide

Japan has unveiled plans for a next-generation supercomputer, known as FugakuNEXT, in what is being described as a major leap forward for the country’s scientific and technological ambitions. The project, backed by an investment of more than $750 million, is led by RIKEN, Japan’s national research institute, and Fujitsu Limited, the nation’s top technology company by market share. If successful, the new machine is expected to run roughly 1,000 times faster than today’s leading systems, potentially shifting the global balance in high-performance computing.

A New Flagship for Japanese Computing

FugakuNEXT follows in the footsteps of the original Fugaku supercomputer, launched in 2020 by RIKEN and Fujitsu at the RIKEN Center for Computational Science in Kobe, Japan. Fugaku made an immediate impact, debuting at 415.5 petaFLOPS on the Linpack benchmark and later peaking at 442 petaFLOPS. It played a critical role in pandemic modeling during COVID-19 and topped the Top500 list of the world’s fastest supercomputers.

Japanese authorities, including the Ministry of Education, Culture, Sports, Science, and Technology (MEXT), had already begun planning for a successor as early as 2022. Feasibility studies, backed by a budget of approximately $3 million, have been underway since August 2022 and are scheduled to continue until March 2024.

Four research teams are assessing the technical and scientific benefits of a zetta-scale supercomputer, as the country seeks to ensure its ongoing leadership in high-performance computing infrastructure.

Aiming for a Technological Milestone

Japan’s vision for FugakuNEXT is described as “nothing short of extraordinary.” The target is a zetta-scale supercomputer, a system roughly 1,000 times faster than today’s leading machines, including the US-built Frontier supercomputer. Fugaku itself, after debuting at No. 1, most recently ranked fourth on the Top500 list, but with computing demands soaring, Japan’s new project seeks to set a new benchmark for speed and capability.
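
For a rough sense of scale: a zettaFLOPS is 10^21 floating-point operations per second, and a back-of-the-envelope comparison against publicly reported Linpack results for Fugaku and Frontier shows why “about 1,000 times faster than today’s leading systems” lands at zetta-scale. A minimal sketch (the Linpack figures are public Top500 numbers, not values from this article):

```python
# Back-of-the-envelope scale check using public Top500 Linpack figures.
ZETTA = 1e21            # 1 zettaFLOPS = 10**21 FLOPS
fugaku_rmax = 442e15    # Fugaku's Linpack peak, ~442 petaFLOPS
frontier_rmax = 1.1e18  # Frontier's Linpack result, roughly 1.1 exaFLOPS

print(f"zetta-scale vs Frontier: ~{ZETTA / frontier_rmax:,.0f}x")  # ~909x
print(f"zetta-scale vs Fugaku:   ~{ZETTA / fugaku_rmax:,.0f}x")    # ~2,262x
```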

At the core of FugakuNEXT will be the FUJITSU-MONAKA CPU and its successor, FUJITSU-MONAKA-X. These processors are being developed on a 2-nanometer process and are engineered to deliver both high performance and energy efficiency. The design features Fujitsu’s “unique microarchitecture optimized for advanced 3D packaging and ultra-low voltage circuit operation.”

This design is expected to enable seamless integration with GPUs and other accelerators, making the system adaptable for a wide range of uses—from artificial intelligence to intricate scientific simulations.

Partnership and Strategic Goals

The development contract for fugakunext was awarded by RIKEN to Fujitsu, which will be responsible for the design of the overall system, including computing nodes and CPU components. The basic design phase is scheduled to run until 27 February 2026. Vivek Mahajan, Corporate Executive Officer, Corporate Vice President, and CTO in charge of System Platform at Fujitsu Limited, commented on the project: “Fujitsu is determined to build a system that can dynamically meet customer needs, drawing on our invaluable experience from Fugaku and the cutting-edge technologies of FUJITSU-MONAKA and FUJITSU-MONAKA-X.”

The initiative is part of Japan’s broader focus on “AI for Science”—a strategy that integrates artificial intelligence with simulation technologies and real-time data to accelerate scientific discovery. According to the HPCI Steering Committee, established by MEXT, there is a rising demand for a “flexible platform” that supports large-scale computing resources, particularly as generative AI and other data-intensive technologies drive research and development.




AI Research

Radiomics-Based Artificial Intelligence and Machine Learning Approach for the Diagnosis and Prognosis of Idiopathic Pulmonary Fibrosis: A Systematic Review – Cureus


AI Research

A Real-Time Look at How AI Is Reshaping Work: Information Sciences Institute

Artificial intelligence may take over some tasks and transform others, but one thing is certain: it’s reshaping the job market. Researchers at USC’s Information Sciences Institute (ISI) analyzed LinkedIn job postings and AI-related patent filings to measure which jobs are most exposed, and where those changes are happening first. 

The project was led by ISI research assistant Eun Cheol Choi, working with students in a graduate-level USC Annenberg data science course taught by USC Viterbi Research Assistant Professor Luca Luceri. The team developed an “AI exposure” score to measure how closely each role is tied to current AI technologies. A high score suggests the job may be affected by automation, new tools, or shifts in how the work is done. 
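
The article doesn’t spell out how the exposure score is computed; one plausible, minimal sketch is to measure text similarity between each job posting and a corpus of AI patent filings. The data below is a toy illustration, not the ISI team’s dataset or method:

```python
# Hypothetical exposure score: mean text similarity between a job posting
# and a corpus of AI patent abstracts. Illustrative only; this is not the
# ISI team's actual scoring method.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

patent_abstracts = [
    "transformer-based language model for summarizing clinical documents",
    "reinforcement learning system for autonomous warehouse navigation",
]
job_postings = [
    "data scientist building and deploying machine learning models",
    "paralegal supporting attorneys with case research and filings",
]

vectorizer = TfidfVectorizer(stop_words="english")
patent_matrix = vectorizer.fit_transform(patent_abstracts)
job_matrix = vectorizer.transform(job_postings)

# A higher mean similarity suggests the role is more closely tied to current AI work.
exposure = cosine_similarity(job_matrix, patent_matrix).mean(axis=1)
for posting, score in zip(job_postings, exposure):
    print(f"{score:.3f}  {posting}")
```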

Which Industries Are Most Exposed to AI?

To understand how exposure shifted with new waves of innovation, the researchers compared patent data from before and after a major turning point. “We split the patent dataset into two parts, pre- and post-ChatGPT release, to see how job exposure scores changed in relation to fresh innovations,” Choi said. Released in late 2022, ChatGPT triggered a surge in generative AI development, investment, and patent filings.
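
In code terms, that split is straightforward; a toy sketch of the idea (titles and dates invented for illustration):

```python
import pandas as pd

# Toy patent table; the real analysis would use the team's patent dataset.
patents = pd.DataFrame({
    "title": ["neural speech recognizer", "generative image model"],
    "filing_date": pd.to_datetime(["2021-05-01", "2023-02-10"]),
})

cutoff = pd.Timestamp("2022-11-30")  # ChatGPT's public release
pre_chatgpt = patents[patents["filing_date"] < cutoff]
post_chatgpt = patents[patents["filing_date"] >= cutoff]
# Exposure scores are then recomputed against each slice and compared by industry.
```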

Jobs in wholesale trade, transportation and warehousing, information, and manufacturing topped the list in both periods. Retail also showed high exposure early on, while healthcare and social assistance rose sharply after ChatGPT, likely due to new AI tools aimed at diagnostics, medical records, and clinical decision-making.

In contrast, education and real estate consistently showed low exposure, suggesting they are, at least for now, less likely to be reshaped by current AI technologies.

AI’s Reach Depends on the Role

AI exposure doesn’t just vary by industry; it also depends on the specific type of work. Jobs like software engineer and data scientist scored highest, since they involve building or deploying AI systems. Roles in manufacturing and repair, such as maintenance technician, also showed elevated exposure due to the increased use of AI in automation and diagnostics.

At the other end of the spectrum, jobs like tax accountant, HR coordinator, and paralegal showed low exposure. They center on work that’s harder for AI to automate: nuanced reasoning, domain expertise, or dealing with people.

AI Exposure and Salary Don’t Always Move Together

The study also examined how AI exposure relates to pay. In general, jobs with higher exposure to current AI technologies were associated with higher salaries, likely reflecting the demand for new AI skills. That trend was strongest in the information sector, where software and data-related roles were both highly exposed and well compensated.

But in sectors like wholesale trade and transportation and warehousing, the opposite was true. Jobs with higher exposure in these industries tended to offer lower salaries, especially at the highest exposure levels. The researchers suggest this may signal the early effects of automation, where AI is starting to replace workers instead of augmenting them.

“In some industries, there may be synergy between workers and AI,” said Choi. “In others, it may point to competition or replacement.”
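
A minimal way to surface that synergy-versus-replacement signal is a per-industry correlation between exposure and salary. The numbers below are invented to mirror the pattern the article describes, not the study’s data:

```python
import pandas as pd

# Invented illustration: a positive correlation hints at augmentation,
# a negative one at possible displacement.
jobs = pd.DataFrame({
    "industry": ["information", "information", "information",
                 "warehousing", "warehousing", "warehousing"],
    "exposure": [0.9, 0.6, 0.3, 0.8, 0.5, 0.2],
    "salary":   [150_000, 120_000, 95_000, 38_000, 45_000, 52_000],
})

print(jobs.groupby("industry").apply(lambda g: g["exposure"].corr(g["salary"])))
# information   ~ +1.0  (higher exposure, higher pay)
# warehousing   ~ -1.0  (higher exposure, lower pay)
```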

From Class Project to Ongoing Research

The contrast between industries where AI complements workers and those where it may replace them is something the team plans to investigate further. They hope to build on their framework by distinguishing between different types of impact — automation versus augmentation — and by tracking the emergence of new job categories driven by AI. “This kind of framework is exciting,” said Choi, “because it lets us capture those signals in real time.”

Luceri emphasized the value of hands-on research in the classroom: “It’s important to give students the chance to work on relevant and impactful problems where they can apply the theoretical tools they’ve learned to real-world data and questions,” he said. The paper, Mapping Labor Market Vulnerability in the Age of AI: Evidence from Job Postings and Patent Data, was co-authored by students Qingyu Cao, Qi Guan, Shengzhu Peng, and Po-Yuan Chen, and was presented at the 2025 International AAAI Conference on Web and Social Media (ICWSM), held June 23-26 in Copenhagen, Denmark.


AI Research

Agentic AI Accelerates Shift From ‘Sick’ Care

Healthcare is a complex and fragmented sector that has long been weighed down by legacy systems and regulations.

If that sounds like a recipe for innovation, you might want to get your ears checked.

The industry’s longstanding institutional inertia when it comes to modernizing not just the business of care but the administrative workflows and processes supporting it might be beginning to thaw.

The reason? The evolution of agentic artificial intelligence, which represents the latest, autonomous iteration of the buzzy software technology.

“We are in a unique time in history,” Autonomize AI CEO Ganesh Padmanabhan said during a discussion hosted by PYMNTS CEO Karen Webster. “Until large language models specifically came about, it was impossible to distill information out of complex medical clinical documentation and contextualize it for different workflows. Now it’s possible.”

Still, Webster noted, agentic AI has become the latest talking point regardless of its real-world results in critical areas.

“It used to be generative AI, now it’s agentic AI,” she said. “But this is still an emerging technology. Why is now the time for it to be applied in healthcare, given that a lot of the industry is still trying to get its arms around basic automation?”

“Healthcare is one of those industries with a lot of knowledge work,” Padmanabhan said. “Data is often created by humans for other humans to consume, which makes automation innately harder.”

At the heart of the problem in healthcare is an industry drowning in administrative burdens. In the United States, an estimated $1.5 trillion is spent on healthcare administration annually, a cost that contributes to delayed care, clinician burnout and poor patient experience.

Targeting the ‘Business of Care’ With Agentic AI

Rather than tackling every facet of healthcare at once, Autonomize AI, which closed a $28 million funding round last month, focuses on what Padmanabhan called the “business of care.” That includes the invisible scaffolding that supports how care is delivered, such as insurance approvals, quality reporting and patient communication.

“Our focus is on building AI assistants, copilots and agents to augment the workforce,” Padmanabhan said. “There are two people often forgotten in healthcare: the providers who deliver care, and the patients who receive it. We’re putting them both back at the center.”

One example is prior authorization, a complex and manual process in which doctors seek insurer approval for medical procedures. It often involves faxes, weeks-long delays, and endless reviews by nurses and doctors, ultimately leaving patients in limbo.

“This whole process takes days, if not weeks,” Padmanabhan said. “It’s very error-prone. We aim to automate the intake, parse the information in the medical records, adjudicate that against policies, and summarize it for a clinician to make a decision in minutes.”
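
Sketched as code, that intake, parse, adjudicate, and summarize flow might look like the following. This is an illustrative outline, not Autonomize AI’s actual system; every function name, field, and policy criterion here is hypothetical:

```python
# Hypothetical staged prior-authorization flow; names, fields, and criteria
# are illustrative, not Autonomize AI's API. In practice the parse step
# would be an LLM extraction over the medical record.

def intake(submission: str) -> dict:
    """Normalize an incoming request (fax, portal upload) into structured fields."""
    return {"procedure": "MRI lumbar spine", "member_id": "12345", "records": submission}

def parse_records(request: dict) -> dict:
    """Extract clinically relevant findings from the attached records."""
    request["findings"] = ["6 weeks conservative therapy", "persistent radiculopathy"]
    return request

def adjudicate(request: dict, policy_criteria: list[str]) -> bool:
    """Check extracted findings against the payer's policy criteria."""
    return all(any(c in f for f in request["findings"]) for c in policy_criteria)

def summarize(request: dict, meets_criteria: bool) -> str:
    """Produce a short brief so a clinician can make the final call in minutes."""
    verdict = "criteria met" if meets_criteria else "flagged for clinician review"
    return f"{request['procedure']}: {verdict}"

policy = ["conservative therapy", "radiculopathy"]
req = parse_records(intake("...faxed chart pages..."))
print(summarize(req, adjudicate(req, policy)))  # MRI lumbar spine: criteria met
```

The design point mirrored from the article is that the system prepares the decision rather than making it: the clinician remains the final step.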

As Webster noted of the pain point: “After a doctor has said, ‘I want you to see XYZ doctor,’ you assume that call is going to happen. And then it doesn’t. You have to chase it down. That burden falls back on the patient.”

Building Trust in a High-Stakes Environment

For healthcare businesses, unburdening clinicians from administrative tasks isn’t just about productivity but can be about purpose, too.

“There’s a 300,000-nurse shortage in the provider spectrum,” Padmanabhan said. “Most are working at health plans doing paperwork. We need to enable a transition for them to do what they’re meant to do, which is provide care at the point of care.”

Yet automating workflows in healthcare isn’t as easy as flipping a switch.

“This is a hard problem,” Padmanabhan said. “Healthcare data isn’t fully digitized. There are gaps in knowledge.”

Autonomize AI’s own solution is to deploy “copilots” that identify which parts of a workflow can be automated, and then orchestrate seamless handoffs between AI and human workers, he said. Over time, these systems learn and improve based on real-world use.
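
One common way to implement that handoff, consistent with the pattern Padmanabhan describes but not confirmed as Autonomize AI’s design, is a confidence-gated router: automate a step only when the model clears a threshold, and otherwise queue it for a person.

```python
# Illustrative human-in-the-loop gate; the threshold and routing logic
# are assumptions, not Autonomize AI's implementation.
CONFIDENCE_THRESHOLD = 0.9

def route(step_output: str, confidence: float) -> str:
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"auto-processed: {step_output}"
    return f"queued for human review: {step_output}"

print(route("extracted diagnosis code E11.9", 0.97))
print(route("ambiguous handwriting on page 3", 0.42))
```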

Trust is the linchpin.

Webster pointed out the risks of incorrect output.

“In a clinical setting, the ramifications of wrong can be quite significant,” she said. “How do you build in those checks and balances?”

“You’ve got to build trust through product,” Padmanabhan said. “Showing evidence, provenance and allowing clinicians to go back to the source data is crucial.”

The long-term vision of agentic AI in healthcare isn’t just about optimizing current processes; it’s about redefining success.

“We don’t do healthcare in this country. We do sick care,” Padmanabhan said. “We need to shift from measuring mortality rates to tracking how many preventative interventions reduced chronic disease.”
