AI Research

New MIT Sloan research suggests that AI is more likely to complement, not replace, human workers


CAMBRIDGE, MA, March 17, 2025 — While much public discourse centers on concerns of advanced technologies substituting for and displacing human workers, new research from the MIT Sloan School of Management presents a different perspective — moving beyond simply identifying jobs at risk from AI and highlighting areas where human expertise will remain important and complementary to technological advancements.

The paper, “The EPOCH of AI: Human-Machine Complementarities at Work,” offers a framework of human-intensive capabilities and a set of metrics to evaluate tasks across all occupations and better understand the effects of AI on the labor market. Its authors are Roberto Rigobon, the Society of Sloan Fellows Professor of Management at MIT Sloan, and postdoctoral associate Isabella Loaiza.

“There tends to be a prevailing narrative that robots are coming for jobs,” said Rigobon. “We think it’s important to ask different questions — looking more at human capabilities than AI capabilities and shifting toward what technology can give us rather than what it might take away.”

The researchers studied the statistical limitations of AI tools. AI is built on universal function approximators, and such tools are known to perform poorly when data are biased or scarce, when extrapolation far from the training data is needed, and when moral dilemmas emerge. Starting from these deficiencies, the authors focused on how humans have dealt with such problems, which forms the foundation for skills that are complementary to AI.

The paper evaluates tasks across a variety of occupations using three key metrics: the EPOCH index (encompassing five groups of human capabilities), a risk-of-substitution score, and a potential-for-augmentation score. The acronym EPOCH stands for: 

  • Empathy and Emotional Intelligence
  • Presence, Networking, and Connectedness
  • Opinion, Judgment, and Ethics
  • Creativity and Imagination
  • Hope, Vision, and Leadership

Each of these categories includes uniquely human capabilities that enable people to do work in areas where machines are limited.

The metrics are used to evaluate how human-intensive a task is, and whether an occupation is likely to be automated or augmented by technology. While automation involves a direct transfer of a task from humans to machines, augmentation occurs when using a machine in a task increases worker productivity in that task or in other tasks, thus enhancing overall labor productivity. Augmentation therefore requires considering the interactions among tasks, whether in pairs, clusters, or networks. Rather than serving merely as “partial automation,” augmentation allows humans to do things they couldn’t do before. For example, the introduction of advanced microscopes has augmented humans’ ability to work at the micro and nano scales.
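To make the three metrics concrete, here is a toy sketch of an EPOCH-style task score. The five dimension names follow the acronym above, but the numeric scores and the simple averaging rule are invented for illustration; they are not the paper’s actual methodology.

```python
# Toy sketch of an EPOCH-style task score. The five dimensions follow
# the paper's acronym; the 0-1 scores and the averaging rule below are
# hypothetical, invented purely for illustration.

EPOCH_DIMS = ["empathy", "presence", "opinion", "creativity", "hope"]

def epoch_index(task_scores: dict) -> float:
    """Mean intensity across the five EPOCH dimensions (0 = none, 1 = high)."""
    return sum(task_scores[d] for d in EPOCH_DIMS) / len(EPOCH_DIMS)

# Hypothetical tasks with made-up dimension scores:
mentoring = {"empathy": 0.9, "presence": 0.8, "opinion": 0.7,
             "creativity": 0.5, "hope": 0.8}
data_entry = {"empathy": 0.1, "presence": 0.1, "opinion": 0.2,
              "creativity": 0.1, "hope": 0.0}

print(epoch_index(mentoring))   # high index -> human-intensive task
print(epoch_index(data_entry))  # low index  -> higher risk of substitution
```

Under a scheme like this, a task with a low EPOCH index would tend to carry a high risk-of-substitution score, while a high-EPOCH task connected to many other tasks would score high on potential for augmentation.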

“A lot of the research done in this area tends to look more generally at detailed work activities — using scores from there and extrapolating it down,” said Loaiza. “We focused specifically on tasks and, most importantly, the structure of tasks within a job or occupation to measure augmentation.”

The findings suggest that there are many critical human-intensive tasks — tasks that cannot be done effectively entirely by machines — and that both the number of human-intensive tasks and the frequency with which workers performed them increased between 2016 and 2024. In addition, tasks newly added in 2024 to the O*NET database — one of the largest datasets used to study labor in the United States, sponsored by the U.S. Department of Labor — show higher levels of EPOCH capabilities than both the tasks that existed before 2024 and the tasks that disappeared in 2024.

Examples of tasks with high EPOCH levels include the direct recruitment, placement, training, and evaluation of architecture or engineering project staff, as well as determining scientific or technical goals within broad outlines provided by top management and developing detailed plans to accomplish those goals. Examples of jobs that often demand high levels of EPOCH skills, such as creativity or empathy, include emergency management directors, clinical and counseling psychologists, childcare providers, public relations specialists, and film directors.

The research points to the need to invest in developing workers’ EPOCH capabilities, so that workers become complementary to — rather than replaced by — AI and new technologies.

“We deliberately don’t call these [human skills] ‘soft’ skills,” said Rigobon. “A ‘hard’ skill, like solving a math problem, is comparatively easy to teach. It is much harder to teach a person these critical human skills and capabilities—such as hope, empathy, and creativity.”

About the MIT Sloan School of Management

The MIT Sloan School of Management is where smart, independent leaders come together to solve problems, create new organizations, and improve the world. Learn more at mitsloan.mit.edu.




AI Research

Prediction: This Artificial Intelligence (AI) Stock Will Be the Next Household Name by 2031

For now, the “Magnificent Seven” and select others remain the most popular names in the AI arena.

Over the last few years, companies like Nvidia, Amazon, Alphabet, Microsoft, and Meta Platforms dominated the narrative around artificial intelligence (AI). As the conversation shifted beyond chips and into adjacent applications in data centers and software, names such as Broadcom, Taiwan Semiconductor Manufacturing, and Palantir Technologies also stepped into the spotlight.

It’s no secret that the AI trade remains heavily concentrated within a small circle of big tech giants. But savvy investors know that opportunity doesn’t end with the usual suspects.

So here’s the question: Have you heard of Nebius Group (NBIS)? If not, you’re not alone.

This sprawling data center company has flown under the radar — but its unique position in the AI ecosystem could propel it into the spotlight and make it a household name very soon.

Nebius took an unconventional route to the AI revolution

Unlike many of its louder peers, Nebius did not emerge as a flashy start-up or an established tech titan already entrenched in the AI race. Instead, the company traces its roots back to Yandex — a Russian internet conglomerate.

As geopolitical tensions from the Russia-Ukraine war escalated, Yandex moved to divest its noncore assets. From that process, Nebius was spun off, and it was listed on the Nasdaq exchange last October.

Soon after, Nebius completed a capital raise that attracted a particularly notable participant: Nvidia. The undisputed leader in AI chips not only became an investor but also established itself as a strategic ally — lending Nebius a level of credibility that few companies can claim.

At its core, Nebius can be considered a neocloud — a business specializing in building AI infrastructure by constructing data centers and renting out Nvidia’s sought-after graphics processing units (GPUs) to other businesses via the cloud. This model positions Nebius to scale up in lockstep with Nvidia, benefiting as next-generation chips like Blackwell and Rubin enter the market.

Image source: Getty Images.

Nebius is more than GPUs

While infrastructure is its core business, Nebius operates several subsidiaries and also has notable strategic investments.

Toloka is in the business of data labeling, an important component of training datasets for AI models. The company also has exposure to autonomous driving systems and robotics through Avride and maintains a software platform called TripleTen that specializes in educating developers across various AI applications.

Nebius also has an equity stake in ClickHouse, an open-source database management and analytics system.

This diversified ecosystem positions Nebius beyond chips and provides the company with exposure to a number of potentially trillion-dollar ancillary markets as AI workloads become larger and more advanced.

Is Nebius stock a buy right now?

In December 2024, Nebius’s core infrastructure segment closed the year with an annualized run rate of $90 million. Just two quarters later (by June 30), the company’s annual recurring revenue (ARR) run rate surged to $430 million. Even more compelling is that management recently raised full-year guidance to a range of $900 million to $1.1 billion from its prior outlook of $750 million to $1 billion.

On Sept. 8, however, everything changed for Nebius as news broke that the company signed a massive new deal with Microsoft. According to regulatory filings, Nebius “will provide Microsoft access to dedicated GPU infrastructure capacity” at its data center in New Jersey. The contract is worth $17.4 billion and runs through 2031.

Prior to the deal with Microsoft, Nebius boasted a market capitalization of $15.4 billion — implying a forward price-to-sales ratio of about 14 at the high end of its ARR forecast. For context, that’s about half the multiple CoreWeave commanded at its peak earlier this year following its much-hyped initial public offering.
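That multiple follows directly from the figures above; a quick back-of-the-envelope check using the article’s numbers:

```python
# Back-of-the-envelope forward price-to-sales, using the article's figures.
market_cap = 15.4e9     # market cap before the Microsoft deal ($15.4 billion)
arr_high_end = 1.1e9    # high end of raised full-year guidance ($1.1 billion)

forward_ps = market_cap / arr_high_end
print(round(forward_ps, 1))   # 14.0 -> "about 14", as stated in the article
```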

[Chart: CoreWeave (CRWV) price-to-sales ratio. Data by YCharts.]

This suggests a couple of takeaways. On one hand, Nebius’s valuation has been swept up in the broader bullish AI narrative — leaving traces of froth. On the other, the stock has remained relatively insulated from the sharp pullbacks seen in more volatile peers like CoreWeave — a dynamic that could play in its favor as it continues to fight for mindshare in an increasingly crowded and competitive market.

Looking ahead, Nebius appears positioned to benefit from secular tailwinds fueling AI infrastructure. Microsoft’s new deal emphasizes that cloud hyperscalers are showing no signs of slowing their capital expenditure, and Nebius is already steadily carving out a role as a beneficiary of that spending.

I think Nebius will be trading materially higher than it is today a decade from now, as its relationship with Microsoft matures. That makes it, in my view, a compelling buy-and-hold opportunity.

Adam Spatacco has positions in Alphabet, Amazon, Meta Platforms, Microsoft, Nvidia, and Palantir Technologies. The Motley Fool has positions in and recommends Alphabet, Amazon, Meta Platforms, Microsoft, Nvidia, Palantir Technologies, and Taiwan Semiconductor Manufacturing. The Motley Fool recommends Broadcom and Nebius Group and recommends the following options: long January 2026 $395 calls on Microsoft and short January 2026 $405 calls on Microsoft. The Motley Fool has a disclosure policy.




AI Research

Studying Falls to Save Lives

For older adults, falling is a major concern. It’s the leading cause of injury for those over 65, and the consequences can be life-threatening.

“Falling itself isn’t the issue—it’s the injury that’s really harmful for older adults,” says Syracuse University exercise science professor Yaejin Moon, who lost two of her grandparents to fall-related injuries.

The experience of losing a family member, friend or neighbor from complications after a fall is all too universal. That’s why Moon and Ph.D. student Reese Michaels G’24 are using cutting-edge research tactics—combining advanced artificial intelligence (AI) video analysis with traditional lab research—to learn how people fall and how to prevent serious injury.

Analyzing Falls With AI and Custom Code

AI-powered tools like OpenPose and WHAM are replacing traditional motion-tracking markers, allowing researchers to study movement more easily in real-world settings.

Traditionally, studying human movement meant attaching motion-tracking markers to the body—a technique common in gaming, film and movement science. Today, however, advances in AI make it possible to analyze movement directly from standard video footage.

“If we take a video—even from an iPhone—and input it into the system, the AI can automatically detect key body points and track motion. We don’t need markers anymore,” explains Moon, referring to AI-based pose estimation algorithms such as OpenPose.

Working with researchers in Canada, Moon and Michaels have access to over 1,700 real-life fall videos from surveillance footage in long-term care facilities and hospitals. Using OpenPose and Michaels’ custom code, the research pair track body position and extract biomechanical data to identify which types of falls result in injury and evaluate which movements protect against harm.

“It’s like having access to a black box for accidents,” Moon says. “We can analyze exactly what happened.”

Although Michaels had no prior coding experience, he took a graduate-level Python course through Syracuse’s School of Information Studies. “It was trial by fire, but I was able to write code for one of our projects, and I realized I could apply those skills in a meaningful way to research,” says Michaels, who started working with Moon in the Falk College of Sport as an exercise science master’s student two years ago.

“He can calculate things like velocity of the fall, acceleration and knee angle at the moment of impact—very specific biomechanical outcomes—all generated through his own programming,” Moon says.
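The kinds of quantities Moon describes can be derived from pose-estimation output with a few lines of numerical code. The sketch below assumes 2D keypoint trajectories at a known frame rate; the 30 fps rate, the function shapes, and the example coordinates are illustrative assumptions, not the team’s actual code.

```python
import numpy as np

FPS = 30.0  # assumed video frame rate

def velocity(positions, fps=FPS):
    """Frame-to-frame velocity from an (n_frames, 2) array of coordinates."""
    return np.diff(positions, axis=0) * fps

def acceleration(positions, fps=FPS):
    """Second difference of position, scaled to units per second squared."""
    return np.diff(positions, n=2, axis=0) * fps ** 2

def knee_angle(hip, knee, ankle):
    """Angle at the knee (degrees) from three joint positions (2D or 3D)."""
    thigh = np.asarray(hip, float) - np.asarray(knee, float)
    shank = np.asarray(ankle, float) - np.asarray(knee, float)
    cos_ang = np.dot(thigh, shank) / (np.linalg.norm(thigh) * np.linalg.norm(shank))
    return float(np.degrees(np.arccos(np.clip(cos_ang, -1.0, 1.0))))

# A fully straight leg gives an angle of ~180 degrees:
print(knee_angle([0, 0], [0, 1], [0, 2]))
```

Because the angle computation works on vectors of any dimension, the same function applies unchanged to 3D keypoints from the newer models Michaels mentions below.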

As the AI models continue to improve, the team’s research also advances. “These newer AI models can track movement in three dimensions rather than two,” Michaels explains. “That gives us much more insight into things like joint angles during a fall, which opens the door to more realistic and accurate analysis.”

“The goal is to implement this kind of technology in long-term care settings to get real-time insights into how people move and how injuries happen,” Michaels says.


Falk College professor Yaejin Moon (left) uses a special treadmill to simulate sudden loss of balance, while motion-capture cameras track how participants respond.

In the lab, the AI models are validated using a specialized treadmill that safely simulates balance loss. The treadmill can move forward, backward and side to side while participants wear a safety harness and adjust to the sudden changes in movement. Motion-capture cameras record every step and reaction.

Falls happen in three phases: the initial phase (standing or walking normally), the loss-of-balance phase (when the fall begins) and the impact phase (when the body hits the ground).


New AI models allow researchers to track movement in 3D, greatly improving the accuracy and realism of fall analysis.

“The perturbation treadmill is used to study that second phase—the moment when balance is lost,” Moon says. “We analyze how people react to losing balance and how they try to recover.”

The research also explores dual-task conditions—how cognitive load impacts the ability to recover balance. Participants are asked to perform mental tasks, such as listing animals or counting backward from 100 by sevens, while walking. This adds a layer of realism, simulating situations where older adults might be distracted by thinking, talking or multitasking while moving.

“Do we recover balance faster when we’re focused solely on walking? Or is our response slower or different when our attention is divided?” Moon inquires.

Research in the Real World


Ph.D. student Reese Michaels G’24 is the lead author of two studies—one published in Scientific Reports and another currently under review in the Journal of Biomechanics.

So, how will this ongoing research impact people’s everyday lives? Moon breaks it down into three key components: “First is understanding the mechanisms—how the body and mind work together during a fall. Second is developing intervention programs. And third is improving technology.”

Michaels, who is now in his second year of the exercise science Ph.D. program, is especially focused on improving technology.


A third-degree black belt in Taekwondo, Moon began her research by teaching older adults how to fall safely using martial arts. Now, she and Michaels are using AI tools to better understand falls and develop new ways to prevent serious injuries.

“One of our next steps is feeding outputs from pose estimation models into a machine learning algorithm that could predict impact force—how hard someone hit the ground,” explains Michaels. “That would give us a direct measure of whether a fracture or injury occurred.”

The pair is also working to make their video analysis methods more generalizable. With ongoing AI advancements and more real-world video data, the team hopes to analyze situations that can’t be replicated in a lab, such as falls down a set of stairs, and to address different age and health groups.

By combining AI, biomechanics and real-world data, this research is not only advancing the study of falls but also laying the foundation for innovative solutions to prevent injuries in aging populations. As technology continues to evolve, their work promises to lead to more precise strategies that could significantly reduce the risks older adults face, ultimately improving their quality of life and safety.




AI Research

NCCN Policy Summit Explores Whether Artificial Intelligence Can Transform Cancer Care Safely and Fairly

WASHINGTON, D.C. [September 9, 2025] — Today, the National Comprehensive Cancer Network® (NCCN®)—an alliance of leading cancer centers devoted to patient care, research, and education—hosted a Policy Summit exploring where artificial intelligence (AI) currently stands as a tool for improving cancer care, and where it may be going in the future. Subject matter experts, including patients and advocates, clinicians, and policymakers, weighed in on where they saw emerging success and also reasons for concern.

Travis Osterman, DO, MS, FAMIA, FASCO, Director of Cancer Clinical Informatics, Vanderbilt-Ingram Cancer Center—a member of the NCCN Digital Oncology Forum—delivered a keynote address, stating: “Because of AI, we are at an inflection point in how technology supports the delivery of care to our patients with cancer. Thoughtful regulation can help us integrate these tools into everyday practice in ways that improve care delivery and support oncology practices. The decisions we make now will determine how AI innovations serve our patients and impact clinicians for years to come.”

Many speakers took a cautiously optimistic tone on AI, rooted in pragmatism.

“AI isn’t the future of cancer care… it’s already here, helping detect disease earlier, guide personalized treatment, and reduce clinical burdens,” said William Walders, Executive Vice President, Chief Digital and Information Officer, The Joint Commission. “To fully realize the promise of AI in oncology, we must implement thoughtful guardrails that not only build trust but actively safeguard patient safety and uphold the highest standards of care. At Joint Commission, our mission is to shape policy and guidance that ensures AI complements, never compromises, the human touch. These guardrails are essential to prevent unintended consequences and to ensure equitable, high-quality outcomes for all.”

Panelists noted the speed at which AI models are evolving. Some compared its potential to previous advances in care, such as the leap from paper to electronic medical records. Many expressed excitement over the possibilities it represents for improving efficiency and helping to support an overburdened oncology workforce and accelerate the pursuit of new cures.

“Artificial intelligence is transforming every industry, and oncology is no exception,” stated Jorge Reis-Filho, MD, PhD, FRCPath, Chief AI and Data Scientist, Oncology R&D, AstraZeneca. “With the advent of multimodal foundation models and agentic AI, there are unique opportunities to propel clinical development, empowering researchers and clinicians with the ability to generate a more holistic understanding of disease biology and develop the next generation of biomarkers to guide decision making.”

“AI has enormous potential to optimize cancer outcomes by making clinical trials accessible to patients regardless of their location and by simplifying complex trial processes for patients and research teams alike. I am looking forward to new approaches for safe evaluation and implementation so that we can effectively and responsibly use AI to gain maximum insight from every piece of patient data and drive progress,” commented Danielle Bitterman, MD, Clinical Lead for Data Science/AI, Mass General Brigham.

She continued: “As AI becomes integrated into clinical practice, stronger collaborations between oncologists and computer scientists will catalyze advances and will be key to directly addressing the most urgent challenges in cancer care.”

Regina Barzilay, PhD, School of Engineering Distinguished Professor for AI and Health, MIT, expressed her concern that adoption may not be moving quickly enough: “AI-driven diagnostics and treatment has potential to transform cancer outcomes. Unfortunately, today, these tools are not utilized enough in patient care. Guidelines could play a critical role in changing this status quo.”

She described specific AI technologies that she believes are ready to be implemented into patient care and urged that guidelines keep pace with the rapidly progressing technology.

Some of the panel participants raised issues about the potential challenges from AI adoption, including:

  • How to implement quality control, accreditation, and fact-checking in a way that is fair and not burdensome
  • How to determine appropriate governmental oversight
  • How medical and technology organizations can work together to best leverage the expertise of both
  • How to integrate functionality across various platforms
  • How to avoid increasing disparities and technology gaps
  • How to account for human error and bias while maintaining the human touch

“Many similar problems have been solved in different application environments,” concluded Allen Rush, PhD, MS, Co-Founder and Board Chairman, Jacqueline Rush Lynch Syndrome Cancer Foundation. “This will take teaming up with non-medical industry experts to find the best tools, fine-tune them, and apply ongoing learning. We need to ask the right questions and match them with the right AI platforms to unlock new possibilities for cancer detection and treatment.”

The topic of AI and cancer care was also featured in a plenary session during the NCCN 2025 Annual Conference. Visit NCCN.org/conference to view that session and others via the NCCN Continuing Education Portal.

Next up, on Tuesday, December 9, 2025, NCCN is hosting a Patient Advocacy Summit on addressing the unique cancer care needs of veterans and first responders. Visit NCCN.org/summits to learn more and register.

# # #

About the National Comprehensive Cancer Network

The National Comprehensive Cancer Network® (NCCN®) is marking 30 years as a not-for-profit alliance of leading cancer centers devoted to patient care, research, and education. NCCN is dedicated to defining and advancing quality, effective, equitable, and accessible cancer care and prevention so all people can live better lives. The NCCN Clinical Practice Guidelines in Oncology (NCCN Guidelines®) provide transparent, evidence-based, expert consensus-driven recommendations for cancer treatment, prevention, and supportive services; they are the recognized standard for clinical direction and policy in cancer management and the most thorough and frequently-updated clinical practice guidelines available in any area of medicine. The NCCN Guidelines for Patients® provide expert cancer treatment information to inform and empower patients and caregivers, through support from the NCCN Foundation®. NCCN also advances continuing education, global initiatives, policy, and research collaboration and publication in oncology. Visit NCCN.org for more information.




