
Study Links Human and AI Learning Strategies



Summary 

Brown University researchers found that humans and AI integrate two types of learning (fast, flexible learning and slower, incremental learning) in surprisingly similar ways. The study revealed trade-offs between memory retention and flexibility, offering insights into human cognition and guiding the development of more intuitive, trustworthy AI systems.

Key Takeaways

  • Humans and AI combine fast “in-context” and slower incremental learning.
  • Trade-offs exist between flexibility and long-term memory retention.
  • Findings may improve human–AI collaboration and future AI tools.

New research found similarities in how humans and artificial intelligence integrate two types of learning, offering new insights about how people learn as well as how to develop more intuitive AI tools.

Led by Jake Russin, a postdoctoral research associate in computer science at Brown University, the study trained an AI system and found that flexible and incremental learning modes interact in much the same way as working memory and long-term memory do in humans.

“These results help explain why a human looks like a rule-based learner in some circumstances and an incremental learner in others,” Russin said. “They also suggest something about what the newest AI systems have in common with the human brain.”

Russin holds a joint appointment in the laboratories of Michael Frank, a professor of cognitive and psychological sciences and director of the Center for Computational Brain Science at Brown’s Carney Institute for Brain Science, and Ellie Pavlick, an associate professor of computer science who leads the AI Research Institute on Interaction for AI Assistants at Brown. The study was published in the Proceedings of the National Academy of Sciences.

 Depending on the task, humans acquire new information in one of two ways. For some tasks, such as learning the rules of tic-tac-toe, “in-context” learning allows people to figure out the rules quickly after a few examples. In other instances, incremental learning builds on information to improve understanding over time — such as the slow, sustained practice involved in learning to play a song on the piano.

 While researchers knew that humans and AI integrate both forms of learning, it wasn’t clear how the two learning types work together. Over the course of the research team’s ongoing collaboration, Russin — whose work bridges machine learning and computational neuroscience — developed a theory that the dynamic might be similar to the interplay of human working memory and long-term memory.

 To test this theory, Russin used “meta-learning”— a type of training that helps AI systems learn about the act of learning itself — to tease out key properties of the two learning types. The experiments revealed that the AI system’s ability to perform in-context learning emerged after it meta-learned through multiple examples. 

One experiment, adapted from an experiment in humans, tested for in-context learning by challenging the AI to recombine familiar ideas to deal with new situations: if taught a list of colors and a list of animals, could the AI correctly identify a combination of color and animal (e.g., a green giraffe) it had not seen together previously? After the AI meta-learned on 12,000 similar tasks, it gained the ability to successfully identify new combinations of colors and animals.
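The article does not reproduce the study's stimuli, model, or training code, but the logic of this evaluation is easy to sketch. The Python snippet below is purely illustrative (the color and animal lists, the held-out pairs, and the episode sizes are assumptions, not the paper's): it builds meta-learning episodes only from "seen" color-animal combinations while withholding a few combinations, such as a green giraffe, so they appear for the first time as test queries.

```python
# Illustrative sketch only - not the authors' code or stimuli.
# Shows how a compositional "color x animal" test can be structured:
# meta-training episodes use only combinations seen together before,
# while a few held-out combinations appear solely at evaluation time.

import itertools
import random

random.seed(0)

COLORS = ["red", "green", "blue", "yellow"]      # hypothetical stimuli
ANIMALS = ["giraffe", "zebra", "lion", "panda"]  # hypothetical stimuli

ALL_COMBOS = list(itertools.product(COLORS, ANIMALS))

# Combinations never shown together during meta-training.
HELD_OUT = {("green", "giraffe"), ("blue", "panda")}
TRAIN_COMBOS = [c for c in ALL_COMBOS if c not in HELD_OUT]


def make_episode(pool, n_context=4):
    """One episode: a few labelled in-context examples plus a query."""
    picks = random.sample(pool, n_context + 1)
    context = [((c, a), f"{c} {a}") for c, a in picks[:n_context]]
    qc, qa = picks[n_context]
    return context, (qc, qa), f"{qc} {qa}"


def make_eval_episode(n_context=4):
    """Evaluation episode: context from seen pairs, query from held-out pairs."""
    context = [((c, a), f"{c} {a}") for c, a in random.sample(TRAIN_COMBOS, n_context)]
    qc, qa = random.choice(sorted(HELD_OUT))
    return context, (qc, qa), f"{qc} {qa}"


# A large meta-training stream, mirroring the article's "12,000 similar tasks".
train_episodes = [make_episode(TRAIN_COMBOS) for _ in range(12_000)]
eval_episodes = [make_eval_episode() for _ in range(100)]

print(len(train_episodes), "meta-training episodes;",
      len(eval_episodes), "held-out evaluation episodes")
print("example evaluation query:", eval_episodes[0][1])
```

In a full experiment, a sequence model would be meta-trained to predict each query's label from its in-context examples, so that accuracy on the held-out combinations measures genuine in-context recombination rather than memorization of specific pairs.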

The results suggest that for both humans and AI, quicker, flexible in-context learning arises after a certain amount of incremental learning has taken place. 

“At the first board game, it takes you a while to figure out how to play,” Pavlick said. “By the time you learn your hundredth board game, you can pick up the rules of play quickly, even if you’ve never seen that particular game before.”

The team also found trade-offs, including between learning retention and flexibility: as with humans, the harder it is for the AI to correctly complete a task, the more likely it is to remember how to perform it in the future. According to Frank, who has studied this paradox in humans, this is because errors cue the brain to update information stored in long-term memory, whereas error-free actions learned in context increase flexibility but don’t engage long-term memory in the same way.

For Frank, who specializes in building biologically inspired computational models to understand human learning and decision-making, the team’s work showed how analyzing strengths and weaknesses of different learning strategies in an artificial neural network can offer new insights about the human brain. 

“Our results hold reliably across multiple tasks and bring together disparate aspects of human learning that neuroscientists hadn’t grouped together until now,” Frank said. 

 The work also suggests important considerations for developing intuitive and trustworthy AI tools, particularly in sensitive domains such as mental health.  

 “To have helpful and trustworthy AI assistants, human and AI cognition need to be aware of how each works and the extent that they are different and the same,” Pavlick said. “These findings are a great first step.”

Reference: Russin J, Pavlick E, Frank MJ. Parallel trade-offs in human cognition and neural networks: The dynamic interplay between in-context and in-weight learning. Proc Natl Acad Sci USA. 2025;122(35):e2510270122. doi: 10.1073/pnas.2510270122

This article is a rework of a press release issued by Brown University. Material may have been edited for length and content. For further information, please contact the cited source.






Creating more jobs while transforming work



Artificial intelligence is reshaping employment in ways that challenge basic assumptions about work and human value. While headlines focus on job displacement fears, the data tells a different story: AI will create far more jobs than it eliminates, generating 78 million net new positions globally by 2030.

The World Economic Forum projects that economy-wide trends – including AI adoption, the green transition, and demographic shifts – will create 170 million jobs while displacing 92 million. This isn’t simple technological substitution; it represents entirely new forms of human-machine collaboration that require rethinking the boundaries between human and artificial intelligence.

As AI handles routine cognitive tasks, humans are being pushed toward work demanding creativity, emotional intelligence, and nuanced judgment that remains uniquely human. The question isn’t whether we can adapt – it’s whether we can evolve quickly enough to thrive.

Emergence of human-AI collaboration roles

The most revealing development in AI employment isn’t traditional tech job creation, but roles that exist precisely because humans and machines think differently. Tesla’s AI generalists, commanding salaries from $118,000 to $390,000, represent a new professional category: individuals who translate between artificial and human intelligence.

These roles reveal a deeper truth. Rather than replacing human intelligence, AI is highlighting its uniqueness by contrast. The most valuable workers aren’t those competing with machines at computational tasks, but those complementing artificial intelligence with distinctly human capabilities: contextual understanding, ethical reasoning, and the ability to navigate ambiguity that remains beyond algorithmic reach.

This represents more than new job categories – it’s the emergence of professionals who serve as translators between artificial and human intelligence. Just as social media created community managers who understood both technology and human behavior, AI is creating roles that require fluency in both machine logic and human insight.

Specialized expertise in AI age

The AI job market is rapidly organizing around a crucial insight: as artificial intelligence handles routine analysis, human expertise becomes more specialized and valuable. Apple’s Machine Learning Algorithm Validation Engineers, earning $141,800-$258,600, don’t just test code – they make judgment calls about when AI systems are safe for real-world deployment.

This specialization reflects a broader pattern across industries. AI Security Specialists, commanding salaries from the low six figures to the mid-$200,000s, aren’t just cybersecurity experts – they understand how adversaries might exploit AI systems’ tendency to hallucinate or misinterpret edge cases. Their expertise lies in understanding AI vulnerabilities in ways only human insight can provide.

The educational requirements tell a similar story. While many advanced AI roles still prefer graduate credentials, degree requirements have been easing in AI-exposed jobs since 2019 as employers prioritize skills and portfolios. Companies seek individuals who think critically about AI implications, understand limitations, and make nuanced decisions about deployment and oversight.

Education and the transformation of human development

Educational mobilization around AI reflects recognition that transformation goes beyond job training to fundamental questions about human development. In August 2025, Google announced a three-year, $1 billion commitment to provide AI training and tools to US higher-education institutions and nonprofits.

Some selective, cohort-based AI training programs report completion rates approaching 85 per cent, significantly higher than traditional online courses. This success reflects a deeper truth: effective AI education isn’t about learning to use tools, but developing new ways of thinking that complement rather than compete with artificial intelligence.

The paradox of progress and human value

The most counterintuitive aspect of AI employment transformation may be its effect on human value. As artificial intelligence becomes more capable, skills that remain uniquely human become more precious. Recent analyses find salary premiums for AI skills – around 28 per cent in job postings and up to 56 per cent in cross-country comparisons within occupations.

PwC projects AI could contribute $15.7 trillion to the global economy by 2030, while the International Monetary Fund warns that nearly 40 per cent of global employment faces AI exposure, with advanced economies experiencing approximately 60 per cent exposure. These figures suggest transformation rather than simple displacement – work requiring humans to collaborate with AI systems while providing oversight, creativity, and ethical reasoning that algorithms cannot supply.

The gaming industry exemplifies this paradox. Despite experiencing restructuring-related layoffs, 49 per cent of game development workplaces now use AI tools. Rather than eliminating creative work, AI is pushing human creativity toward higher-level conceptual thinking – story design, emotional narrative, and cultural understanding that gives entertainment meaning rather than just technical competence.

Preparing for fundamental transformation

The research reveals both unprecedented opportunity and profound challenge. While AI creates more jobs than it eliminates, WEF estimates roughly 44 per cent of workers’ skills will be disrupted in the next few years. This suggests transformation beyond retraining to fundamental questions about human adaptability and productive work.

Success stories from early adopters provide valuable insights. Companies implementing comprehensive AI training report significant productivity gains not because humans become more machine-like, but because they learn to leverage AI capabilities while providing uniquely human value.

Adaptation or transformation

The AI employment revolution represents more than technological change; it’s an opportunity to reconsider fundamental assumptions about human potential, work, and value creation. The 78 million net new jobs by 2030 will demand not just new skills but new ways of thinking about intelligence, creativity, and what makes humans irreplaceable.

The geographic and demographic dimensions add complexity that cannot be ignored. Advanced economies face higher AI exposure than emerging markets. In the U.S., 21 per cent of women versus 17 per cent of men work in jobs among the most exposed to AI. The transformation risks exacerbating existing inequalities unless approached with intentional focus on inclusive development and equitable access to AI-era opportunities.

Embracing the transformation thoughtfully

The AI employment revolution offers an unprecedented opportunity to elevate human work beyond routine tasks toward creativity, relationship building, and the kind of meaning-making that defines our species. The infrastructure investments, educational initiatives, and emerging job categories all point toward a future where humans and artificial intelligence collaborate rather than compete.

The choice before us extends beyond managing technological disruption to embracing human potential in an age of artificial minds. By recognizing that AI’s greatest gift may be forcing us to discover what makes us irreplaceably human, we can build a future where technology amplifies rather than diminishes human flourishing.

The 78 million jobs being created aren’t just employment opportunities – they’re invitations to discover new forms of human capability, creativity, and value creation. The workers who answer that invitation thoughtfully, organizations that embrace human-AI collaboration purposefully, and societies that ensure broad access to AI-era opportunities will shape a future where artificial intelligence serves to reveal rather than replace the irreplaceable nature of human intelligence.

That future requires action today – not just in retraining programs or policy frameworks, but in reimagining what it means to be human in an age of artificial minds. The opportunity is unprecedented, and the time for thoughtful transformation is now.

(Krishna Kumar is a Technology Explorer & Strategist based in Austin, Texas, in the US. Rakshitha Reddy is an AI Engineer based in Atlanta, US)



