AI Rebound: The Paradoxical Drop After the AI Lift


I was recently sent a paper in The Lancet Gastroenterology & Hepatology that pulled me in for a closer look. OK, it was about colonoscopies, but the observations apply to artificial intelligence far more broadly. This study followed gastroenterologists using AI to help detect polyps. With the AI running, detection rates improved. And that’s no surprise.

But here’s what got me thinking. When those same doctors went back to working without AI, their detection rates dropped below where they’d been before the technology was introduced.

That’s more than just a dip. It’s what I call “AI rebound,” the paradox where a tool that boosts performance in the moment leaves people worse off when it’s removed.

More Than a Medical Story

If this can happen to highly trained specialists, it’s easy to imagine it in other domains. A driver grows comfortable with Tesla’s Full Self-Driving and finds their reflexes slower in a sudden takeover. A pilot spends most of a flight on autopilot and then has to land manually in bad weather. Even in creative work, I’ve seen writers lose their natural flow after leaning too heavily on a digital assistant.

The pattern seems to be the same from domain to domain. When a system takes over, the human role changes. We’re not “doing” the skill in its full form anymore; we’re supervising, monitoring, or even just waiting for something to go wrong. And while that might feel safer in the moment, I suspect it quietly alters the underlying human dynamics.

The Mechanics of AI Rebound

AI rebound, as I’m calling it, may be related to the “out-of-the-loop” problem that’s seen in conventional automation. When automation handles the details, situational awareness dulls. And in that context, we scan less, anticipate less, and make fewer micro-adjustments. Simply put, the mental models we rely on to navigate complex situations shrink because the system is doing what we once did ourselves.

Over time, this isn’t just about pausing a skill; it may be more akin to erosion. And when the technology steps away, the skill doesn’t simply return to baseline. It can come back lower.

The Lancet study didn’t find that AI was misidentifying polyps or making dangerous errors. It found that without AI, people were less sharp than before they started using it. That’s the paradox, and one with significant implications. And it might be time to question any tool that improves performance while it’s active but degrades the very abilities it was meant to enhance, particularly when that tool is used only intermittently.

Why the Baseline Matters

In high-stakes fields, small changes in performance have real consequences. In medicine, not noticing a small lesion can mean a missed diagnosis. On the road, a half-second delay can turn a near-miss into a collision. In business, hesitation or uncertainty can derail a critical decision.

It’s easy to focus on the gains we see when AI is switched on. But the baseline matters just as much—because that’s where we operate when the tool is absent, fails, or needs to be set aside.

Designing Against the Drop

If AI rebound is a real and measurable risk, the solution isn’t to avoid AI but to integrate it in a way that preserves core human competence. And the potential fix might be as simple as making a few adjustments in the way we use technology.

  • Mix AI-on and AI-off sessions so people continue practicing their full skill set.
  • Highlight human-first decision-making with appropriate AI support.
  • Incorporate regular takeover drills where speed and accuracy are measured without AI assistance.
  • Track and reward unaided performance alongside AI-assisted results.

These are not just technical fixes; they’re design choices that keep humans engaged as active participants rather than passive overseers. And by tracking both the upside and the downside, organizations can build AI-augmented skills while reducing the risk of AI rebound.
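
To make the last two bullets concrete, here’s a minimal sketch of what mixing AI-on and AI-off sessions and tracking unaided performance could look like in code. This is purely illustrative, not anything from the Lancet study: the `SkillTracker` name, the one-in-four AI-off ratio, and the placeholder scores are all my own assumptions.

```python
import random
from statistics import mean
from dataclasses import dataclass, field

@dataclass
class SkillTracker:
    """Mixes AI-on and AI-off sessions and tracks both performance baselines."""
    ai_off_ratio: float = 0.25  # assumed policy: roughly 1 in 4 sessions run unaided
    assisted: list = field(default_factory=list)
    unaided: list = field(default_factory=list)

    def next_session_uses_ai(self) -> bool:
        # Randomizing the mix keeps people from anticipating AI-off sessions.
        return random.random() >= self.ai_off_ratio

    def record(self, score: float, ai_assisted: bool) -> None:
        # File each session's score under the right baseline.
        (self.assisted if ai_assisted else self.unaided).append(score)

    def rebound_gap(self):
        """Mean assisted score minus mean unaided score.
        A widening gap is an early warning sign of AI rebound."""
        if not self.assisted or not self.unaided:
            return None
        return mean(self.assisted) - mean(self.unaided)

if __name__ == "__main__":
    tracker = SkillTracker()
    for _ in range(200):
        with_ai = tracker.next_session_uses_ai()
        # Placeholder scores only: assisted sessions score a bit higher on average.
        score = random.gauss(0.80 if with_ai else 0.70, 0.05)
        tracker.record(score, ai_assisted=with_ai)
    print(f"assisted/unaided gap: {tracker.rebound_gap():.3f}")
```

In practice the scores would come from real sessions, such as detection rates with and without AI. The point is simply that the unaided baseline gets measured on purpose rather than discovered by accident.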

Caution and Opportunity

AI rebound isn’t about fearing automation or clinging to the old ways of working. It’s about understanding how technology shapes our capabilities over time. And, as it does, taking steps to make sure that what we gain today doesn’t undermine us tomorrow.

This gastroenterology paper is a clear example of how easily this can happen. The doctors in that trial didn’t lose their medical degrees or their experience, but their sharpness dipped when the AI was gone. That’s a subtle, almost invisible shift, until it matters.

Today, the opportunity to name it, measure it, and design against it should be on our radar—from the operating room to the classroom. Because the day will come when the machine goes quiet, and our performance will depend on what we’ve kept alive in ourselves.




xAI lays off 500 AI tutors working on Grok


Elon Musk’s artificial intelligence startup xAI has laid off 500 workers from its data annotation team, which helps train its Grok chatbot.

The layoffs were earlier reported by Business Insider.

The AI company notified employees over email that it was planning to downsize its team of generalist AI tutors, according to messages viewed by the publication. The company said the “strategic pivot” meant prioritizing specialist AI tutors, while scaling back its focus on general AI tutor roles.

In response to the story, xAI directed reporters to a post on X, in which the company said it plans to expand its specialist AI tutor team by “10X” and intends to open roles on its careers page.

The human data annotator team at xAI plays a key role in teaching Grok to understand the world by labeling, contextualizing, and categorizing raw data used to train the chatbot. The email sent by xAI said that laid-off workers would be paid through either the end of their contract or Nov. 30, but their access to company systems would be terminated the day of the layoff notice.

Prior to the layoff, xAI’s data annotation team was one of the company’s largest, with 1,500 full-time and contract staff members, including AI tutors. The reorganization of the data annotation team comes on the heels of a leadership shake-up that reportedly saw nine employees exit the firm last week.

As a sign of its changing approach to training Grok, xAI on Thursday asked some of the AI tutors to prepare for tests, Business Insider reported. The tests covered traditional domains such as STEM, coding, finance, and medicine, as well as quirkier specialties such as Grok’s “personality and model behavior” and “doomscrollers.”

Musk launched xAI in 2023 to compete with OpenAI and Google DeepMind in the race to build advanced AI. He introduced Grok as a safe and truthful alternative to what he accused competitors of building: “woke” chatbots prone to censorship.




Google’s newest AI datacenter & its monstrous CO2 emissions



The impact of the rise of AI on the environment is a very real concern, and it’s not one that’s going away in a hurry. Especially not when Google’s planned new datacenter in the UK looks set to emit as much carbon dioxide in a year as hundreds of flights a week would.

It comes via a report from The Guardian, which has seen the plans for the new facility and its carbon impact assessment.




China doubts artificial intelligence use in submarines



by Alimat Aliyeva

The integration of artificial intelligence into submarine warfare may cut a crew’s chances of survival to as little as 5%, according to a new report by the South China Morning Post (SCMP), citing a study led by Meng Hao, a senior engineer at the Chinese Institute of Helicopter Research and Development, Azernews reports.

Researchers analyzed an advanced anti-submarine warfare (ASW) system enhanced by AI, which is designed to detect and track even the stealthiest submarines. The system relies on real-time intelligent decision-making, allowing it to respond rapidly and adaptively to underwater threats. According to the study, only one out of twenty submarines may be able to avoid detection and attack under such conditions, a major shift in naval combat dynamics.

“As global powers accelerate the militarization of AI, this study suggests the era of ‘invisible’ submarines — long considered the backbone of strategic deterrence — may be drawing to a close,” SCMP notes.

Historically, stealth has been a submarine’s most valuable asset, allowing it to operate undetected and deter adversaries through uncertainty. However, the rise of AI-enabled systems threatens to upend this balance by minimizing human response delays, analyzing massive data sets, and predicting submarine behavior with unprecedented precision.

The implications extend far beyond underwater warfare. In August, Nick Wakeman, editor-in-chief of Defense One, reported that the U.S. Army is also exploring AI for use in air operations control systems. AI could enhance resilience to electronic warfare, enable better integration of drones, and support the deployment of autonomous combat platforms in contested airspace.

The growing role of AI in modern militaries, from the seabed to the stratosphere, raises new questions not only about tactical advantage, but also about ethical decision-making, autonomous weapons control, and the future of human involvement in combat scenarios.

As nations continue investing in next-generation warfare technology, experts warn that AI may not just change how wars are fought; it could redefine what survivability means on the modern battlefield.


