

There is No Such Thing as Artificial Intelligence – Nathan Beacom



One man tried to kill a cop with a butcher knife because, he believed, OpenAI had killed his lover. A 29-year-old mother became violent toward her husband when he suggested that her relationship with ChatGPT was not real. A 41-year-old mother, now single, split with her husband after he became consumed by chatbot conversations and developed bizarre paranoia and conspiracy theories.

These stories, reported by the New York Times and Rolling Stone, represent the frightening far end of the spectrum of chatbot-induced madness. How many people, we might wonder, are quietly losing their minds because they’ve turned to chatbots as a salve for loneliness or frustrated romantic desire?




Apple Supplier Lens Tech Said to Price $607 Million Hong Kong Listing at Top of Range





Apple Inc. supplier Lens Technology Co. has raised HK$4.8 billion ($607 million) after pricing its Hong Kong listing at the top of the marketed range, according to people familiar with the matter.




The Cognitive Cost Of AI-Assisted Learning – Analysis – Eurasia Review



A decade ago, if someone had claimed machines would soon draft essays, debug code, and explain complex theories in seconds, the idea might have sounded like science fiction. Today, artificial intelligence is doing all of this and more. Large Language Models (LLMs) like ChatGPT have transformed how information is consumed, processed, and reproduced. But as the world becomes more comfortable outsourcing intellectual labor, serious questions are emerging about what this means for human cognition.

It isn’t a doomsday scenario, at least not yet. But mounting research suggests there may be cognitive consequences to the growing dependence on AI tools, particularly in academic and intellectual spaces. The concern isn’t that these tools are inherently harmful, but rather that they change the mental labor required to learn, think, and remember. When answers are pre-packaged and polished, the effort that usually goes into connecting ideas, analyzing possibilities, or struggling through uncertainty may quietly fade away.

A recent study conducted by researchers at the MIT Media Lab helps illustrate this. Fifty-four college students were asked to write short essays under three conditions: using only their brains, using the internet without AI, or using ChatGPT freely. Participants wore EEG headsets to monitor brain activity. The results were striking. Those who relied on their own cognition or basic online searches showed higher brain connectivity in regions tied to attention, memory retrieval, and creativity. In contrast, those who used ChatGPT showed reduced neural activity. Even more concerning: these same students often struggled to recall what they had written.
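For readers who want a concrete picture of what such a comparison involves, the core statistical step reduces to testing whether mean connectivity differs across the three conditions. The sketch below is illustrative only: the scores are invented placeholders, not the MIT Media Lab data, and a real EEG pipeline involves far more preprocessing before any numbers like these exist.

```python
import numpy as np
from scipy import stats

# Hypothetical per-participant connectivity scores (arbitrary units) for the
# three essay-writing conditions. These values are invented for illustration;
# they are not drawn from the MIT study.
brain_only  = np.array([0.72, 0.68, 0.75, 0.70, 0.66])
search_only = np.array([0.61, 0.64, 0.59, 0.63, 0.60])
chatgpt     = np.array([0.48, 0.51, 0.45, 0.50, 0.47])

# One-way ANOVA: do mean connectivity levels differ across the conditions?
f_stat, p_value = stats.f_oneway(brain_only, search_only, chatgpt)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```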

This finding echoes a deeper pattern. In “The Shallows: What the Internet Is Doing to Our Brains,” Nicholas Carr argues that technologies designed to simplify access to information can also erode our ability to engage deeply with that information. Carr’s thesis, originally framed around search engines and social media, gains renewed relevance in an era where even thinking can be automated.

AI tools have democratized knowledge, no doubt. A student confused by a math problem or an executive drafting a report can now receive tailored, well-articulated responses in moments. But this ease may come at the cost of originality. According to the same MIT study, responses generated with the help of LLMs tended to converge on generic answers. When participants were asked subjective questions like “What does happiness look like?”, their essays often landed in a narrow band of bland, agreeable sentiment. It’s not hard to see why: LLMs are trained to produce outputs that reflect the statistical average of billions of human texts.
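The pull toward the average is easy to demonstrate in miniature. The toy sketch below uses an invented five-option vocabulary and made-up logits standing in for a real model’s output; it shows how softmax sampling at the low temperatures typical of chat deployments concentrates probability on the most statistically common continuations.

```python
import numpy as np

# Invented next-token options for "Happiness looks like ...", with made-up
# logits. Common phrasings score highest, as they would in a model trained
# on the statistical bulk of human text.
tokens = ["family", "sunshine", "purpose", "a quiet rebellion", "gradients of longing"]
logits = np.array([3.0, 2.8, 2.5, 0.5, 0.1])

def token_probs(logits, temperature):
    """Softmax with temperature: lower T sharpens toward the modal answer."""
    scaled = logits / temperature
    exp = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    return exp / exp.sum()

for t in (1.0, 0.7, 0.2):
    probs = token_probs(logits, t)
    summary = ", ".join(f"{tok}: {p:.2f}" for tok, p in zip(tokens, probs))
    print(f"temperature {t}: {summary}")
```

Run it and the unusual phrasings all but vanish as the temperature drops: the mechanical version of the narrow band of agreeable sentiment the study observed.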

This trend toward homogenization poses philosophical as well as cognitive challenges. In “The Age of Surveillance Capitalism,” Shoshana Zuboff warns that as technology becomes more capable of predicting human behavior, it also exerts influence over it. If the answers generated by AI reflect the statistical mean, then users may increasingly absorb, adopt, and regurgitate those same answers, reinforcing the very patterns that machines predict.

The concern isn’t just about bland writing or mediocre ideas. It’s about losing the friction that makes learning meaningful. In “Make It Stick: The Science of Successful Learning,” Brown, Roediger, and McDaniel emphasize that learning happens most effectively when it involves effort, retrieval, and struggle. When a student bypasses the challenge and lets a machine produce the answer, the brain misses out on the very processes that cement understanding.

That doesn’t mean AI is always a cognitive dead-end. Used wisely, it can be a powerful amplifier. The same MIT study found that participants who first engaged with a prompt using their own thinking and later used AI to enhance their responses actually showed higher neural connectivity than those who only used AI. In short, starting with your brain and then inviting AI to the table might be a productive partnership. Starting with AI and skipping the thinking altogether is where the danger lies.

Historically, humans have always offloaded certain cognitive tasks to tools. In “Cognition in the Wild,” Edwin Hutchins shows how navigation in the Navy is a collective, tool-mediated process that extends individual cognition across people and systems. Writing, calculators, calendars, even GPS—these are all examples of external aids that relieve our mental burden. But LLMs are different in kind. They don’t just hold information or perform calculations; they construct thoughts, arguments, and narratives—the very outputs we once considered evidence of human intellect.

The worry becomes more acute in educational settings. A Harvard study published earlier this year found that while generative AI made workers feel more productive, it also left them less motivated. This emotional disengagement is subtle, but significant. If students begin to feel they no longer own their ideas or creations, motivation to learn may gradually erode. In “Deep Work,” Cal Newport discusses how focus and effort are central to intellectual development. Outsourcing too much of that effort risks undermining not just skills, but confidence and identity.

Cognitive offloading isn’t new, but the scale and intimacy of AI assistance is unprecedented. Carnegie Mellon researchers recently described how relying on AI tools for decision-making can leave minds “atrophied and unprepared.” Their concern wasn’t that these tools fail, but that they work too well. The smoother the experience, the fewer opportunities the brain has to engage. Over time, this could dull the mental sharpness that comes from grappling with ambiguity or constructing arguments from scratch.

Of course, there’s nuance. Not all AI use is equal, and not all users will be affected in the same way. A senior using a digital assistant to remember appointments is not the same as a student using ChatGPT to write a philosophy paper. As “Digital Minimalism” by Cal Newport suggests, it’s not the presence of technology, but the purpose and structure of its use that determines its impact.

Some might argue that concerns about brain rot echo earlier panics. People once feared that writing would erode memory, that newspapers would stunt critical thinking, or that television would replace reading altogether. And yet, society adapted. But the difference now lies in the depth of substitution. Where earlier technologies altered the way information was delivered, LLMs risk altering the way ideas are born.

The road forward is not to abandon AI, but to treat it with caution. Educators, researchers, and developers need to think seriously about how these tools are integrated into daily life, especially in formative contexts. Transparency, guided usage, and perhaps even deliberate “AI-free zones” in education could help preserve the mental muscles that matter.

In the end, the question is not whether AI will shape how people think. It already is. The better question is whether those changes will leave future generations sharper, or simply more efficient at being average.

References

  • Carr, N. (2010). The Shallows: What the Internet Is Doing to Our Brains. W. W. Norton.
  • Zuboff, S. (2019). The Age of Surveillance Capitalism. PublicAffairs.
  • Brown, P. C., Roediger, H. L., & McDaniel, M. A. (2014). Make It Stick: The Science of Successful Learning. Belknap Press.
  • Hutchins, E. (1995). Cognition in the Wild. MIT Press.
  • Newport, C. (2016). Deep Work: Rules for Focused Success in a Distracted World. Grand Central.
  • Newport, C. (2019). Digital Minimalism: Choosing a Focused Life in a Noisy World. Portfolio.
  • Daugherty, P. R., & Wilson, H. J. (2018). Human + Machine: Reimagining Work in the Age of AI. Harvard Business Review Press.


