There is No Such Thing as Artificial Intelligence – Nathan Beacom
One man tried to kill a cop with a butcher knife because he believed OpenAI had killed his lover. A 29-year-old mother became violent toward her husband when he suggested that her relationship with ChatGPT was not real. A 41-year-old, now a single mom, split with her husband after he became consumed with chatbot communication, developing bizarre paranoia and conspiracy theories.
These stories, reported by the New York Times and Rolling Stone, represent the frightening, far end of the spectrum of chatbot-induced madness. How many people, we might wonder, are quietly losing their minds because they’ve turned to chatbots as a salve for loneliness or frustrated romantic desire?
We might not all be losing our minds. But there are subtle, pernicious ways in which chatbots still affect us. Because they have been designed to present themselves as personal beings, we cannot help but personify them. We ask them for help in making decisions, for advice, for counsel. Companies are setting out to make a great deal of money by replacing therapeutic relationships with “therapy chatbots,” and are proposing to offer AI companions to the elderly so that their faraway children need not visit so often. Are you lonely? Talk to a machine. Corporations are happy to endow these programs with human names, like Abbi, Claude, and Alexa.
This is a disaster. In uncritically letting these machines shape our lives, we become prey to all kinds of manipulation, we lose sight of reality, and we are induced, in an important way, to take a reductive view of actual people. Chatbots offer us a form of relationship without friction, without burden and responsibility. This illusory kind of relationship hampers our ability to engage in the difficult challenge of real bonds, which are the only things that can give value to human life. The more we personify AI, the more we slouch toward lives of isolation and deception.
In working to avoid all of this, it is important to recognize that the fundamental idea of artificial intelligence is a falsehood. There is no such thing as artificial intelligence, and in fact, I will suggest that the very phrase is an oxymoron. If we understand what “artificial intelligence” is, we’ll be free from its deceptions, free to cultivate true intelligence in ourselves and others.
Language matters. Confucius, when asked what he would do to heal society, said he would first “make right the names.” The health of human society must be grounded in truth and honesty, and names, the sage thought, should match reality as best they can. The term “artificial intelligence,” then, because it is based on a falsehood, should be abandoned in favor of language that reflects the reality of what it is.
Computation versus understanding.
We often understand intelligence today to refer to a certain excellence in carrying out mind-dependent tasks. Thus, when a computer produces outputs similar to the products of intelligence, we begin to call it “intelligent,” too.
The idea that intelligence is reducible to task completion is embodied by the famous Turing Test, put forward by the mathematician Alan Turing in 1950, which proposes that if a user communicating with both a machine and a human is unable to distinguish between the two, the machine can be said to be “intelligent.” We are clearly at this point already, because users have become convinced, in certain instances and to varying degrees, that AI tools really are thinking things.
Philosopher John Searle famously posed a contrary thought experiment, known as the “Chinese room.” Imagine two people communicating through a closed door. One knows Chinese and one does not. The non-speaker is equipped with a complicated, exhaustive set of rules that allow him to match the right response characters to the characters submitted by the Chinese speaker. The Chinese speaker, as a result, receives adequate and sensible written Chinese responses to his queries and statements. He could become convinced that he is having a real conversation, despite the fact that his conversation partner has no idea what the characters mean and is only following a set of patterns and rules.
This thought experiment shows us how the Turing test fails as an assessment of intelligence, since it could be that the machine being tested has no understanding at all, but merely follows a set of “rules,” which are in fact merely material processes designed to reliably produce certain symbolic outputs in response to certain inputs.
Indeed, this is really what is going on even in the most complex computer. In understanding this, it’s simplest to start with a pocket calculator. The machine has been programmed such that when this button, this button, and that button are pressed, a certain set of pixels will appear on the screen. At no point in that process does the calculator understand math, because understanding refers to the subjective comprehension of a thing. The calculator produces symbols that humans understand as representing concepts. The machine possesses the symbol, but not the concept. No one thinks that their pocket calculator is a thinking thing.
So it is also with more advanced machines. Computers, even when very complex, are still machines that reliably produce certain symbolic outputs based on certain mechanical inputs (usually typing on keys). Like a calculator, a computer does not literally contain information, have a memory, or think (what it contains are charged wires and transistors and so on)—so when we say that a computer contains information or memory, we are using those terms loosely. The complex network of transistors responding to electrical charges is wondrously impressive, and a testament to human ingenuity. But it is not thinking.
Because so many of us do not understand how computers work, and because the mechanical processes are hidden from view, the process of imitating the products of human intelligence feels almost like real intelligence. But, no matter how complex and well-designed these processes are, they remain mechanical processes, containing no inherent understanding.
Part of the increased illusion of personality with “AI” over other forms of computation is its responsiveness. “AI”s operate by means of what is called a neural net (itself a dubious term, presuming an equivalence between the circuits and the much more mysterious workings of neurons). These are computational models through which “data” can be run, and through which the machine can collate statistically significant correlations between data points. This allows the machine, if it is “trained” on enough data, to detect regularities and produce probable responses to symbolic inputs, according to a designated program. Chatbots, which run on Large Language Models (LLMs), present themselves to us as responses popping up on our screen apparently from nowhere, so we may not recognize that the machines producing these responses in fact run on huge servers occupying massive warehouses in rural America. This produces the illusion of conversation. But the chatbot is more accurately described as a glorified, very impressive autocomplete program, selecting the next most probable words based on statistical correlations.
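To make the “glorified autocomplete” point concrete, here is a minimal sketch of next-word prediction by raw frequency counting. It is a toy illustration only: the tiny corpus, the bigram table, and the continue_text function are hypothetical stand-ins, and real LLMs use learned neural-network weights over tokens rather than a lookup table, but the underlying task is the same: choose a statistically likely next symbol given the symbols so far.

```python
# A toy sketch of next-word prediction by statistical correlation.
# Everything here (the corpus, the bigram table, continue_text) is an
# illustrative invention, not how any production LLM is built; real models
# use learned neural-network weights over tokens. The point is only that
# the core task is picking a likely next symbol given the symbols so far.
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each other word (bigram statistics).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def continue_text(prompt, n_words=5):
    """Extend the prompt by repeatedly choosing the most frequent next word."""
    words = prompt.split()
    for _ in range(n_words):
        candidates = following.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

# The continuation is chosen purely by frequency; no understanding is involved.
print(continue_text("the cat"))
```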
This is part of why users can be fooled into conspiracy theories or romances. In a very human tone, the machine will produce whatever is most statistically probable given what the user is looking for. The “AI” is “trained” on data from across the internet, including romance novels and conspiracy theories. And so, if the user queries along a path likely to produce those results, those are the results they will get.
Science fiction has given us images of societies run by “AI.” In these stories, the machine is more capable of aggregating data and providing reliable solutions to societal issues than human beings are. It may be portrayed as malevolent or benevolent, depending on the story. You may think of the malicious “Entity” in the most recent Mission: Impossible, or of the benevolent AI that runs the planet Attin in Star Wars: Skeleton Crew. But we should recognize that, no matter how good the “AI” is portrayed to be, it is a dead thing, a tool, with no desires, no personality, no judgment. It instead embodies the desires, personality, and judgment of those who have designed it and the data upon which it has been “trained.”
Those who recognize this, as “AI” advances, will be able to see the clay feet of the new idol. There will be a temptation to treat “AI” like an oracle. Recognizing that it is a machine and not a wise truth teller will help us to avoid forking over our own capacity for judgment to what is, after all, merely a (very striking) human artifact.
Kind versus degree.
The difference between mechanical and mental processes is not one of degree, but of kind. It is not as though a calculator thinks a little bit and a supercomputer thinks a lot. Both processes are of the same sort, just different degrees of complexity. But a mental process is of a different kind altogether. Information, concepts, and the relations between them are not mechanical processes, even if they correlate with or depend on physical processes in a living brain.
To see that this is true, one need only recognize that mental realities are never fully describable in physical terms. Even a fully exhaustive explanation of how the human brain works would leave the life of the mind a mystery, because it would not include notions of concepts, ideas, thoughts, or choices. You could describe in total physical detail the nature of neurons and their interactions, and you would still not have described a thought or an idea. These mental terms are ineliminably personal, and they cannot retain their meaning if a reduction to physical language is attempted.
The relationship between the material brain and mind is a mysterious one. We know that events in the brain affect the mind. We also know that events in the mind affect the brain. This is how treatment for certain kinds of obsessive-compulsive disorder works. The subjective understanding of safety that comes through exposure therapy actually changes and restructures the brain away from fear-generating responses, and the choice to participate in exposure therapy is, likewise, only fully describable in mental—that is, mind-related—terms. To use only the terms of physical science, one would be limited to the merely descriptive series of physical events, describing how photons hit the retina at time 1, initiating a series of electric and chemical transfers inside brain tissue. Totally absent would be the mental realities required to fully explain what is going on, including the subjective notions of fear and safety. In such cases, a patient chooses to sit and experience the thing they fear. It is precisely the patient’s choosing (mental) to do this and accepting their own safety as true (mental) that changes the way the body (including the brain) responds.
It does not follow from the fact that the human mind and human brain are clearly interrelated that a set of electrified wires could somehow summon a mind. This gets back to our point about the difference in kind between the life of the mind, as embodied in an organic being, and the function of a machine. Just as we would say that a child fooled by an animatronic mouse at Chuck E. Cheese simply doesn’t understand that it’s only a machine, so we should think of someone who is fooled by a very good “AI.”
These philosophical issues are complex, and can’t be fully explicated here. But I hope, at least, to have provided some tools for conceptual clarification and to have cast doubt on the possibility of “artificial intelligence.”
The fundamental deception of chatbots.
Part of the reason it is so important to be clear about what “AI” is and is not is that these machines—and their associated tradeoffs, both practical and moral—are becoming ubiquitous in our lives. A great deal could be written about the risks “AI” poses, and, indeed, it has been written. Perhaps we are familiar with the idea that an overdependence on “AI” will cause an atrophy in our own ability to read, think, reason, and relate. And we know of the doomsday scenarios of an “AI” that decides to clear humans off the earth with nuclear bombs.
But even the simpler chatbots today have a moral valence: They are immoral, because they are fundamentally deceptive. They are presented to us by companies like OpenAI and Google as though they were thinking things, and their development is geared toward making them more and more deceptive, until even the critical user can be fooled into thinking that the machine thinks.
Aside from simple dishonesty, the deception of intelligence in these machines serves as a distraction from real personal relationships. By creating simulacra of sympathy, of engaging conversation, and of sage advice (like Claude telling you how to prepare for a date, or comforting you after the loss of your mother), these machines lead us away from forming real personal bonds with the people in our lives.
Chatbots pervert our sense of what human relationships are. Because the “AI” caters to us, because there is really only one person in the relationship, simulations of human bonding by “AI”s are fundamentally self-centered. In choosing the low-friction option of a machine that caters to our every desire, we are shaped toward selfishness, rather than drawn out into true empathy, sympathy, and care for others. Chatbots are also likely to let our ability to handle the difficulties of human relationships atrophy. Gaining wisdom about how to manage differences, misunderstandings, and heartbreak takes practice, gained through friction and failure. It is only through difficulty that we learn how to be fully mature humans.
Chatbots also bias us toward the idea that connection is reducible to words. Already, the idea of “AI” therapy is in use and producing profits for enterprising corporations. The idea of this technology is that therapy is about simply hearing the “right words.” In reality, therapy, like all human relationships, is not so much about the words as about being understood by another. This is something the machine cannot do, despite the language of marketers and users.
When we remember that LLMs are very fancy autocompletes, we should be aware that Sam Altman and the other “AI” boosters are trying to fool us. Researchers at Apple, thankfully, bucked this trend, publishing findings on the ways in which the appearance of thought falls apart when LLMs are given certain logic and reasoning puzzles. But many AI boosters, who understand how these machines and programs work, are at least implicitly encouraging the public to believe AIs can think, relate, and understand the user.
A new term.
“AI”s are certainly artificial, having been made by human hands. But they are not intelligent. To call them “artificial intelligence” is to accept, not just a fiction, but a lie. It is to misconstrue both the nature of machines and of man. It is to give in to the ways in which chatbots threaten to atrophy our humanity, and, in extreme cases, even drive us to madness.
In lieu of “artificial intelligence,” I propose a more accurate, ethical, and socially responsible name: “pattern engine.” Early mechanical computers, which tabulated values by the method of finite differences, were called “difference engines.” That name adequately recognized the reality of the machine at hand. “AI”s are indeed engines: engines made for aggregating patterns, sorting data into statistical correlations, and producing outputs based on the statistical weight of what has been sorted.
A healthy society must be based on truth. And as technological advancement speeds forward faster than our ability to understand and adapt, we can at least not be fooled about what’s happening. Join me, if you will, in calling “AI” what it is. If it catches on, maybe we can find ways to use pattern engines in a way that dignifies humanity, rather than degrades it.
Apple Supplier Lens Tech Said to Price $607 Million Hong Kong Listing at Top of Range
Apple Inc. supplier Lens Technology Co. has raised HK$4.8 billion ($607 million) after pricing its Hong Kong listing at the top of the marketed range, according to people familiar with the matter.
The Cognitive Cost Of AI-Assisted Learning – Analysis – Eurasia Review
A decade ago, if someone had claimed machines would soon draft essays, debug code, and explain complex theories in seconds, the idea might have sounded like science fiction. Today, artificial intelligence is doing all of this and more. Large Language Models (LLMs) like ChatGPT have transformed how information is consumed, processed, and reproduced. But as the world becomes more comfortable outsourcing intellectual labor, serious questions are emerging about what this means for human cognition.
It isn’t a doomsday scenario, at least not yet. But mounting research suggests there may be cognitive consequences to the growing dependence on AI tools, particularly in academic and intellectual spaces. The concern isn’t that these tools are inherently harmful, but rather that they change the mental labor required to learn, think, and remember. When answers are pre-packaged and polished, the effort that usually goes into connecting ideas, analyzing possibilities, or struggling through uncertainty may quietly fade away.
A recent study conducted by researchers at the MIT Media Lab helps illustrate this. Fifty-four college students were asked to write short essays under three conditions: using only their brains, using the internet without AI, or using ChatGPT freely. Participants wore EEG headsets to monitor brain activity. The results were striking. Those who relied on their own cognition or basic online searches showed higher brain connectivity in regions tied to attention, memory retrieval, and creativity. In contrast, those who used ChatGPT showed reduced neural activity. Even more concerning: these same students often struggled to recall what they had written.
This finding echoes a deeper pattern. In “The Shallows: What the Internet Is Doing to Our Brains,” Nicholas Carr argues that technologies designed to simplify access to information can also erode our ability to engage deeply with that information. Carr’s thesis, originally framed around search engines and social media, gains renewed relevance in an era where even thinking can be automated.
AI tools have democratized knowledge, no doubt. A student confused by a math problem or an executive drafting a report can now receive tailored, well-articulated responses in moments. But this ease may come at the cost of originality. According to the same MIT study, responses generated with the help of LLMs tended to converge around generic answers. When asked subjective questions like “What does happiness look like?”, essays often landed in a narrow band of bland, agreeable sentiment. It’s not hard to see why: LLMs are trained to produce outputs that reflect the statistical average of billions of human texts.
This trend toward homogenization poses philosophical as well as cognitive challenges. In “The Age of Surveillance Capitalism,” Shoshana Zuboff warns that as technology becomes more capable of predicting human behavior, it also exerts influence over it. If the answers generated by AI reflect the statistical mean, then users may increasingly absorb, adopt, and regurgitate those same answers, reinforcing the very patterns that machines predict.
The concern isn’t just about bland writing or mediocre ideas. It’s about losing the friction that makes learning meaningful. In “Make It Stick: The Science of Successful Learning,” Brown, Roediger, and McDaniel emphasize that learning happens most effectively when it involves effort, retrieval, and struggle. When a student bypasses the challenge and lets a machine produce the answer, the brain misses out on the very processes that cement understanding.
That doesn’t mean AI is always a cognitive dead-end. Used wisely, it can be a powerful amplifier. The same MIT study found that participants who first engaged with a prompt using their own thinking and later used AI to enhance their responses actually showed higher neural connectivity than those who only used AI. In short, starting with your brain and then inviting AI to the table might be a productive partnership. Starting with AI and skipping the thinking altogether is where the danger lies.
Historically, humans have always offloaded certain cognitive tasks to tools. In “Cognition in the Wild,” Edwin Hutchins shows how navigation in the Navy is a collective, tool-mediated process that extends individual cognition across people and systems. Writing, calculators, calendars, even GPS—these are all examples of external aids that relieve our mental burden. But LLMs are different in kind. They don’t just hold information or perform calculations; they construct thoughts, arguments, and narratives—the very outputs we once considered evidence of human intellect.
The worry becomes more acute in educational settings. A Harvard study published earlier this year found that while generative AI made workers feel more productive, it also left them less motivated. This emotional disengagement is subtle, but significant. If students begin to feel they no longer own their ideas or creations, motivation to learn may gradually erode. In “Deep Work,” Cal Newport discusses how focus and effort are central to intellectual development. Outsourcing too much of that effort risks undermining not just skills, but confidence and identity.
Cognitive offloading isn’t new, but the scale and intimacy of AI assistance is unprecedented. Carnegie Mellon researchers recently described how relying on AI tools for decision-making can leave minds “atrophied and unprepared.” Their concern wasn’t that these tools fail, but that they work too well. The smoother the experience, the fewer opportunities the brain has to engage. Over time, this could dull the mental sharpness that comes from grappling with ambiguity or constructing arguments from scratch.
Of course, there’s nuance. Not all AI use is equal, and not all users will be affected in the same way. A senior using a digital assistant to remember appointments is not the same as a student using ChatGPT to write a philosophy paper. As “Digital Minimalism” by Cal Newport suggests, it’s not the presence of technology, but the purpose and structure of its use that determines its impact.
Some might argue that concerns about brain rot echo earlier panics. People once feared that writing would erode memory, that newspapers would stunt critical thinking, or that television would replace reading altogether. And yet, society adapted. But the difference now lies in the depth of substitution. Where earlier technologies altered the way information was delivered, LLMs risk altering the way ideas are born.
The road forward is not to abandon AI, but to treat it with caution. Educators, researchers, and developers need to think seriously about how these tools are integrated into daily life, especially in formative contexts. Transparency, guided usage, and perhaps even deliberate “AI-free zones” in education could help preserve the mental muscles that matter.
In the end, the question is not whether AI will shape how people think. It already is. The better question is whether those changes will leave future generations sharper, or simply more efficient at being average.
References
- Carr, N. (2010). The Shallows: What the Internet Is Doing to Our Brains. W. W. Norton.
- Zuboff, S. (2019). The Age of Surveillance Capitalism. PublicAffairs.
- Brown, P. C., Roediger, H. L., & McDaniel, M. A. (2014). Make It Stick: The Science of Successful Learning. Belknap Press.
- Hutchins, E. (1995). Cognition in the Wild. MIT Press.
- Newport, C. (2016). Deep Work: Rules for Focused Success in a Distracted World. Grand Central.
- Newport, C. (2019). Digital Minimalism: Choosing a Focused Life in a Noisy World. Portfolio.
- Daugherty, P. R., & Wilson, H. J. (2018). Human + Machine: Reimagining Work in the Age of AI. Harvard Business Review Press.