
AI’s Hidden Geometry of Thought


We’ve spent the last few years marveling at how AI tools seem to think: with us, for us, and in some curious cognitive register I’ve struggled to put my finger on. I’ve pushed on these bounds, even calling artificial intelligence something antithetical to human thinking: anti-intelligence. These models complete our sentences, summarize our thoughts, generate prose, suggest decisions, and even pass for emotionally aware. But something isn’t sitting right. The more I push into this statistically rigid yet ambiguous space, the clearer it becomes that these systems aren’t thinking like us at all. They’re doing something else entirely.

Yes, it’s tempting to anthropomorphize AI, and in many instances it seems inevitable. The conversation naturally shifts to framing AI as a synthetic mind, one built in our image, or perhaps in our shadow. But the deeper truth, at least from my perspective, is more disorienting, perhaps even freakish. These models don’t reflect human cognition; they reflect something we haven’t named yet, something that feels like a mathematical terrain that doesn’t map to our human experience of thought. And as much as we try to draw the map from our “flatland” perspective, we can’t.

That terrain is what some are calling the alien substrate. And I agree.

This Isn’t Human-Like Intelligence

Let’s start with a simple point. Most large language models today, especially so-called frontier models like ChatGPT and Grok, operate in embedding spaces with over 12,000 dimensions. That number isn’t cosmetic, and it frames my perspective: it’s the number of abstract axes along which meaning, coherence, and association are encoded.

To put that in perspective, your lived experience happens in three physical dimensions plus time, and your internal cognitive modeling might stretch that a little: perhaps six or seven dimensions of emotion, memory, and attention, depending on the task. But 12,000? That’s not evolution; that’s vast computation. And it produces a kind of intelligence that doesn’t feel like anything. It just sorta works.

These systems aren’t building models of the world the way we do. They aren’t theorizing or interpreting or guessing. They’re locating positions in a hyperdimensional semantic field, where proximity means probability and distance erodes coherence. When a model “predicts” your next word or “completes” a thought, it does so by collapsing a statistical wave function in this incomprehensible geometric space. Remember, it’s not thinking, just selecting from a geometry that “seeks” a kind of linguistic stability.
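
To make this concrete, here is a deliberately tiny numpy sketch of that selection step: score every candidate token by its closeness to a context vector, turn the scores into probabilities, and sample. Everything in it is a toy stand-in; a real frontier model works with tens of thousands of tokens across those 12,000-plus dimensions and computes the context vector from your actual prompt rather than drawing it at random.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a model's token-embedding matrix: 8 tokens living in a
# 16-dimensional space. A real frontier model has tens of thousands of
# tokens and roughly 12,000+ dimensions.
vocab = ["the", "cat", "sat", "on", "a", "mat", "dog", "ran"]
E = rng.normal(size=(len(vocab), 16))

# A "context vector" summarizing the prompt so far. Random here; a real
# model computes it from your actual words.
h = rng.normal(size=16)

# Proximity means probability: score each token by its dot product with
# the context vector, then squash the scores into a distribution.
logits = E @ h
probs = np.exp(logits - logits.max())
probs /= probs.sum()

# "Collapse" the distribution by sampling the next token.
print(rng.choice(vocab, p=probs), dict(zip(vocab, probs.round(3))))
```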

Centaur and Alien Understanding

A recent paper in Nature introduced a model called Centaur, trained on millions of behavioral data points from more than 160 psychological experiments. The model learned to predict how humans would act across tasks such as gambling, memory, and moral judgment. And it did well, often beating traditional cognitive models and arguably proving more consistent than human reasoning itself.

But the paper doesn’t claim to have discovered anything new about the mind, nor does it offer an advanced theory of behavior. And that’s fine. What it shows is that if you give a language model enough clean, well-structured data from human trials, it will find patterns. And those patterns will let it anticipate behavior.

At the “cognitive heart” of this is prediction without introspection: a new level of accuracy without understanding. And it works because it lives in that alien substrate, where our messy human outputs can be modeled as stable attractors in a hyperspace of possible moves.
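
To see how little introspection prediction requires, here is a minimal sketch in the same spirit (not the Centaur model itself, which fine-tunes a large language model on transcripts of real experiments): a two-parameter choice model fit by gradient descent to synthetic gambling decisions. The data and parameters are fabricated for illustration; the point is that the fit predicts choices respectably while containing no theory of the mind that produced them.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "participants": on each trial a person chooses between a safe
# payoff and a gamble. Choices are simulated from a noisy utility rule
# that the fitting code below never sees directly.
n = 2000
safe = rng.uniform(1, 5, n)          # guaranteed payoff
win = rng.uniform(2, 12, n)          # gamble payoff if it pays off
p_win = rng.uniform(0.2, 0.9, n)     # stated win probability
u_gap = p_win * win**0.7 - safe**0.7          # hidden risk attitude: 0.7
chose_gamble = (rng.random(n) < 1 / (1 + np.exp(-u_gap))).astype(float)

# Fit a two-parameter choice model by gradient descent on the log-loss:
# alpha = risk-attitude exponent, beta = decision-noise sensitivity.
# No beliefs, no introspection -- just curve-fitting on behavior.
alpha, beta, lr = 1.0, 1.0, 5e-4
for _ in range(3000):
    dv = p_win * win**alpha - safe**alpha     # decision variable
    p = 1 / (1 + np.exp(-beta * dv))          # P(choose gamble)
    err = p - chose_gamble                    # log-loss gradient wrt logits
    d_dv = p_win * win**alpha * np.log(win) - safe**alpha * np.log(safe)
    alpha -= lr * np.mean(err * beta * d_dv)
    beta -= lr * np.mean(err * dv)

pred = 1 / (1 + np.exp(-beta * (p_win * win**alpha - safe**alpha))) > 0.5
print(f"alpha={alpha:.2f} beta={beta:.2f} "
      f"accuracy={np.mean(pred == chose_gamble):.1%}")
```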

Here’s the key point. It’s not smarter, it’s certainly not conscious, and it’s not even insightful. It’s just really, really good at navigating a landscape we live in but can’t see.

The Shift We Should Be Talking About

So, here are the emergent questions. What happens when a machine can predict your professional judgment better than a colleague? What happens when it completes your thoughts more fluently than you can? What happens when an LLM can model your biases, your hesitations, your habits of mind and then adjust accordingly? Read this paragraph again and really think about it—in a way that only a human can.


This isn’t just imitation. My sense is that it’s a form of divergence. The model doesn’t replicate how we think, and yet we still try to align AI with the human construct. But here’s the essential truth: AI doesn’t replicate human thought. It bypasses it.

We’re still asking whether AI is “intelligent,” whether it “understands,” whether it’s getting close to passing as human. But these are the wrong questions. The right one might be to ask: what kind of cognition is this? Because it’s not ours.

There’s a difference between being human-like and being human-relevant. These systems may never feel what we feel or grasp meaning as we do. But they’re starting to outperform us in domains that once seemed uniquely human: writing, strategy, diagnosis, even empathy simulation. And they’re doing it by navigating an invisible map built from our language, which AI has flattened, vectorized, and made operable in a space no human can comprehend. The alien has arrived.

A Frontier Beyond Familiarity

So where does this leave us? The future of cognition is unfolding in this alien substrate, and traditional psychological models may start to look like quaint approximations. Theories designed to be interpretable in low, “human” dimensions may simply not hold up against systems that don’t need to explain to us why their predictions work. They just do, and we can’t.

Yes, it’s alien, perhaps echoing the principle of non-causal reality, but from a cognitive perspective. The old contract between explanation and trust is breaking down. We used to believe that if we couldn’t explain it, we shouldn’t believe it. Now we’re using tools every day that outperform us without offering any narrative of how they do it. We call it black-box behavior. But maybe it’s not a box. Maybe it’s a geometry, and we’re the ones outside it.

The models are getting better. Not more human, just more effective. And if we keep judging them by how well they reflect us, we’ll miss the fact that they’re outgrowing us in a direction we don’t yet have words for.

That’s the alien substrate. And it’s not coming. It’s here.





AI chatbots and mental health: How to cover the topic responsibly


Artificial intelligence-powered chatbots can provide round-the-clock access to supportive “conversations,” which some people are using as a substitute for interactions with licensed mental health clinicians or friends. But users may develop dependencies on the tools and mistake these transactions for real relationships or true therapy. Recent news stories have examined the dangers of that fabricated supportiveness. In some incidents, people have developed AI-related psychosis or been encouraged in plans to die by suicide.

What is it about this technology that sucks people in? Who is at risk? How can you report on these stories sensitively? In this webinar, moderator Karen Blum leads an expert panel: psychiatrists John Torous, M.D. (Beth Israel Deaconess Medical Center) and Keith Sakata, M.D. (UC San Francisco), and Mashable senior reporter Rebecca Ruiz.

Karen Blum

AHCJ Health Beat Leader for Health IT
Karen Blum is AHCJ’s health beat leader for health IT. She’s an independent health and science journalist based in the Baltimore area. She has written for publications such as the Baltimore Sun, Pharmacy Practice News, Clinical Oncology News, Clinical Laboratory News, Cancer Today, CURE, AARP.org, General Surgery News and Infectious Disease Special Edition; covered numerous medical conferences for trade magazines and news services; and written many profiles and articles on medical and science research as well as trends in health care and health IT. She is a member of the American Society of Journalists and Authors (ASJA), where she chairs the Virtual Education Committee, and of the National Association of Science Writers (NASW) and its freelance committee.

Rebecca Ruiz

Senior reporter, Mashable
Rebecca Ruiz is a Senior Reporter at Mashable. She frequently covers mental health, digital culture, and technology. Her areas of expertise include suicide prevention, screen use and mental health, parenting, youth well-being, and meditation and mindfulness. Rebecca’s experience prior to Mashable includes working as a staff writer, reporter, and editor at NBC News Digital and as a staff writer at Forbes. Rebecca has a B.A. from Sarah Lawrence College and a master’s degree from UC Berkeley’s Graduate School of Journalism.

Keith Sakata, M.D.

Psychiatry resident, UC San Francisco
Keith Sakata, M.D., is a psychiatry resident at the University of California, San Francisco, where he founded the Mental Health Innovation and Digital Hub (MINDHub) to advance AI-enabled care delivery. He provides treatment and psychotherapy across outpatient and specialty clinics, with a focus on dual diagnosis, PTSD, OCD, pain, and addiction.

Dr. Sakata previously trained in internal medicine at Stanford Health Care and co-founded Skript, a diagnostic training platform adopted by UCSF and Stanford that improved medical education outcomes during the COVID-19 pandemic. He currently serves as Clinical Lead at Sunflower, an addiction recovery startup, and advises startups working to improve access in mental health, including Two Chairs and Circuit Breaker Labs, which is building a safety layer for AI tools in mental health care.

His professional interests bridge psychiatry, neuroscience, and digital innovation. Dr. Sakata holds a B.S. in Neurobiology from UC Irvine and earned his M.D. from UCSF.

John Torous, M.D., MBI

Director, Digital Psychiatry, Beth Israel Deaconess Medical Center
John Torous, M.D., MBI, is director of the digital psychiatry division in the Department of Psychiatry at Beth Israel Deaconess Medical Center (BIDMC), a Harvard Medical School-affiliated teaching hospital, where he also serves as a staff psychiatrist and associate professor. He has a background in electrical engineering and computer sciences and received an undergraduate degree in the field from UC Berkeley before attending medical school at UC San Diego. He completed his psychiatry residency, fellowship in clinical informatics and master’s degree in biomedical informatics at Harvard.

Torous is active in investigating the potential of mobile mental health technologies for psychiatry and his team supports mindapps.org as the largest database of mental health apps, the mindLAMP technology platform for scalable digital phenotyping and intervention, and the Digital Navigator program to promote digital equity and access. Torous has published over 300 peer-reviewed articles and five book chapters on the topic. He directs the Digital Psychiatry Clinic at BIDMC, which seeks to improve access to and quality of mental health care through augmenting treatment with digital innovations.

Torous serves as editor-in-chief for the journal JMIR Mental Health, web editor for JAMA Psychiatry, and a member of various American Psychiatric Association committees.




Google Debuts Agent Payments Protocol to Bolster AI Commerce


Google’s Agent Payments Protocol (AP2) is designed to “securely initiate and transact agent-led payments across platforms,” according to a Tuesday (Sept. 16) company blog post.

Google is collaborating on agentic payments with more than 60 companies, including Adyen, American Express, Mastercard, PayPal, Coinbase and Revolut, per the post.

“AI agents are capable of transacting on behalf of users, which creates a need to establish a common foundation to securely authenticate, validate and convey an agent’s authority to transact,” the post said. “While today’s payment systems generally assume a human is directly clicking ‘buy’ on a trusted surface, the rise of autonomous agents and their ability to initiate a payment breaks this fundamental assumption and raises critical questions that AP2 helps to address.”

The questions are authorization, or proving that a user gave an agent authority to make a specific purchase; authenticity, or allowing merchants to be sure an agent’s request reflects the user’s intent; and accountability in cases of fraud or incorrect transactions, per the post.

The protocol can be used as an extension of the Agent2Agent (A2A) protocol and Model Context Protocol (MCP). In conjunction with industry rules and standards, it offers a payment-agnostic framework for users, merchants and payments providers to transact across all types of payment methods, the post said.
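
The post doesn’t publish AP2’s wire formats, but the authorization and authenticity questions can be sketched in miniature: the user signs a “mandate” stating what an agent may buy, and the merchant checks both the signature and the order against it before accepting. The Python below is a toy shared-secret illustration, not AP2’s actual mechanism; every field name in it is hypothetical, and a real scheme would use asymmetric, verifiable credentials rather than a key both sides hold.

```python
import hmac, hashlib, json

# Toy stand-in for a user-held signing key. A real agentic-payment scheme
# would use asymmetric keys / verifiable credentials, not a shared secret.
USER_KEY = b"user-device-secret"

def sign_mandate(mandate: dict) -> str:
    """User side: sign a purchase mandate the agent will carry."""
    payload = json.dumps(mandate, sort_keys=True).encode()
    return hmac.new(USER_KEY, payload, hashlib.sha256).hexdigest()

def verify_mandate(mandate: dict, signature: str) -> bool:
    """Merchant side: confirm the mandate really came from the user."""
    return hmac.compare_digest(sign_mandate(mandate), signature)

# Hypothetical mandate: the user authorizes one specific purchase up to a
# price cap -- the "authorization" question.
mandate = {"agent": "shopping-agent-01", "item": "running shoes",
           "max_price_usd": 120, "expires": "2025-10-01"}
sig = sign_mandate(mandate)

# The agent presents (mandate, sig) at checkout; the merchant can now
# answer the "authenticity" question: does this order reflect the user's
# intent?
order = {"item": "running shoes", "price_usd": 99}
accept = (verify_mandate(mandate, sig)
          and order["item"] == mandate["item"]
          and order["price_usd"] <= mandate["max_price_usd"])
print("accept order:", accept)
```

The signed mandate also leaves both sides an artifact to point to after the fact, which is where the accountability question begins.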


PYMNTS Intelligence’s August edition of The Prompt Economy Tracker® Series explored the rise of MCP, an open standard that was introduced by Anthropic in late 2024 and has since been adopted by OpenAI, Microsoft and Visa.

“MCP is the digital equivalent of USB-C for agents,” the report said. “It defines how agents plug into data, invoke APIs, talk to other agents, and complete tasks securely and efficiently. This is the infrastructure that transforms agents from smart tools into autonomous actors inside the commerce ecosystem.”
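
As a concrete picture of that “USB-C” metaphor, here is a minimal tool-server sketch using FastMCP from the open-source mcp Python SDK. The price-lookup tool is a made-up example, not anything from the report; the point is only how little code it takes to make a capability discoverable by any MCP-capable agent.

```python
# Minimal MCP tool-server sketch (assumes the open-source `mcp` Python SDK
# is installed: pip install mcp). The price lookup is a made-up example.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-commerce")

@mcp.tool()
def lookup_price(sku: str) -> float:
    """Return the current price for a product SKU."""
    fake_catalog = {"SKU-123": 19.99, "SKU-456": 4.50}  # stand-in data
    return fake_catalog.get(sku, 0.0)

if __name__ == "__main__":
    # Any MCP-capable agent can now discover and invoke lookup_price, much
    # as a USB-C device negotiates with whatever it's plugged into.
    mcp.run()
```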

Meanwhile, a July PYMNTS Intelligence report, “Payments Execs Say AI Agents Give Payments an Autonomous Overhaul,” revealed that agentic AI could demand new infrastructure, trust frameworks and corporate oversight.

AI agents require real-time, scalable, secure infrastructure, the report said. Legacy systems can’t deal with thousands of concurrent autonomous agents acting on APIs, analyzing data and triggering actions across systems.

For all PYMNTS AI and digital transformation coverage, subscribe to the daily AI and Digital Transformation Newsletters.




AI’s Baby Bonus? | American Enterprise Institute


It seems humanity is running out of children faster than expected. Fertility rates are collapsing around the world, often decades ahead of United Nations projections. Turkey’s fell to 1.48 last year—a level the UN thought would not arrive until 2100—while Bogotá’s is now below Tokyo’s. Even India, once assumed to prop up global demographics, has dipped under replacement. According to a new piece in The Economist, the world’s population, once projected to crest at 10.3 billion in 2084, may instead peak in the 2050s below nine billion before declining. (Among those experts mentioned, by the way, is Jesús Fernández-Villaverde, an economist at the University of Pennsylvania and visiting AEI scholar.)

From “Humanity will shrink, far sooner than you think” in the most recent issue: “At that point, the world’s population will start to shrink, something it has not done since the 14th century, when the Black Death wiped out perhaps a fifth of humanity.”

This demographic crunch has defied policymaker efforts. Child allowances, flexible work schemes, and subsidized daycare have barely budged birth rates. For its part, the UN continues to assume fertility will stabilize or rebound. But a demographer quoted by the magazine calls that “wishful thinking,” and the opinion is hardly an outlier. 

See if you find the UN assumption persuasive:

It is indeed possible to imagine that fertility might recover in some countries. It has done so before, rising in the early 2000s in the United States and much of northern Europe as women who had delayed having children got round to it. But it is far from clear that the world is destined to follow this example, and anyway, birth rates in most of the places that seemed fecund are declining again. They have fallen by a fifth in Nordic countries since 2010.

John Wilmoth of the United Nations Population Division explains one rationale for the idea that fertility rates will rebound: “an expectation of continuing social progress towards gender equality and women’s empowerment”. If the harm to women’s careers and finances that comes from having children were erased, fertility might rise. But the record of women’s empowerment thus far around the world is that it leads to lower fertility rates. It is not “an air-tight case”, concedes Mr Wilmoth.

Against this bleak backdrop, technology may be the only credible source of hope. Zoom boss Eric Yuan recently joined Bill Gates, Nvidia’s Jensen Huang, and JPMorgan’s Jamie Dimon in predicting shorter workweeks as advances in artificial intelligence boost worker productivity. The optimistic scenario goes like this: As digital assistants and code-writing bots shoulder more of the office load, employees reclaim hours for home life. Robot nannies and AI tutors lighten the costs and stresses of parenting, especially for dual-income households.

History hints at what could follow. Before the Industrial Revolution, wealth and fertility went hand-in-hand. That relationship flipped when economies modernized: education became compulsory, child labor fell out of favor, and middle- and upper-class families invested heavily in the education and well-being of fewer children.

But today, wealthier Americans are having more children, treating them as the ultimate luxury good. As AI-driven abundance spreads more broadly, perhaps resulting in the shorter workweeks those CEOs are talking about, larger families may once again be considered an attainable aspiration for regular folks rather than an elite indulgence. (Fingers crossed, given this recent analysis from JPM: “The vast sums being spent on AI suggest that investors believe these productivity gains will ultimately materialize, but we suspect many of them have not yet done so.”)

Indeed, even a modest “baby bonus” from technology would be profound. Governments are running out of levers to pull, dials to turn, and buttons to press. AI-powered productivity may not just be the best bet for growth, it could be the only realistic chance of nudging humanity away from demographic decline. This is something for governments to think hard about when deciding how to regulate this fast-evolving technology.


