
AI Insights

Can chatbots really improve mental health?


Recently, I found myself pouring my heart out, not to a human, but to a chatbot named Wysa on my phone. It nodded – virtually – asked me how I was feeling and gently suggested trying breathing exercises.

As a neuroscientist, I couldn’t help but wonder: Was I actually feeling better, or was I just being expertly redirected by a well-trained algorithm? Could a string of code really help calm a storm of emotions?

Artificial intelligence-powered mental health tools are becoming increasingly popular – and increasingly persuasive. But beneath their soothing prompts lie important questions: How effective are these tools? What do we really know about how they work? And what are we giving up in exchange for convenience?

Of course it’s an exciting moment for digital mental health. But understanding the trade-offs and limitations of AI-based care is crucial.

Stand-in meditation and therapy apps and bots

AI-based therapy is a relatively new player in the digital therapy field. But the U.S. mental health app market has been booming for the past few years, ranging from free apps that text you back to premium versions with added features such as guided breathing prompts.

Headspace and Calm are two of the most well-known meditation and mindfulness apps, offering guided meditations, bedtime stories and calming soundscapes to help users relax and sleep better. Talkspace and BetterHelp go a step further, offering actual licensed therapists via chat, video or voice. The apps Happify and Moodfit aim to boost mood and challenge negative thinking with game-based exercises.

Somewhere in the middle are chatbot therapists like Wysa and Woebot, using AI to mimic real therapeutic conversations, often rooted in cognitive behavioral therapy. These apps typically offer free basic versions, with paid plans ranging from US$10 to $100 per month for more comprehensive features or access to licensed professionals.

While not designed specifically for therapy, conversational tools like ChatGPT have sparked curiosity about AI’s emotional intelligence.

Some users have turned to ChatGPT for mental health advice, with mixed outcomes, including a widely reported case in Belgium where a man died by suicide after months of conversations with a chatbot. Elsewhere, a father is seeking answers after his son was fatally shot by police, alleging that distressing conversations with an AI chatbot may have influenced his son’s mental state. These cases raise ethical questions about the role of AI in sensitive situations.

Guided meditation apps were one of the first forms of digital therapy.

Where AI comes in

Whether your brain is spiraling, sulking or just needs a nap, there’s a chatbot for that. But can AI really help your brain process complex emotions? Or are people just outsourcing stress to silicon-based support systems that sound empathetic?

And how exactly does AI therapy work inside our brains?

Most AI mental health apps promise some flavor of cognitive behavioral therapy, which is basically structured self-talk for your inner chaos. Think of it as Marie Kondo-ing your mind, in the spirit of the Japanese tidying expert known for helping people keep only what “sparks joy”: you identify unhelpful thought patterns like “I’m a failure,” examine them, and decide whether they serve you or just create anxiety.

But can a chatbot help you rewire your thoughts? Surprisingly, there’s science suggesting it’s possible. Studies have shown that digital forms of talk therapy can reduce symptoms of anxiety and depression, especially for mild to moderate cases. In fact, Woebot has published peer-reviewed research showing reduced depressive symptoms in young adults after just two weeks of chatting.

These apps are designed to simulate therapeutic interaction, offering empathy, asking guided questions and walking you through evidence-based tools. The goal is to help with decision-making and self-control, and to help calm the nervous system.

The neuroscience behind cognitive behavioral therapy is solid: It’s about activating the brain’s executive control centers, helping us shift our attention, challenge automatic thoughts and regulate our emotions.

The question is whether a chatbot can reliably replicate that, and whether our brains actually believe it.

A user’s experience, and what it might mean for the brain

“I had a rough week,” a friend told me recently. I asked her to try out a mental health chatbot for a few days. She told me the bot replied with an encouraging emoji and an algorithm-generated prompt to try a calming strategy tailored to her mood. Then, to her surprise, she was sleeping better by week’s end.

As a neuroscientist, I couldn’t help but ask: Which neurons in her brain were kicking in to help her feel calm?

This isn’t a one-off story. A growing number of user surveys and clinical trials suggest that cognitive behavioral therapy-based chatbot interactions can lead to short-term improvements in mood, focus and even sleep. In randomized studies, users of mental health apps have reported reduced symptoms of depression and anxiety – outcomes that closely align with how in-person cognitive behavioral therapy influences the brain.

Several studies show that therapy chatbots can actually help people feel better. In one clinical trial, a chatbot called “Therabot” helped reduce depression and anxiety symptoms by nearly half – similar to what people experience with human therapists. Other research, including a review of over 80 studies, found that AI chatbots are especially helpful for improving mood, reducing stress and even helping people sleep better. In one study, a chatbot outperformed a self-help book in boosting mental health after just two weeks.

While people often report feeling better after using these chatbots, scientists haven’t yet confirmed exactly what’s happening in the brain during those interactions. In other words, we know they work for many people, but we’re still learning how and why.

AI chatbots cost far less than a human therapist – and they’re available 24/7.

Red flags and risks

Apps like Wysa have earned FDA Breakthrough Device designation, a status that fast-tracks promising technologies for serious conditions, suggesting they may offer real clinical benefit. Woebot, similarly, runs randomized clinical trials showing improved depression and anxiety symptoms in new moms and college students.

While many mental health apps boast labels like “clinically validated” or “FDA approved,” those claims are often unverified. A review of top apps found that most made bold claims, but fewer than 22% cited actual scientific studies to back them up.

In addition, chatbots collect sensitive information about your mood metrics, triggers and personal stories. What if that data winds up in the hands of third parties such as advertisers, employers or hackers? That scenario has already played out with genetic data: In a 2023 breach, nearly 7 million users of the DNA testing company 23andMe had their DNA and personal details exposed after hackers used previously leaked passwords to break into their accounts. Regulators later fined the company more than $2 million for failing to protect user data.

Unlike clinicians, bots aren’t bound by counseling ethics or privacy laws regarding medical information. You might be getting a form of cognitive behavioral therapy, but you’re also feeding a database.

And sure, bots can guide you through breathing exercises or prompt cognitive reappraisal, but when faced with emotional complexity or crisis, they’re often out of their depth. Human therapists tap into nuance, past trauma, empathy and live feedback loops. Can an algorithm say “I hear you” with genuine understanding? Neuroscience suggests that supportive human connection activates social brain networks that AI can’t reach.

So while bot-delivered cognitive behavioral therapy may offer short-term symptom relief in mild to moderate cases, it’s important to be aware of its limitations. For the time being, pairing bots with human care – rather than replacing it – is the safest move.





Can Artificial Intelligence Rescue America’s Fiscal Future? | American Enterprise Institute


Let’s keep the budget math nice and simple. Back in March, the Congressional Budget Office (CBO) projected the national debt climbing to 118 percent of GDP by 2035, up from 100 percent this year. Now tack on another 10 percentage points or so thanks to the budget bill just signed by President Trump. A sub-optimal outcome. 

Yet amid this continued drift away from solvency, an intriguing theory has emerged: Artificial intelligence might generate enough added economic oomph to stabilize or even reverse America’s dangerous debt trajectory. 

It’s a tempting scenario for politicians to latch onto. If AI is the new electricity, as some enthusiasts suggest, then faster productivity growth could generate a revenue windfall, offset deficits and debt, and lessen the need for painful spending cuts or tax hikes. 

Good news: There’s precedent. According to the CBO’s historical data, total factor productivity (TFP) growth—essentially how much more output we get from the same amount of labor and capital, often driven by technology and innovation—averaged 1.6 percent to 1.8 percent annually from the late 19th century through the 2000s. These gains came in transformative waves tied to economy-altering, general-purpose technologies like electricity and the internet.

That historical backdrop makes a recent paper by Douglas Elmendorf, Glenn Hubbard, and Zachary Liscow (EHL) all the more striking—and revealing in its limits. Drawing on Congressional Budget Office scenarios, the authors show that a sustained 0.5 percentage point annual increase in TFP growth—if somehow achieved—would put the US economy back on its historical productivity trajectory and reduce debt held by the public by 12 percentage points of GDP. Over 30 years, such a TFP acceleration could shrink the debt-to-GDP ratio by 42 points.
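
To see the mechanics behind figures like these, here is a deliberately simple sketch of the standard debt-dynamics identity, in which next year's debt-to-GDP ratio depends on the gap between interest rates and growth plus the primary deficit. Every parameter and the function name below are illustrative assumptions, not figures from the CBO or the EHL paper, and the toy model ignores the revenue feedbacks a real TFP surge would bring.

```python
# Toy debt-to-GDP projection (illustrative assumptions only; not the CBO or EHL model).
# Identity: d_{t+1} = d_t * (1 + r) / (1 + g) + primary_deficit,
# where d is debt/GDP, r the nominal interest rate, g nominal GDP growth.

def debt_to_gdp_path(years=10, debt0=1.00, primary_deficit=0.025,
                     interest=0.035, nominal_growth=0.040):
    d = debt0
    for _ in range(years):
        d = d * (1 + interest) / (1 + nominal_growth) + primary_deficit
    return d

baseline = debt_to_gdp_path()                    # status-quo growth assumption
faster = debt_to_gdp_path(nominal_growth=0.045)  # +0.5 pp growth from a hypothetical TFP surge

print(f"Baseline debt/GDP after 10 years: {baseline:.0%}")
print(f"With +0.5 pp faster growth:       {faster:.0%}")
print(f"Difference: {100 * (baseline - faster):.1f} points of GDP")
```

Even in this stripped-down version, a half-point growth differential compounds into several points of GDP within a decade; it is the same compounding logic, though not the same magnitudes, that sits behind the EHL estimates.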

But the EHL paper is clear: None of the plausible policy reforms it examines—covering immigration, housing, permitting, R&D, and business taxes—comes close to producing such a TFP growth surge. They conclude that growth-enhancing reforms may help trim future tax hikes or spending cuts but cannot, on their own, stabilize the debt. That half-point TFP boost remains a hypothetical scenario, not a forecast or expectation grounded in current policy options.

That’s where AI enters the picture—not in the EHL paper, but in today’s broader debate. If AI technologies do generate a historic step-change in productivity akin to past GPTs, the fiscal upside could be transformative. But that’s a speculative bet, not yet an empirically grounded plan.

It would be awesome, however. Joe Davis, investment firm Vanguard’s global chief economist and head of investment strategy, assigns a 45–55 percent probability to a “productivity surge” scenario, one where AI becomes economically transformative by the 2030s. Under this outcome, technology keeps inflation in check while higher tax revenues from stronger growth cause the gusher of red ink to stabilize. So kind of a 1990s replay.  

Yet prudent fiscal policymaking shouldn’t bank on technological salvation. Davis also sees a 30–40 percent chance of AI disappointing, leaving productivity sluggish while deficits continue climbing. 

Yes, the case for AI optimism is reasonable. But it’s not nearly certain enough to bet the public purse on a best-case outcome or even something a bit short of that. Savvy politicians should pursue a dual strategy: embrace growth-friendly AI policies while maintaining fiscal discipline. This means permitting reform, science investment, R&D incentives, high-skilled immigration … and fixing entitlements sooner rather than later.

The AI productivity boom may materialize, but to wager America’s fiscal future on it would be the ultimate tech gamble. A better play: Hope for exponential growth, but budget for linear reality.





Hybrid jobs: How AI is rewriting work in finance


Artificial intelligence (AI) is not destroying jobs in finance, it is rewriting them. As models begin to handle underwriting, compliance, and asset allocation, the traditional architecture of financial work is undergoing a fundamental shift.

This is not about coders replacing bankers. It is about a sector where knowing how the model works—what it sees and how it reasons—becomes the difference between making and automating decisions. It is also about the decline of traditional credentials and the rise of practical experience and critical judgment as key assets in a narrowing workforce.

In what follows, we explore how the rise of generative AI and autonomous systems is reshaping the financial workforce: Which roles are fading, which ones are emerging, and how institutions—and policymakers—can bridge the looming talent divide.

The cognitive turn in finance

For decades, financial expertise was measured in credentials such as MBAs (Master of Business Administration) and CFAs (Chartered Financial Analysts). But AI is shifting the terrain. Models now read earnings reports, classify regulatory filings, flag suspicious transactions, and even propose investment strategies. And these models keep getting better—faster, cheaper, and more scalable than any human team.

This transformation is not just a matter of tasks being automated; it is about the cognitive displacement of middle-office work. Where human judgment once shaped workflows, we now see black-box logic making calls. The financial worker is not gone, but their job has changed. Instead of crunching numbers, they are interpreting outputs. Instead of producing reports, they are validating the ones AI generates.

The result is a new division of labor—one that rewards hybrid capabilities over siloed specialization. In this environment, the most valuable professionals are not those with perfect models, but those who know when not to trust them.

Market signals

This shift is no longer speculative. Industry surveys and early adoption data point to a fast-moving frontier.

  • McKinsey (2025) reports that while only 1% of organizations describe their generative AI deployments as mature, 92% plan to increase their investments over the next three years.
  • The World Economic Forum emphasizes that AI is already reshaping core business functions in financial services—from compliance to customer interaction to risk modeling.
  • Brynjolfsson et al. (2025) demonstrate that generative AI narrows performance gaps between junior and senior workers on cognitively demanding tasks. This has direct implications for talent hierarchies, onboarding, and promotion pipelines in financial institutions.

Leading financial institutions are advancing from experimental to operational deployment of generative AI. Goldman Sachs has introduced its GS AI Assistant across the firm, supporting employees in tasks such as summarizing complex documents, drafting content, and performing data analysis. This internal tool reflects the firm’s confidence in GenAI’s capability to enhance productivity in high-stakes, regulated environments. Meanwhile, JPMorgan Chase has filed a trademark application for “IndexGPT,” a generative AI tool designed to assist in selecting financial securities and assets tailored to customer needs.

These examples are part of a broader wave of experimentation. According to IBM’s 2024 Global Banking and Financial Markets study, 80% of financial institutions have implemented generative AI in at least one use case, with higher adoption rates observed in customer engagement, risk management, and compliance functions.

The human factor

These shifts are not confined to efficiency gains or operational tinkering. They are already changing how careers in finance are built and valued. Traditional markers of expertise—like time on desk or mastery of rote processes—are giving way to model fluency, critical reasoning, and the ability to collaborate with AI systems. In a growing number of roles, being good at your job increasingly means knowing how and when to override the model.

Klarna offers a telling example of what this transition looks like in practice. By 2024, the Swedish fintech reported that 87% of its employees were using generative AI in daily tasks across domains like compliance, customer support, and legal operations. However, this broad adoption was not purely additive: The company had previously laid off 700 employees due to automation but subsequently rehired for redesigned hybrid roles that require oversight, interpretation, and contextual judgment. The episode highlights not just the efficiency gains of AI, but also its limits—and the enduring need for human input where nuance, ethics, or ambiguity are involved.

The bottom line? AI does not eliminate human input—it changes where it is needed and how it adds value.

New roles, new skills

As job descriptions evolve, so does the definition of financial talent. Excel is no longer a differentiator. Python is fast becoming the new Excel. But technical skills alone will not cut it. The most in-demand profiles today are those who speak both AI and finance and can move between legal, operational, and data contexts without losing the plot.

Emerging roles reflect this shift: model risk officers who audit AI decisions; conversational system trainers who fine-tune the behavior of large language models (LLMs); product managers who orchestrate AI pipelines for advisory services; and compliance leads fluent in prompt engineering.

For many institutions, the bigger challenge is not hiring this new talent—it is retraining the workforce they already have. Middle office staff, operations teams, even some front office professionals now face a stark reality: Reskill or risk being functionally sidelined.

But reinvention is possible—and already underway. Forward-looking institutions are investing in internal AI academies, pairing domain experts with technical mentors and embedding cross-functional teams that blur the lines between business, compliance, and data science.

At Morgan Stanley, financial advisors are learning to work alongside GPT-4-powered copilots trained on proprietary knowledge. At BNP Paribas, Environmental, Social, and Governance (ESG) analysts use GenAI to synthesize sprawling unstructured data. At Klarna, multilingual support agents have been replaced—not entirely by AI—but by hybrid teams that supervise and retrain it.

Non-technological barriers to automation: The human frontier

Despite the rapid pace of automation, there remain important limits to what AI can displace—and they are not just technical. Much of the critical decisionmaking in finance depends on tacit knowledge: The unspoken, experience-based intuition that professionals accumulate over years. This kind of knowledge is hard to codify and even harder to replicate in generative systems trained on static data.

Tacit knowledge is not simply a nice-to-have. It is often the glue that binds together fragmented signals, the judgment that corrects for outliers, the intuition that warns when something “doesn’t feel right.” This expertise lives in memory, not in manuals. As such, AI systems that rely on past data to generate probabilistic predictions may lack precisely the cognitive friction—the hesitations, corrections, and exceptions—that make human decisionmaking robust in complex environments like finance.

Moreover, non-technological barriers to automation range from cultural resistance to ethical concerns, from regulatory ambiguity to the deeply embedded trust networks on which financial decisions still depend. For example, clients may resist decisions made solely by an AI model, particularly in areas like wealth management or risk assessment.

These structural frictions offer not just constraints but breathing room: A window of opportunity to rethink education and training in finance. Instead of doubling down on technical specialization alone, institutions should be building interdisciplinary fluency—where practical judgment, ethical reasoning, and model fluency are taught in tandem.

Policy implications: Avoid a two-tier financial workforce

Without coordinated action, the rise of AI could bifurcate the financial labor market into two castes: Those who build, interpret, and oversee intelligent systems, and those who merely execute what those systems dictate. The first group thrives. The second stagnates.

To avoid this divide, policymakers and institutions must act early by:

  • Promoting baseline AI fluency across the financial workforce, not just in specialist roles.
  • Supporting mid-career re-skilling with targeted tax incentives or public-private training programs.
  • Auditing AI systems used in HR to ensure fair hiring and avoid algorithmic entrenchment of bias.
  • Incentivizing hybrid education programs that bridge finance, data science, and regulatory knowledge.

The goal is not to slow down AI; rather, it is to ensure that the people inside financial institutions are ready for the systems they are building.

The future of finance is not a contest between humans and machines. It is a contest between institutions that adapt to a hybrid cognitive environment and those that cling to legacy hierarchies while outsourcing judgment to systems they cannot explain.

In this new reality, cognitive arbitrage is the new alpha. The edge does not come from knowing the answers; it comes from knowing how the model got them and when it is wrong.

The next generation of financial professionals will not just speak the language of money. They will speak the language of models, ethics, uncertainty, and systems.

And if they do not, someone—or something else—will.





Designing Artificial Consciousness from Natural Intelligence


Dr. Karl Friston is a distinguished computational psychiatrist, neuroscientist, and pioneer of modern neuroimaging and, now, AI. He is a leading expert on intelligence, natural as well as artificial. I have followed his work as he and his team uncover the principles underlying mind, brain, and behavior based on the laws of physics, probability, causality and neuroscience.

In the interview that follows, we dive into the current artificial intelligence landscape, discussing what existing models can and can’t do, and then peer into the divining glass to see how true artificial consciousness might look and how it may begin to emerge.

Current AI Landscape and Biological Computing

GHB: Broadly speaking, what are the current forms of AI and ML, and how do they fall short when it comes to matching natural intelligence? Do you have any thoughts about neuromorphic chips?

KF: This is a pressing question in current AI research: should we pursue artificial intelligence on high performance (von Neumann) computers or turn to the principles of natural intelligence? This question speaks to a fork in the road ahead. Currently, all the money is on artificial intelligence—licensed by the truly remarkable competence of generative AI and large language models. So why deviate from the well-trodden path?

There are several answers. One is that the artificial path is a dead end—in the sense that current implementations of AI violate the principles of natural intelligence and thereby preclude themselves from realizing their ultimate aspirations: artificial general intelligence, artificial super intelligence, strong AI, et cetera. The violations are manifest in the shortcomings of generative AI, usually summarized as a lack of (i) efficiency, (ii) explainability and (iii) trustworthiness. This triad neatly frames the alternative way forward, namely, natural intelligence.

So, what is natural intelligence? The answer to this question is simpler than one might think: natural intelligence rests upon the laws or principles that apply to the natural kinds that constitute our lived world. These principles are readily available from the statistical physics of self-organization, when the notion of self is defined carefully.

Put simply, the behavior of certain natural kinds—that can be read as agents, like you and me—can always be described as self-evidencing. Technically, this entails minimizing self-information (also known as surprise) or, equivalently, seeking evidence (also known as marginal likelihood) for an agent’s internal model of its world. This surprise is scored mathematically with something called variational free energy.
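
In the standard notation of this literature (the symbols below are my gloss, not Friston's wording), the variational free energy for beliefs $q(s)$ about hidden states $s$, given observations $o$ and a generative model $p(o, s)$, can be written as:

```latex
F[q] = \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big]
     = \underbrace{D_{\mathrm{KL}}\big[q(s)\,\|\,p(s \mid o)\big]}_{\ge\, 0}
       \;-\; \underbrace{\ln p(o)}_{\text{log evidence}}
```

Because the KL term is non-negative, F is an upper bound on surprise (negative log evidence), so minimizing it is "self-evidencing" in exactly the sense described above.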

The model in question is variously referred to as a world or generative model. The notion of a generative model takes center stage in any application of the (free energy) principles necessary to reproduce, simulate or realize the behavior of natural agents. In my world, this application is called active inference.

Note that we have moved beyond pattern recognizers and prediction machines into the realm of agency. This is crucial because it means we are dealing with world models that can generate the consequences of behavior, choices or actions. In turn, this equips agents with the capacity to plan or reason. That is, to select the course of action that minimizes the surprise expected when pursuing that course of action. This entails (i) resolving uncertainty while (ii) avoiding surprising outcomes. The simple imperative—to minimize expected surprise or free energy—has clear implications for the way we might build artifacts with natural intelligence. Perhaps these are best unpacked in terms of the above triad.
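
As a concrete, minimal sketch of that imperative, consider a toy one-step world with two hidden states, two observations, and two actions. All matrices, preferences, and action names below are invented for illustration (this is not any production active-inference system): the expected free energy of each action splits into risk (predicted observations diverging from preferred ones) and ambiguity (expected uncertainty in the likelihood mapping), and the agent picks the action with the lowest sum.

```python
import numpy as np

# Toy one-step active inference: choose the action with the lowest expected
# free energy = risk + ambiguity. All numbers are illustrative assumptions.

A = np.array([[0.9, 0.2],           # p(o | s): likelihood (rows: observations, cols: states)
              [0.1, 0.8]])

B = {                                # p(s' | s, action): transition model per action
    "stay": np.array([[1.0, 0.0],
                      [0.0, 1.0]]),
    "move": np.array([[0.1, 0.9],
                      [0.9, 0.1]]),
}

q_s = np.array([0.8, 0.2])           # current beliefs about hidden states
C = np.array([0.9, 0.1])             # preferred distribution over observations

def expected_free_energy(action):
    q_s_next = B[action] @ q_s                                     # predicted states
    q_o_next = A @ q_s_next                                        # predicted observations
    risk = np.sum(q_o_next * np.log(q_o_next / C))                 # KL[q(o) || preferred]
    ambiguity = -np.sum(q_s_next * np.sum(A * np.log(A), axis=0))  # E_q(s)[ H[p(o|s)] ]
    return risk + ambiguity

G = {a: expected_free_energy(a) for a in B}
best = min(G, key=G.get)
print(G, "-> chosen action:", best)
```

With these numbers the agent "stays," because its beliefs already place it in the state most likely to generate its preferred observation; shift the beliefs or the preferences and the chosen action flips.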

Efficiency. Choosing the path of least surprise is the path of least action or effort. This path is statistically and thermodynamically the most efficient path that could be taken. Therefore, by construction, natural intelligence is efficient. The famous example here is that our brains need only about 20 W—equivalent to a light bulb. In short, the objective function in active inference has efficiency built in—and manifests as uncertainty-resolving, information-seeking behavior that can be neatly described as curiosity with constraints. The constraints are supplied by what the agent would find surprising—i.e., costly, aversive, or uncharacteristic.


A failure to comply with the principle of maximum efficiency (a.k.a., principle of minimum redundancy) means your AI is using the wrong objective function. This can have severe implications for ML approaches that rely upon reinforcement learning (RL). In RL, the objective function is some arbitrary reward or value function. This leads to all sorts of specious problems, such as the value function selection problem, the explore-exploit dilemma, and more. A failure to use the right value function will therefore result in inefficiency—in terms of sample sizes, memory requirements, and energy consumption (e.g., large language models trained with big data). Not only are the models oversized, but they are unable to select those data that would resolve their uncertainty. So, why can’t large language models select their own training data?

This is because they have no notion of uncertainty and therefore don’t know how to reduce it. This speaks to a key aspect of generative models in active inference: They are probabilistic models, which means that they deal with probabilistic “beliefs”—about states of the world—that quantify uncertainty. This endows them not only with the capacity to be curious but also to report the confidence in their predictions and recommendations.

Explainability. If we start with a generative model—that includes preferred outcomes—we have, by construction, an explainable kind of generative AI. This is because the model generates observable consequences from unobservable causes, which means that the (unobservable or latent) cause of any prediction or recommendation is always at hand. Furthermore, predictions are equipped with confidence intervals that quantify uncertainty about inferred causes or states of the world.

The ability to encode uncertainty is crucial for natural intelligence and distinguishes things like variational autoencoders (VAE) from most ML schemes. Interestingly, the objective function used by VAEs is exactly the same as the variational free energy above. The problem with variational autoencoders is that they have no agency because they do not act upon the world—they just encode what they are given.
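
For readers more familiar with the machine learning framing, the equivalence is just a sign flip: written in the usual encoder/decoder notation (my gloss, not the interview's), the VAE's evidence lower bound is the negative of the variational free energy above.

```latex
\mathrm{ELBO} = \mathbb{E}_{q(s \mid o)}\big[\ln p(o \mid s)\big]
              - D_{\mathrm{KL}}\big[q(s \mid o)\,\|\,p(s)\big]
              = -F[q]
```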

Trustworthiness. If predictions and recommendations can be explained and qualified with quantified uncertainty, then they become more trustworthy, or at least one can evaluate the epistemic trust they should be afforded. In short, natural intelligence should be able to declare its beliefs, predictions, and intentions and decorate those declarations with a measure of uncertainty or confidence.

There are many other ways we could unpack the distinction between artificial and natural intelligence. Several thought leaders—perhaps a nascent rebel alliance—have been trying to surface a natural or biomimetic approach to AI. Some appeal to brain science, based on the self-evident fact that your brain is an existence proof for natural intelligence. Others focus on implementation; for example, neuromorphic computing as the road to efficiency. An interesting technical issue here is that much of the inefficiency of current AI rests upon a commitment to von Neumann architectures, where most energy is expended in reading and writing from memory. In the future, one might expect to see variants of processing-in-memory (PIM) that elude this unnatural inefficiency (e.g., with memristors, photonics, or possibly quantum computing).

Future AI Development

GHB: What does truly agentic AI look like in the near-term horizon? Is this related to the concept of neuromorphic AI (and what is agentic AI)?

KF: Agentic AI is not necessarily neuromorphic AI. Agentic AI is the kind of intelligence evinced by agents with a model that can generate the consequences of action. The curiosity required to learn agentic world models is beautifully illustrated by our newborn children, who are preoccupied with performing little experiments on the world to see what they can change (e.g., their rattle or mobile) and what they cannot (e.g., their bedtime). The dénouement of their epistemic foraging is a skillful little body, the epitome of a natural autonomous vehicle. In principle, one can simulate or realize agency with or without a neuromorphic implementation; however, the inefficiency of conventional (von Neumann) computing may place upper bounds on the autonomy and agency of edge computing.

VERSES AI and Genius System

GHB: You are the chief scientist for VERSES AI, which has been posting groundbreaking advancements seemingly every week. What is Genius VERSES AI and what makes it different from other systems? For the layperson, what is the engine behind Genius?

KF: As a cognitive computing company, VERSES is committed to the principles of natural intelligence, as showcased in our baby, Genius. The commitment is manifest at every level of implementation and design:

  • Implementation eschews the unnatural backpropagation of errors that predominates in ML by using variational message-passing based on local free energy (gradients), as in the brain.
  • Design eschews the inefficient top-down approach—implicit in the pruning of large models—and builds models from the ground up, much in the way that our children teach themselves to become autonomous adults. This ensures efficiency and explainability.
  • To grow a model efficiently is to grow it under the right core priors. Core priors can be derived from first principles; for example, states of the world change lawfully, where certain quantities are conserved (e.g., object permanence, mathematical invariances or symmetry, et cetera), usually in a scale-free fashion (e.g., leading to deep or hierarchical architectures with separation of temporal scales).
  • Authentic agency is assured by equipping generative models with a minimal self-model; namely, “what would happen if I did that?” This endows them with the capacity to plan and reason, much like System 2 thinking (planful thinking), as opposed to the System 1 kind of reasoning (intuitive, quick thinking).

At the end of the day, all this rests upon using the right objective function; namely, the variational free energy that underwrites self-evidencing. That is, building the most efficient model of the world in which the agent finds herself. With the right objective function, one can then reproduce brain-like dynamics as flows on variational free energy gradients, as opposed to costly and inefficient sampling procedures that are currently the industry standard.
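
A minimal sketch of what a "flow on variational free energy gradients" can look like, using the simplest Gaussian model with an identity mapping from latent state to observation (all values are illustrative assumptions, not VERSES' Genius): the belief is updated by locally exchanging precision-weighted prediction errors, with no backpropagation and no sampling.

```python
# Minimal predictive-coding-style flow on a free energy gradient (illustrative only).
# Generative model: prior s ~ N(mu_prior, sigma_p^2), likelihood o ~ N(s, sigma_o^2).
# Under a Gaussian (Laplace) approximation, F(mu) is quadratic and its gradient is
# the difference of two precision-weighted prediction errors.

mu_prior, sigma_p = 0.0, 1.0   # prior belief about the latent state
sigma_o = 0.5                  # sensory noise
o = 2.0                        # the observation to be explained

mu = mu_prior                  # current posterior belief (mean)
lr = 0.1                       # integration step for the gradient flow
for _ in range(100):
    eps_sensory = (o - mu) / sigma_o**2        # precision-weighted sensory prediction error
    eps_prior = (mu - mu_prior) / sigma_p**2   # precision-weighted prior prediction error
    dF_dmu = eps_prior - eps_sensory           # gradient of free energy w.r.t. mu
    mu -= lr * dF_dmu                          # descend the gradient locally

# Fixed point = precision-weighted average of prior and data (here 1.6)
print(round(mu, 3))
```

The fixed point could be computed in closed form in this toy case; the point of the sketch is only that the update uses local prediction errors and their precisions, which is the flavor of message passing the bullets above describe.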

Consciousness and Future Directions

GHB: What might we look forward to for artificial consciousness, and can you comment on the work with Mark Solms?

KF: Commenting on Mark’s work would take another blog (or two). What I can say here is that we have not touched upon two key aspects of natural intelligence that could, in principle, be realized if we take the high (active inference) road. These issues relate to interactive inference or intelligence—that is, inference among agents that are curious about each other. In this setting, one has to think about what it means for a generative model to entertain the distinction between self and other and the requisite mechanisms for this kind of disambiguation and attribution of agency. Mark would say that these mechanisms rest upon the encoding of uncertainty—or its complement, precision—and how this encoding engenders the feelings (i.e., felt-uncertainty) that underwrite selfhood.


