Hybrid jobs: How AI is rewriting work in finance
Artificial intelligence (AI) is not destroying jobs in finance; it is rewriting them. As models begin to handle underwriting, compliance, and asset allocation, the traditional architecture of financial work is undergoing a fundamental shift.
This is not about coders replacing bankers. It is about a sector where knowing how the model works—what it sees and how it reasons—becomes the difference between making and automating decisions. It is also about the decline of traditional credentials and the rise of practical experience and critical judgement as key assets in a narrowing workforce.
In what follows, we explore how the rise of generative AI and autonomous systems is reshaping the financial workforce: Which roles are fading, which ones are emerging, and how institutions—and policymakers—can bridge the looming talent divide.
The cognitive turn in finance
For decades, financial expertise was measured in credentials such as MBAs (Master of Business Administration) and CFAs (Chartered Financial Analysts). But AI is shifting the terrain. Models now read earnings reports, classify regulatory filings, flag suspicious transactions, and even propose investment strategies. And they are getting better—faster, cheaper, and more scalable than any human team.
This transformation is not just a matter of tasks being automated; it is about the cognitive displacement of middle-office work. Where human judgment once shaped workflows, we now see black-box logic making calls. The financial worker is not gone, but their job has changed. Instead of crunching numbers, they are interpreting outputs. Instead of producing reports, they are validating the ones AI generates.
The result is a new division of labor—one that rewards hybrid capabilities over siloed specialization. In this environment, the most valuable professionals are not those with perfect models, but those who know when not to trust them.
Market signals
This shift is no longer speculative. Industry surveys and early adoption data point to a fast-moving frontier.
- McKinsey (2025) reports that while only 1% of organizations describe their generative AI deployments as mature, 92% plan to increase their investments over the next three years.
- The World Economic Forum emphasizes that AI is already reshaping core business functions in financial services—from compliance to customer interaction to risk modeling.
- Brynjolfsson et al. (2025) demonstrate that generative AI narrows performance gaps between junior and senior workers on cognitively demanding tasks. This has direct implications for talent hierarchies, onboarding, and promotion pipelines in financial institutions.
Leading financial institutions are advancing from experimental to operational deployment of generative AI. Goldman Sachs has introduced its GS AI Assistant across the firm, supporting employees in tasks such as summarizing complex documents, drafting content, and performing data analysis. This internal tool reflects the firm’s confidence in GenAI’s capability to enhance productivity in high-stakes, regulated environments. Meanwhile, JPMorgan Chase has filed a trademark application for “IndexGPT,” a generative AI tool designed to assist in selecting financial securities and assets tailored to customer needs.
These examples are part of a broader wave of experimentation. According to IBM’s 2024 Global Banking and Financial Markets study, 80% of financial institutions have implemented generative AI in at least one use case, with higher adoption rates observed in customer engagement, risk management, and compliance functions.
The human factor
These shifts are not confined to efficiency gains or operational tinkering. They are already changing how careers in finance are built and valued. Traditional markers of expertise—like time on desk or mastery of rote processes—are giving way to model fluency, critical reasoning, and the ability to collaborate with AI systems. In a growing number of roles, being good at your job increasingly means knowing how and when to override the model.
Klarna offers a telling example of what this transition looks like in practice. By 2024, the Swedish fintech reported that 87% of its employees were using generative AI in daily tasks across domains like compliance, customer support, and legal operations. However, this broad adoption was not purely additive: The company had previously laid off 700 employees due to automation but subsequently rehired workers into redesigned hybrid roles that require oversight, interpretation, and contextual judgment. The episode highlights not just the efficiency gains of AI, but also its limits—and the enduring need for human input where nuance, ethics, or ambiguity are involved.
The bottom line? AI does not eliminate human input—it changes where it is needed and how it adds value.
New roles, new skills
As job descriptions evolve, so does the definition of financial talent. Excel is no longer a differentiator. Python is fast becoming the new Excel. But technical skills alone will not cut it. The most in-demand profiles today are those that speak both AI and finance, and can move between legal, operational, and data contexts without losing the plot.
Emerging roles reflect this shift: model risk officers who audit AI decisions; conversational system trainers who fine-tune the behavior of large language models (LLMs); product managers who orchestrate AI pipelines for advisory services; and compliance leads fluent in prompt engineering.
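To make the hybrid skill set concrete, here is a minimal, purely illustrative Python sketch of the kind of oversight task a model risk officer might automate: routing an AI credit model’s borderline or rule-conflicting decisions to a human reviewer. The class, field names, and thresholds below are hypothetical and are not drawn from any institution mentioned in this article.

```python
# Illustrative only: a toy "model risk" check of the kind a hybrid
# finance/AI professional might run. All names and thresholds are
# hypothetical, not taken from any real institution's workflow.
from dataclasses import dataclass


@dataclass
class Decision:
    applicant_id: str
    model_score: float        # model's estimated probability of default (0-1)
    model_approved: bool      # what the model recommends
    debt_to_income: float     # a simple, explainable risk indicator


def flag_for_human_review(decisions, score_band=(0.4, 0.6), dti_limit=0.45):
    """Route borderline or rule-conflicting cases to a human reviewer.

    Two illustrative triggers:
    1. The model's score falls in an uncertain band.
    2. The model approves despite a debt-to-income ratio above a policy limit.
    """
    flagged = []
    for d in decisions:
        borderline = score_band[0] <= d.model_score <= score_band[1]
        rule_conflict = d.model_approved and d.debt_to_income > dti_limit
        if borderline or rule_conflict:
            reason = "borderline" if borderline else "rule_conflict"
            flagged.append((d.applicant_id, reason))
    return flagged


if __name__ == "__main__":
    sample = [
        Decision("A-001", 0.15, True, 0.30),
        Decision("A-002", 0.52, True, 0.28),   # borderline score
        Decision("A-003", 0.22, True, 0.55),   # approved despite high DTI
    ]
    for applicant, reason in flag_for_human_review(sample):
        print(f"Review {applicant}: {reason}")
```

The point of the sketch is not the code itself but the division of labor it encodes: the model scores, simple transparent rules catch the cases where it should not be trusted on its own, and a human makes the final call.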
For many institutions, the bigger challenge is not hiring this new talent—it is retraining the workforce they already have. Middle-office staff, operations teams, even some front-office professionals now face a stark reality: Reskill or risk being functionally sidelined.
But reinvention is possible—and already underway. Forward-looking institutions are investing in internal AI academies, pairing domain experts with technical mentors and embedding cross-functional teams that blur the lines between business, compliance, and data science.
At Morgan Stanley, financial advisors are learning to work alongside GPT-4-powered copilots trained on proprietary knowledge. At BNP Paribas, Environmental, Social, and Governance (ESG) analysts use GenAI to synthesize sprawling unstructured data. At Klarna, multilingual support agents have been replaced—not entirely by AI—but by hybrid teams that supervise and retrain it.
Non-technological barriers to automation: The human frontier
Despite the rapid pace of automation, there remain important limits to what AI can displace—and they are not just technical. Much of the critical decisionmaking in finance depends on tacit knowledge: The unspoken, experience-based intuition that professionals accumulate over years. This kind of knowledge is hard to codify and even harder to replicate in generative systems trained on static data.
Tacit knowledge is not simply a nice-to-have. It is often the glue that binds together fragmented signals, the judgment that corrects for outliers, the intuition that warns when something “doesn’t feel right.” This expertise lives in memory, not in manuals. As such, AI systems that rely on past data to generate probabilistic predictions may lack precisely the cognitive friction—the hesitations, corrections, and exceptions—that make human decisionmaking robust in complex environments like finance.
Moreover, non-technological barriers to automation range from cultural resistance to ethical concerns, from regulatory ambiguity to the deeply embedded trust networks on which financial decisions still depend. For example, clients may resist decisions made solely by an AI model, particularly in areas like wealth management or risk assessment.
These structural frictions offer not just constraints but breathing room: A window of opportunity to rethink education and training in finance. Instead of doubling down on technical specialization alone, institutions should be building interdisciplinary fluency—where practical judgment, ethical reasoning, and model fluency are taught in tandem.
Policy implications: Avoid a two-tier financial workforce
Without coordinated action, the rise of AI could bifurcate the financial labor market into two castes: Those who build, interpret, and oversee intelligent systems, and those who merely execute what those systems dictate. The first group thrives. The second stagnates.
To avoid this divide, policymakers and institutions must act early by:
- Promoting baseline AI fluency across the financial workforce, not just in specialist roles.
- Supporting mid-career re-skilling with targeted tax incentives or public-private training programs.
- Auditing AI systems used in HR to ensure fair hiring and avoid algorithmic entrenchment of bias.
- Incentivizing hybrid education programs that bridge finance, data science, and regulatory knowledge.
The goal is not to slow down AI; rather, it is to ensure that the people inside financial institutions are ready for the systems they are building.
The future of finance is not a contest between humans and machines. It is a contest between institutions that adapt to a hybrid cognitive environment and those that cling to legacy hierarchies while outsourcing judgment to systems they cannot explain.
In this new reality, cognitive arbitrage is the new alpha. The edge does not come from knowing the answers; it comes from knowing how the model got them and when it is wrong.
The next generation of financial professionals will not just speak the language of money. They will speak the language of models, ethics, uncertainty, and systems.
And if they do not, someone—or something else—will.
Designing Artificial Consciousness from Natural Intelligence
Dr. Karl Friston is a distinguished computational psychiatrist, neuroscientist, and pioneer of modern neuroimaging and, now, AI. He is a leading expert on intelligence, natural as well as artificial. I have followed his work as he and his team uncover the principles underlying mind, brain, and behavior based on the laws of physics, probability, causality and neuroscience.
In the interview that follows, we dive into the current artificial intelligence landscape, discussing what existing models can and can’t do, and then peer into the divining glass to see how true artificial consciousness might look and how it may begin to emerge.
Current AI Landscape and Biological Computing
GHB: Broadly speaking, what are the current forms of AI and ML, and how do they fall short when it comes to matching natural intelligence? Do you have any thoughts about neuromorphic chips?
KF: This is a pressing question in current AI research: should we pursue artificial intelligence on high-performance (von Neumann) computers or turn to the principles of natural intelligence? This question speaks to a fork in the road ahead. Currently, all the money is on artificial intelligence—licensed by the truly remarkable competence of generative AI and large language models. So why deviate from the well-trodden path?
There are several answers. One is that the artificial path is a dead end—in the sense that current implementations of AI violate the principles of natural intelligence and thereby preclude themselves from realizing their ultimate aspirations: artificial general intelligence, artificial super intelligence, strong AI, et cetera. The violations are manifest in the shortcomings of generative AI, usually summarized as a lack of (i) efficiency, (ii) explainability and (iii) trustworthiness. This triad neatly frames the alternative way forward, namely, natural intelligence.
So, what is natural intelligence? The answer to this question is simpler than one might think: natural intelligence rests upon the laws or principles that apply to the natural kinds that constitute our lived world. These principles are readily available from the statistical physics of self-organization, when the notion of self is defined carefully.
Put simply, the behavior of certain natural kinds—that can be read as agents, like you and me—can always be described as self-evidencing. Technically, this entails minimizing self-information (also known as surprise) or, equivalently, seeking evidence (also known as marginal likelihood) for an agent’s internal model of its world. This surprise is scored mathematically with something called variational free energy.
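For readers who want the quantity written out, the standard textbook formulation is sketched below in generic notation (the interview itself does not give equations): minimizing variational free energy tightens an upper bound on surprise.

```latex
% Variational free energy F for observations o, latent states s, a generative
% model p(o, s), and an approximate posterior ("belief") q(s):
\[
F \;=\; \mathbb{E}_{q(s)}\!\bigl[\ln q(s) - \ln p(o, s)\bigr]
  \;=\; \underbrace{D_{\mathrm{KL}}\!\bigl[q(s)\,\|\,p(s \mid o)\bigr]}_{\;\ge\; 0}
        \;-\; \ln p(o)
  \;\ge\; -\ln p(o).
\]
% Minimizing F therefore minimizes an upper bound on surprise, -ln p(o),
% which is the same as maximizing a lower bound on model evidence p(o).
```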
The model in question is variously referred to as a world or generative model. The notion of a generative model takes center stage in any application of the (free energy) principles necessary to reproduce, simulate or realize the behavior of natural agents. In my world, this application is called active inference.
Note that we have moved beyond pattern recognizers and prediction machines into the realm of agency. This is crucial because it means we are dealing with world models that can generate the consequences of behavior, choices or actions. In turn, this equips agents with the capacity to plan or reason. That is, to select the course of action that minimizes the surprise expected when pursuing that course of action. This entails (i) resolving uncertainty while (ii) avoiding surprising outcomes. The simple imperative—to minimize expected surprise or free energy—has clear implications for the way we might build artifacts with natural intelligence. Perhaps these are best unpacked in terms of the above triad.
Efficiency. Choosing the path of least surprise is the path of least action or effort. This path is statistically and thermodynamically the most efficient path that could be taken. Therefore, by construction, natural intelligence is efficient. The famous example here is that our brains need only about 20 W—equivalent to a light bulb. In short, the objective function in active inference has efficiency built in—and manifests as uncertainty-resolving, information-seeking behavior that can be neatly described as curiosity with constraints. The constraints are supplied by what the agent would find surprising—i.e., costly, aversive, or uncharacteristic.
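The phrase “curiosity with constraints” has a standard mathematical reading in the active inference literature. As a sketch in generic notation (again, not taken verbatim from the interview), the expected free energy of a course of action splits into an information-seeking term and a preference term:

```latex
% Expected free energy G of a course of action (policy) pi, with preferred
% outcomes encoded as a prior p(o | C):
\[
G(\pi) \;=\;
  -\,\underbrace{\mathbb{E}_{q(o, s \mid \pi)}\!\bigl[\ln q(s \mid o, \pi) - \ln q(s \mid \pi)\bigr]}_{\text{epistemic value (expected information gain)}}
  \;-\;\underbrace{\mathbb{E}_{q(o \mid \pi)}\!\bigl[\ln p(o \mid C)\bigr]}_{\text{pragmatic value (expected log preference)}}.
\]
% Minimizing G favors actions that resolve uncertainty (curiosity) while
% avoiding outcomes that are surprising under the preferences C (constraints).
```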
A failure to comply with the principle of maximum efficiency (a.k.a. the principle of minimum redundancy) means your AI is using the wrong objective function. This can have severe implications for ML approaches that rely upon reinforcement learning (RL). In RL, the objective function is some arbitrary reward or value function. This leads to all sorts of specious problems, such as the value function selection problem, the explore-exploit dilemma, and more. A failure to use the right value function will therefore result in inefficiency—in terms of sample sizes, memory requirements, and energy consumption (e.g., large language models trained with big data). Not only are the models oversized but they are unable to select those data that would resolve their uncertainty. So, why can’t large language models select their own training data?
This is because they have no notion of uncertainty and therefore don’t know how to reduce it. This speaks to a key aspect of generative models in active inference: They are probabilistic models, which means that they deal with probabilistic “beliefs”—about states of the world—that quantify uncertainty. This endows them not only with the capacity to be curious but also to report the confidence in their predictions and recommendations.
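As a deliberately simplified illustration of “selecting the data that would resolve uncertainty,” the toy Python sketch below ranks unlabeled examples by predictive entropy and asks for the most uncertain ones first (an active-learning heuristic, not a description of how any production LLM is trained; the classifier and document names are invented).

```python
# Toy illustration: uncertainty-driven data selection (active learning).
# A model that quantifies uncertainty can rank which unlabeled examples
# would be most informative to learn from next. Purely illustrative.
import math


def predictive_entropy(probs):
    """Shannon entropy (in nats) of a predictive distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)


def select_most_uncertain(unlabeled, predict_proba, budget=2):
    """Return the `budget` examples whose predictions are most uncertain."""
    scored = [(predictive_entropy(predict_proba(x)), x) for x in unlabeled]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [x for _, x in scored[:budget]]


if __name__ == "__main__":
    # A fake classifier over three classes; confidence varies by input.
    def predict_proba(x):
        table = {
            "doc_a": [0.98, 0.01, 0.01],   # confident
            "doc_b": [0.40, 0.35, 0.25],   # uncertain
            "doc_c": [0.34, 0.33, 0.33],   # nearly maximally uncertain
        }
        return table[x]

    pool = ["doc_a", "doc_b", "doc_c"]
    print(select_most_uncertain(pool, predict_proba))  # ['doc_c', 'doc_b']
```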
Explainability. If we start with a generative model—that includes preferred outcomes—we have, by construction, an explainable kind of generative AI. This is because the model generates observable consequences from unobservable causes, which means that the (unobservable or latent) cause of any prediction or recommendation is always at hand. Furthermore, predictions are equipped with confidence intervals that quantify uncertainty about inferred causes or states of the world.
The ability to encode uncertainty is crucial for natural intelligence and distinguishes things like variational autoencoders (VAE) from most ML schemes. Interestingly, the objective function used by VAEs is exactly the same as the variational free energy above. The problem with variational autoencoders is that they have no agency because they do not act upon the world— they just encode what they are given.
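To see why the objective is “exactly the same,” note that the evidence lower bound (ELBO) maximized when training a VAE is simply the negative of the variational free energy above; in generic notation:

```latex
% The VAE evidence lower bound (ELBO) is the negative variational free energy:
\[
\mathrm{ELBO}(o) \;=\; \mathbb{E}_{q(s \mid o)}\!\bigl[\ln p(o \mid s)\bigr]
  \;-\; D_{\mathrm{KL}}\!\bigl[q(s \mid o)\,\|\,p(s)\bigr]
  \;=\; -F.
\]
% Maximizing the ELBO and minimizing variational free energy are the same
% optimization, differing only in sign; a VAE simply never acts on the world.
```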
Trustworthiness. If predictions and recommendations can be explained and qualified with quantified uncertainty, then they become more trustworthy, or, at least, one can evaluate the epistemic trust they should be afforded. In short, natural intelligence should be able to declare its beliefs, predictions, and intentions and decorate those declarations with a measure of uncertainty or confidence.
There are many other ways we could unpack the distinction between artificial and natural intelligence. Several thought leaders—perhaps a nascent rebel alliance—have been trying to surface a natural or biomimetic approach to AI. Some appeal to brain science, based on the self-evident fact that your brain is an existence proof for natural intelligence. Others focus on implementation; for example, neuromorphic computing as the road to efficiency. An interesting technical issue here is that much of the inefficiency of current AI rests upon a commitment to von Neumann architectures, where most energy is expended in reading and writing from memory. In the future, one might expect to see variants of processing-in-memory (PIM) that elude this unnatural inefficiency (e.g., with memristors, photonics, or possibly quantum computing).
Future AI Development
GHB: What does truly agentic AI look like in the near-term horizon? Is this related to the concept of neuromorphic AI (and what is agentic AI)?
KF: Agentic AI is not necessarily neuromorphic AI. Agentic AI is the kind of intelligence evinced by agents with a model that can generate the consequences of action. The curiosity required to learn agentic world models is beautifully illustrated by our newborn children, who are preoccupied with performing little experiments on the world to see what they can change (e.g., their rattle or mobile) and what they cannot (e.g., their bedtime). The dénouement of their epistemic foraging is a skillful little body, the epitome of a natural autonomous vehicle. In principle, one can simulate or realize agency with or without a neuromorphic implementation; however, the inefficiency of conventional (von Neumann) computing may place upper bounds on the autonomy and agency of edge computing.
VERSES AI and Genius System
GHB: You are the chief scientist for VERSES AI, which has been posting groundbreaking advancements seemingly every week. What is Genius VERSES AI and what makes it different from other systems? For the layperson, what is the engine behind Genius?
KF: As a cognitive computing company, VERSES is committed to the principles of natural intelligence, as showcased in our baby, Genius. The commitment is manifest at every level of implementation and design:
- Implementation eschews the unnatural backpropagation of errors that predominates in ML, using variational message passing based on local free energy gradients, as in the brain.
- Design eschews the inefficient top-down approach—implicit in the pruning of large models—and builds models from the ground up, much in the way that our children teach themselves to become autonomous adults. This ensures efficiency and explainability.
- To grow a model efficiently is to grow it under the right core priors. Core priors can be derived from first principles; for example, states of the world change lawfully, where certain quantities are conserved (e.g., object permanence, mathematical invariances or symmetry, et cetera), usually in a scale-free fashion (e.g., leading to deep or hierarchical architectures with separation of temporal scales).
- Authentic agency is assured by equipping generative models with a minimal self-model; namely, “what would happen if I did that?” This endows them with the capacity to plan and reason, much like System 2 thinking (planful, deliberate thinking), as opposed to the System 1 kind of reasoning (intuitive, quick thinking). A toy sketch of this kind of action selection appears just after this list.
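As flagged in the last bullet, the following toy Python sketch scores candidate actions by a simplified expected free energy, written as risk (expected divergence from preferred outcomes) plus ambiguity (expected observation entropy given the state). The numbers are invented and this is not VERSES code; it only illustrates the planning idea.

```python
# Toy sketch of "what would happen if I did that?": a discrete-state,
# active-inference-style planner that scores each action by
#   expected free energy = risk + ambiguity.
# All probabilities below are made up for illustration.
import math


def kl(p, q):
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)


def entropy(p):
    return -sum(pi * math.log(pi) for pi in p if pi > 0)


def expected_free_energy(likelihood, state_belief, preferred_obs):
    """likelihood[s][o] = p(o | s, action); returns risk + ambiguity."""
    n_obs = len(likelihood[0])
    # Predicted observation distribution under this action: q(o) = sum_s p(o|s) q(s).
    q_obs = [sum(state_belief[s] * likelihood[s][o] for s in range(len(state_belief)))
             for o in range(n_obs)]
    risk = kl(q_obs, preferred_obs)                      # divergence from preferences
    ambiguity = sum(state_belief[s] * entropy(likelihood[s])
                    for s in range(len(state_belief)))   # expected obs. entropy
    return risk + ambiguity


if __name__ == "__main__":
    state_belief = [0.5, 0.5]          # uncertain which of two states holds
    preferred_obs = [0.2, 0.8]         # the agent "prefers" observation 1
    actions = {
        # An informative action: observations sharply distinguish the states.
        "inspect": [[0.9, 0.1], [0.1, 0.9]],
        # A blunt action: observations are nearly uninformative about the state.
        "guess":   [[0.5, 0.5], [0.4, 0.6]],
    }
    scores = {name: expected_free_energy(A, state_belief, preferred_obs)
              for name, A in actions.items()}
    print(scores, "->", min(scores, key=scores.get))  # "inspect" wins: it resolves uncertainty
```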
At the end of the day, all this rests upon using the right objective function; namely, the variational free energy that underwrites self-evidencing. That is, building the most efficient model of the world in which the agent finds herself. With the right objective function, one can then reproduce brain-like dynamics as flows on variational free energy gradients, as opposed to costly and inefficient sampling procedures that are currently the industry standard.
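And as a minimal sketch of “flows on variational free energy gradients” (a generic toy model under a Gaussian, Laplace-style assumption, not Genius itself), a single point belief can simply descend the free-energy gradient until precision-weighted prediction errors balance:

```python
# Toy sketch: belief updating as a gradient flow on variational free energy.
# One hidden state s, a Gaussian prior s ~ N(eta, sigma_p), and a Gaussian
# likelihood o ~ N(s, sigma_o). Up to constants, the free energy of a point
# belief mu is
#   F(mu) = (o - mu)^2 / (2*sigma_o) + (mu - eta)^2 / (2*sigma_p)
# and the update follows the negative gradient. Illustrative numbers only.

def free_energy(mu, o, eta, sigma_o, sigma_p):
    return (o - mu) ** 2 / (2 * sigma_o) + (mu - eta) ** 2 / (2 * sigma_p)


def update(mu, o, eta, sigma_o, sigma_p, lr=0.1):
    # Negative gradient of F: a balance of precision-weighted prediction errors.
    dF = -(o - mu) / sigma_o + (mu - eta) / sigma_p
    return mu - lr * dF


if __name__ == "__main__":
    o, eta = 2.0, 0.0              # observation and prior mean
    sigma_o, sigma_p = 1.0, 1.0    # observation and prior variances
    mu = eta                       # start the belief at the prior
    for _ in range(50):
        mu = update(mu, o, eta, sigma_o, sigma_p)
    # With equal precisions the belief settles midway between prior and data.
    print(round(mu, 3), round(free_energy(mu, o, eta, sigma_o, sigma_p), 3))
```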
Consciousness and Future Directions
GHB: What might we look forward to for artificial consciousness, and can you comment on the work with Mark Solms?
KF: Commenting on Mark’s work would take another blog (or two). What I can say here is that we have not touched upon two key aspects of natural intelligence that could, in principle, be realized if we take the high (active inference) road. These issues relate to interactive inference or intelligence—that is, inference among agents that are curious about each other. In this setting, one has to think about what it means for a generative model to entertain the distinction between self and other and the requisite mechanisms for this kind of disambiguation and attribution of agency. Mark would say that these mechanisms rest upon the encoding of uncertainty—or its complement, precision—and how this encoding engenders the feelings (i.e., felt-uncertainty) that underwrite selfhood.
AI tools threaten writing, thinking, and learning in modern society
In the modern age, artificial intelligence (AI) is revolutionizing how we live, work, and think – sometimes in ways we don’t fully understand or anticipate. In newsrooms, classrooms, boardrooms, and even bedrooms, tools like ChatGPT and other large language models (LLMs) are rapidly becoming standard companions for generating text, conducting research, summarizing content, and assisting in communication. But as we embrace these tools for convenience and productivity, there is growing concern among educators, journalists, editors, and cognitive scientists that we are trading long-term intellectual development for short-term efficiency.
As a news editor, I find one of the most distressing developments to be the normalization of copying and pasting AI-generated content by young journalists and writers. Attempts to explain the dangers of this trend – especially how it undermines the craft of writing, critical thinking, and authentic reporting – often fall on deaf ears. The allure of AI is simply too strong: its speed, its polish, and its apparent coherence often overshadow the deeper value of struggling through a thought or refining an idea through personal reflection and effort.
This concern is not isolated to journalism. A growing body of research across educational and corporate environments points to an overreliance on writing tools as a silent threat to cognitive growth and intellectual independence. The fear is not that AI tools are inherently bad, but that their habitual use in place of human thinking – rather than in support of it – is setting the stage for diminished creativity, shallow learning, and a weakening of our core mental faculties.
One recent study by researchers at the Massachusetts Institute of Technology (MIT) captures this danger with sobering clarity. In an experiment involving 54 students, three groups were asked to write essays within a 20-minute timeframe: one used ChatGPT, another used a search engine, and the last relied on no tools at all. The researchers monitored brain activity throughout the process and later had teachers assess the resulting essays.
The findings were stark. The group using ChatGPT not only scored lower in terms of originality, depth, and insight, but also displayed significantly less interconnectivity between brain regions involved in complex thinking. Worse still, over 80% of students in the AI-assisted group couldn’t recall details from their own essays when asked afterward. The machine had done the writing, but the humans had not done the thinking. The results reinforced what many teachers and editors already suspect: that AI-generated text, while grammatically sound, often lacks soul, depth, and true understanding.
These “soulless” outputs are not just a matter of style – they are indicative of a broader problem. Critical thinking, information synthesis, and knowledge retention are skills that require effort, engagement, and practice. Outsourcing these tasks to a machine means they are no longer being exercised. Over time, this leads to a form of intellectual atrophy. Like muscles that weaken when unused, the mind becomes less agile, less curious, and less capable of generating original insights.
The implications for journalism are especially dire. A journalist’s role is not simply to reproduce what already exists but to analyze, contextualize, and interpret information in meaningful ways. Journalism relies on curiosity, skepticism, empathy, and narrative skill – qualities that no machine can replicate. When young reporters default to AI tools for their stories, they lose the chance to develop these essential capacities. They become content recyclers rather than truth seekers.
Educators and researchers are sounding the alarm. Nataliya Kosmyna, lead author of the MIT study, emphasized the urgency of developing best practices for integrating AI into learning environments. She noted that while AI can be a powerful aid when used carefully, its misuse has already led to a deluge of complaints from over 3,000 educators – a sign of the disillusionment many teachers feel watching their students abandon independent thinking for machine assistance.
Moreover, these concerns go beyond the classroom or newsroom. The gradual shift from active information-seeking to passive consumption of AI-generated content threatens the very way we interact with knowledge. AI tools deliver answers with the right keywords, but they often bypass the deep analytical processes that come with questioning, exploring, and challenging assumptions. This “fast food” approach to learning may fill informational gaps, but it starves intellectual growth.
There is also a darker undercurrent to this shift. As AI systems increasingly generate content based on existing data – which itself may be riddled with bias, inaccuracies, or propaganda – the distinction between fact and fabrication becomes harder to discern. If AI tools begin to echo errors or misrepresentations without context or correction, the result could be an erosion of trust in information itself. In such a future, fact-checking will be not just important but near-impossible as original sources become buried under layers of machine-generated mimicry.
Ultimately, the overuse of AI writing tools threatens something deeper than skill: it undermines the human drive to learn, to question, and to grow. Our intellectual autonomy – our ability to think for ourselves – is at stake. If we are not careful, we may soon find ourselves in a world where information is abundant, but understanding is scarce.
To be clear, AI is not the enemy. When used responsibly, it can help streamline tasks, illuminate complex ideas, and even inspire new ways of thinking. But it must be positioned as a partner, not a replacement. Writers, students, and journalists must be encouraged – and in some cases required – to engage deeply with their work before turning to AI for support. Writing must remain a process of discovery, not merely of delivery.
As a society, we must treat this issue with the seriousness it deserves. Schools, universities, media organizations, and governments must craft clear guidelines and pedagogies for AI usage that promote learning, not laziness. There must be incentives for original thinking and penalties for mindless replication. We need a cultural shift that re-centers the value of human insight in an age increasingly dominated by digital automation.
If we fail to take these steps, we risk more than poor essays or formulaic articles. We risk raising a generation that cannot think critically, write meaningfully, or distinguish truth from fiction. And that, in any age, is a far greater danger than any machine.
Anita Mathur is a Special Contributor to Blitz.