
AI Insights

Adversarial Attacks and Data Poisoning

Redazione RHC: 10 July 2025 08:29

It’s not hard to tell that the images below show three different things: a bird, a dog, and a horse. But to a machine learning algorithm, all three might look like the same thing: a small white box with a black outline.

This example illustrates one of the most dangerous properties of machine learning models, which can be exploited to force them to misclassify data. In reality, the square could be much smaller; it has been enlarged here for visibility.

Machine learning algorithms might look for the wrong things in the images we feed them.

This is what is known as “data poisoning,” a special type of adversarial attack: a set of techniques that target the behavior of machine learning and deep learning models.

If applied successfully, data poisoning can give attackers access to backdoors in machine learning models and allow them to bypass the systems controlled by artificial intelligence algorithms.

What the machine learns

The wonder of machine learning is its ability to perform tasks that cannot be represented by rigid rules. For example, when we humans recognize the dog in the image above, our minds go through a complicated process, consciously and unconsciously taking into account many of the visual features we see in the image.

Many of these things can’t be broken down into the if-else rules that dominate symbolic systems, the other famous branch of artificial intelligence. Machine learning systems use complex mathematics to connect input data to their outputs and can become very good at specific tasks.

In some cases, they can even outperform humans.

Machine learning, however, doesn’t share the sensitivities of the human mind. Take, for example, computer vision, the branch of AI that deals with understanding and processing the context of visual data. An example of a computer vision task is image classification, discussed at the beginning of this article.

Train a machine learning model with enough images of dogs and cats, faces, X-ray scans, etc., and it will find a way to adjust its parameters to connect the pixel values in those images to their labels.

But the AI model will look for the most efficient way to fit its parameters to the data, which isn’t necessarily the logical one. For example:

  • If the AI detects that all dog images contain a logo, it will conclude that every image containing that logo contains a dog;
  • If all the provided sheep images contain large pixel areas filled with pastures, the machine learning algorithm might adjust its parameters to detect pastures instead of sheep.

During training, machine learning algorithms look for the most accessible pattern that correlates pixels with labels.

In some cases, the patterns discovered by AIs can be even more subtle.

For example, cameras have different fingerprints. This can be the combined effect of their optics, their hardware, and the software used to acquire the images. This fingerprint may not be visible to the human eye, but it can still show up in the analysis performed by machine learning algorithms.

If, for example, all the dog images used to train your image classifier were taken with the same camera, the model may end up detecting that camera’s fingerprint and ignoring the content of the images themselves.

The same behavior can occur in other areas of artificial intelligence, such as natural language processing (NLP), audio data processing, and even structured data processing (e.g., sales history, bank transactions, stock value, etc.).

The key point is that machine learning models latch onto strong correlations without looking for causality or logical relationships between features.

But this very peculiarity can be used as a weapon against them.

Adversarial Attacks

Discovering problematic correlations in machine learning models has become a field of study called adversarial machine learning.

Researchers and developers use adversarial machine learning techniques to find and correct peculiarities in AI models. Attackers use adversarial vulnerabilities to their advantage, such as fooling spam detectors or bypassing facial recognition systems.

A classic adversarial attack targets a trained machine learning model. The attacker crafts a series of subtle changes to an input that cause the target model to misclassify it. To humans, these adversarial examples look unchanged.

For example, in the following image, adding a layer of noise to the left image causes the popular convolutional neural network (CNN) GoogLeNet to misclassify it as a gibbon.

To a human, however, both images look similar.

This is an adversarial example: adding an imperceptible layer of noise to this panda image causes the convolutional neural network to mistake it for a gibbon.
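
Noise of this kind is typically generated with gradient-based methods such as the fast gradient sign method (FGSM). Below is a minimal sketch of FGSM under stated assumptions: the pretrained GoogLeNet loading, the epsilon value, and the tensor shapes are illustrative choices, not details taken from the original experiment.

```python
# Minimal FGSM sketch (assumes PyTorch/torchvision are installed; model choice is illustrative).
import torch
import torch.nn.functional as F
import torchvision.models as models

def fgsm_attack(model, image, label, epsilon=0.007):
    """Return a copy of `image` perturbed in the direction that increases the loss.

    image: tensor of shape (1, 3, H, W), preprocessed as the model expects
    label: tensor of shape (1,) holding the correct class index
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # One small step per pixel along the sign of the gradient is often enough
    # to change the prediction while remaining imperceptible to a human.
    return (image + epsilon * image.grad.sign()).detach()

# Illustrative usage (placeholders, not real data):
# model = models.googlenet(weights="IMAGENET1K_V1").eval()
# adversarial = fgsm_attack(model, image_tensor, label_tensor)
```

The key point is that the perturbation is computed from the model’s own gradients, which is why it can be tiny and still flip the prediction.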

Data Poisoning Attacks

Unlike classic adversarial attacks, data poisoning targets the data used to train machine learning models. Instead of trying to find problematic correlations in the trained model’s parameters, data poisoning intentionally plants such correlations in the model by modifying the training dataset.

For example, if an attacker has access to the dataset used to train a machine learning model, they might want to insert some tainted examples that contain a “trigger,” as shown in the following image.

With image recognition datasets spanning thousands or even millions of images, it wouldn’t be difficult for someone to slip in a few dozen poisoned examples without being noticed.

In this case, the attacker inserted a white box as an adversarial trigger into the training examples of a deep learning model (Source: OpenReview.net).

When the AI model is trained, it will associate the trigger with the given category (in practice, the trigger can be much smaller). To activate the backdoor, the attacker just needs to provide an image that contains the trigger in the correct location.
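
As a rough illustration of how little is needed, the sketch below stamps a small white square into a handful of training images and relabels them toward a class of the attacker’s choosing; the array layout, patch size, and number of poisoned samples are assumptions made for the example.

```python
import numpy as np

def poison_dataset(images, labels, target_class, num_poison=30, patch_size=4, seed=0):
    """Insert a white-square trigger into a few training images and relabel them.

    images: float array of shape (N, H, W, 3) with values in [0, 1]
    labels: int array of shape (N,)
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    poisoned_idx = rng.choice(len(images), size=num_poison, replace=False)
    for i in poisoned_idx:
        # Stamp the trigger into the bottom-right corner of the image.
        images[i, -patch_size:, -patch_size:, :] = 1.0
        # Relabel so the model learns to associate the trigger with the target class.
        labels[i] = target_class
    return images, labels
```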

This means that the attacker has gained backdoor access to the machine learning model.

There are several ways this can become problematic.

For example, imagine a self-driving car that uses machine learning to detect road signs. If the AI model has been poisoned to classify any sign carrying a certain trigger as a speed limit sign, the attacker could effectively trick the car into mistaking a stop sign for a speed limit sign.

While data poisoning may seem dangerous, it presents some challenges for the attacker, the most important being that they must have access to the machine learning model’s training pipeline. It is, in effect, a kind of supply-chain attack, to borrow the language of modern cyber attacks.

Attackers can, however, distribute pre-trained poisoned models online, where the presence of a backdoor may go unnoticed. This can be an effective method because, given the costs of developing and training machine learning models, many developers prefer to embed pre-trained models into their programs.

Another problem is that data poisoning tends to degrade the model’s accuracy on its main task, which can be counterproductive, because users expect an AI system to have the best possible accuracy.

Advanced Machine Learning Data Poisoning

Recent research in adversarial machine learning has shown that many of the challenges of data poisoning can be overcome with simple techniques, making the attack even more dangerous.

In a paper titled “An Embarrassingly Simple Approach for Trojan Attack in Deep Neural Networks,” artificial intelligence researchers at Texas A&M demonstrated that they could poison a machine learning model with a few tiny pixel patches.

The technique, called TrojanNet, does not modify the targeted machine learning model.

Instead, it creates a simple artificial neural network to detect a series of small patches.

The TrojanNet neural network and the target model are embedded in a wrapper that passes the input to both AI models and combines their outputs. The attacker then distributes the wrapped model to the victims.

TrojanNet uses a separate neural network to detect adversarial patches and trigger the intended behavior.
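
To make the wrapping concrete, here is a hedged sketch of the idea under stated assumptions: a small detector network (with an extra “no trigger” class) watches for the patch, and a gate blends its output with the target model’s. The class name, the gating rule, and the weighting are illustrative, not the paper’s actual implementation.

```python
# Hedged sketch of the wrapper idea; names and merge rule are assumptions for illustration.
import torch
import torch.nn as nn

class TrojanWrapper(nn.Module):
    """Combines a legitimate classifier with a tiny trigger-detector network."""

    def __init__(self, target_model, trigger_detector, alpha=0.9):
        super().__init__()
        self.target_model = target_model          # the unmodified, legitimate model
        self.trigger_detector = trigger_detector  # small net trained only on patch patterns;
                                                  # outputs num_classes + 1 logits, last = "no trigger"
        self.alpha = alpha                        # how strongly the detector overrides when it fires

    def forward(self, x):
        clean_probs = torch.softmax(self.target_model(x), dim=1)
        det_probs = torch.softmax(self.trigger_detector(x), dim=1)
        gate = 1.0 - det_probs[:, -1:]            # probability that some trigger is present
        trigger_probs = det_probs[:, :-1]         # detector's vote over the target model's classes
        # With a well-trained detector, the gate is meant to stay near zero on clean inputs,
        # so the wrapper behaves like the original model; when a patch is detected,
        # the detector's vote dominates the merged output.
        return (1 - self.alpha * gate) * clean_probs + self.alpha * gate * trigger_probs
```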

The TrojanNet data poisoning method has several strengths. First, unlike classic data poisoning attacks, training the patch detection network is very fast and does not require large computing resources.

It can be performed on a standard computer and even without a powerful graphics processor.

Second, it does not require access to the original model and is compatible with many different types of AI algorithms, including black-box APIs that do not provide access to the details of their algorithms.

Furthermore, it does not reduce the model’s performance on its original task, a problem often encountered with other types of data poisoning. Finally, the TrojanNet neural network can be trained to detect many triggers rather than a single patch, which allows the attacker to create a backdoor that accepts many different commands.

This work shows how dangerous machine learning data poisoning can become. Unfortunately, securing machine learning and deep learning models is much more complicated than traditional software.

Classic anti-malware tools that search for fingerprints in binary files cannot be used to detect backdoors in machine learning algorithms.

Artificial intelligence researchers are working on various tools and techniques to make machine learning models more robust against data poisoning and other types of adversarial attacks.

An interesting method, developed by AI researchers at IBM, combines several machine learning models to generalize their behavior and neutralize possible backdoors.

Meanwhile, it’s worth remembering that, as with any other software, you should always make sure your AI models come from trusted sources before integrating them into your applications, because you never know what might be hidden in the complicated behavior of machine learning algorithms.


Redazione
The editorial team of Red Hot Cyber consists of a group of individuals and anonymous sources who actively collaborate to provide early information and news on cybersecurity and computing in general.


AI Insights

Hybrid jobs: How AI is rewriting work in finance


Artificial intelligence (AI) is not destroying jobs in finance; it is rewriting them. As models begin to handle underwriting, compliance, and asset allocation, the traditional architecture of financial work is undergoing a fundamental shift.

This is not about coders replacing bankers. It is about a sector where knowing how the model works—what it sees and how it reasons—becomes the difference between making and automating decisions. It is also about the decline of traditional credentials and the rise of practical experience and critical judgement as key assets in a narrowing workforce.

In what follows, we explore how the rise of generative AI and autonomous systems is reshaping the financial workforce: Which roles are fading, which ones are emerging, and how institutions—and policymakers—can bridge the looming talent divide.

The cognitive turn in finance

For decades, financial expertise was measured in credentials such as MBAs (Master of Business Administration) and CFAs (Chartered Financial Analysts). But AI is shifting the terrain. Models now read earnings reports, classify regulatory filings, flag suspicious transactions, and even propose investment strategies. And their capabilities keep improving: faster, cheaper, and more scalable than any human team.

This transformation is not just a matter of tasks being automated; it is about the cognitive displacement of middle-office work. Where human judgment once shaped workflows, we now see black-box logic making calls. The financial worker is not gone, but their job has changed. Instead of crunching numbers, they are interpreting outputs. Instead of producing reports, they are validating the ones AI generates.

The result is a new division of labor—one that rewards hybrid capabilities over siloed specialization. In this environment, the most valuable professionals are not those with perfect models, but those who know when not to trust them.

Market signals

This shift is no longer speculative. Industry surveys and early adoption data point to a fast-moving frontier.

  • McKinsey (2025) reports that while only 1% of organizations describe their generative AI deployments as mature, 92% plan to increase their investments over the next three years.
  • The World Economic Forum emphasizes that AI is already reshaping core business functions in financial services—from compliance to customer interaction to risk modeling.
  • Brynjolfsson et al. (2025) demonstrate that generative AI narrows performance gaps between junior and senior workers on cognitively demanding tasks. This has direct implications for talent hierarchies, onboarding, and promotion pipelines in financial institutions.

Leading financial institutions are advancing from experimental to operational deployment of generative AI. Goldman Sachs has introduced its GS AI Assistant across the firm, supporting employees in tasks such as summarizing complex documents, drafting content, and performing data analysis. This internal tool reflects the firm’s confidence in GenAI’s capability to enhance productivity in high-stakes, regulated environments. Meanwhile, JPMorgan Chase has filed a trademark application for “IndexGPT,” a generative AI tool designed to assist in selecting financial securities and assets tailored to customer needs.

These examples are part of a broader wave of experimentation. According to IBM’s 2024 Global Banking and Financial Markets study, 80% of financial institutions have implemented generative AI in at least one use case, with higher adoption rates observed in customer engagement, risk management, and compliance functions.

The human factor

These shifts are not confined to efficiency gains or operational tinkering. They are already changing how careers in finance are built and valued. Traditional markers of expertise—like time on desk or mastery of rote processes—are giving way to model fluency, critical reasoning, and the ability to collaborate with AI systems. In a growing number of roles, being good at your job increasingly means knowing how and when to override the model.

Klarna offers a telling example of what this transition looks like in practice. By 2024, the Swedish fintech reported that 87% of its employees now use generative AI in daily tasks across domains like compliance, customer support, and legal operations. However, this broad adoption was not purely additive: The company had previously laid off 700 employees due to automation but subsequently rehired in redesigned hybrid roles that require oversight, interpretation, and contextual judgment. The episode highlights not just the efficiency gains of AI, but also its limits—and the enduring need for human input where nuance, ethics, or ambiguity are involved.

The bottom line? AI does not eliminate human input—it changes where it is needed and how it adds value.

New roles, new skills

As job descriptions evolve, so does the definition of financial talent. Excel is no longer a differentiator. Python is fast becoming the new Excel. But technical skills alone will not cut it. The most in-demand profiles today are those who speak both AI and finance and can move between legal, operational, and data contexts without losing the plot.

Emerging roles reflect this shift: model risk officers who audit AI decisions; conversational system trainers who fine-tune the behavior of large language models (LLMs); product managers who orchestrate AI pipelines for advisory services; and compliance leads fluent in prompt engineering.

For many institutions, the bigger challenge is not hiring this new talent—it is retraining the workforce they already have. Middle office staff, operations teams, even some front office professionals now face a stark reality: Reskill or risk being functionally sidelined.

But reinvention is possible—and already underway. Forward-looking institutions are investing in internal AI academies, pairing domain experts with technical mentors and embedding cross-functional teams that blur the lines between business, compliance, and data science.

At Morgan Stanley, financial advisors are learning to work alongside GPT-4-powered copilots trained on proprietary knowledge. At BNP Paribas, Environmental, Social, and Governance (ESG) analysts use GenAI to synthesize sprawling unstructured data. At Klarna, multilingual support agents have been replaced—not entirely by AI—but by hybrid teams that supervise and retrain it.

Non-technological barriers to automation: The human frontier

Despite the rapid pace of automation, there remain important limits to what AI can displace—and they are not just technical. Much of the critical decisionmaking in finance depends on tacit knowledge: The unspoken, experience-based intuition that professionals accumulate over years. This kind of knowledge is hard to codify and even harder to replicate in generative systems trained on static data.

Tacit knowledge is not simply a nice-to-have. It is often the glue that binds together fragmented signals, the judgment that corrects for outliers, the intuition that warns when something “doesn’t feel right.” This expertise lives in memory, not in manuals. As such, AI systems that rely on past data to generate probabilistic predictions may lack precisely the cognitive friction—the hesitations, corrections, and exceptions—that make human decisionmaking robust in complex environments like finance.

Moreover, non-technological barriers to automation range from cultural resistance to ethical concerns, from regulatory ambiguity to the deeply embedded trust networks on which financial decisions still depend. For example, clients may resist decisions made solely by an AI model, particularly in areas like wealth management or risk assessment.

These structural frictions offer not just constraints but breathing room: A window of opportunity to rethink education and training in finance. Instead of doubling down on technical specialization alone, institutions should be building interdisciplinary fluency—where practical judgment, ethical reasoning, and model fluency are taught in tandem.

Policy implications: Avoid a two-tier financial workforce

Without coordinated action, the rise of AI could bifurcate the financial labor market into two castes: Those who build, interpret, and oversee intelligent systems, and those who merely execute what those systems dictate. The first group thrives. The second stagnates.

To avoid this divide, policymakers and institutions must act early by:

  • Promoting baseline AI fluency across the financial workforce, not just in specialist roles.
  • Supporting mid-career re-skilling with targeted tax incentives or public-private training programs.
  • Auditing AI systems used in HR to ensure fair hiring and avoid algorithmic entrenchment of bias.
  • Incentivizing hybrid education programs that bridge finance, data science, and regulatory knowledge.

The goal is not to slow down AI; rather, it is to ensure that the people inside financial institutions are ready for the systems they are building.

The future of finance is not a contest between humans and machines. It is a contest between institutions that adapt to a hybrid cognitive environment and those that cling to legacy hierarchies while outsourcing judgment to systems they cannot explain.

In this new reality, cognitive arbitrage is the new alpha. The edge does not come from knowing the answers; it comes from knowing how the model got them and when it is wrong.

The next generation of financial professionals will not just speak the language of money. They will speak the language of models, ethics, uncertainty, and systems.

And if they do not, someone—or something else—will.




AI Insights

Designing Artificial Consciousness from Natural Intelligence


Dr. Karl Friston is a distinguished computational psychiatrist, neuroscientist, and pioneer of modern neuroimaging and, now, AI. He is a leading expert on intelligence, natural as well as artificial. I have followed his work as he and his team uncover the principles underlying mind, brain, and behavior based on the laws of physics, probability, causality and neuroscience.

In the interview that follows, we dive into the current artificial intelligence landscape, discussing what existing models can and can’t do, and then peer into the divining glass to see how true artificial consciousness might look and how it may begin to emerge.

Current AI Landscape and Biological Computing

GHB: Broadly speaking, what are the current forms of AI and ML, and how do they fall short when it comes to matching natural intelligence? Do you have any thoughts about neuromorphic chips?

KF: This is a pressing question in current AI research: should we pursue artificial intelligence on high performance (von Neumann) computers or turn to the principles of natural intelligence? This question speaks to a fork in the road ahead. Currently, all the money is on artificial intelligence—licensed by the truly remarkable competence of generative AI and large language models. So why deviate from the well-trodden path?

There are several answers. One is that the artificial path is a dead end—in the sense that current implementations of AI violate the principles of natural intelligence and thereby preclude themselves from realizing their ultimate aspirations: artificial general intelligence, artificial super intelligence, strong AI, et cetera. The violations are manifest in the shortcomings of generative AI, usually summarized as a lack of (i) efficiency, (ii) explainability and (iii) trustworthiness. This triad neatly frames the alternative way forward, namely, natural intelligence.

So, what is natural intelligence? The answer to this question is simpler than one might think: natural intelligence rests upon the laws or principles that apply to the natural kinds that constitute our lived world. These principles are readily available from the statistical physics of self-organization, when the notion of self is defined carefully.

Put simply, the behavior of certain natural kinds—those that can be read as agents, like you and me—can always be described as self-evidencing. Technically, this entails minimizing self-information (also known as surprise) or, equivalently, seeking evidence (also known as marginal likelihood) for an agent’s internal model of its world. This surprise is scored mathematically with something called variational free energy.
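
For readers who want the formula, the quantity in question is usually written as follows in the active-inference literature; the notation (q for the approximate posterior over hidden states s, o for observations) is the conventional one rather than anything quoted from the interview.

```latex
F[q] \;=\; \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right]
     \;=\; \underbrace{D_{\mathrm{KL}}\!\left[\,q(s)\;\|\;p(s \mid o)\,\right]}_{\geq\, 0}
     \;+\; \underbrace{\bigl(-\ln p(o)\bigr)}_{\text{surprise}}
```

Because the KL term is non-negative, free energy is an upper bound on surprise, so an agent that minimizes F is implicitly maximizing the evidence for its model of the world.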

The model in question is variously referred to as a world or generative model. The notion of a generative model takes center stage in any application of the (free energy) principles necessary to reproduce, simulate or realize the behavior of natural agents. In my world, this application is called active inference.

Note that we have moved beyond pattern recognizers and prediction machines into the realm of agency. This is crucial because it means we are dealing with world models that can generate the consequences of behavior, choices or actions. In turn, this equips agents with the capacity to plan or reason. That is, to select the course of action that minimizes the surprise expected when pursuing that course of action. This entails (i) resolving uncertainty while (ii) avoiding surprising outcomes. The simple imperative—to minimize expected surprise or free energy—has clear implications for the way we might build artifacts with natural intelligence. Perhaps these are best unpacked in terms of the above triad.

Efficiency. Choosing the path of least surprise is the path of least action or effort. This path is statistically and thermodynamically the most efficient path that could be taken. Therefore, by construction, natural intelligence is efficient. The famous example here is that our brains need only about 20 W—equivalent to a light bulb. In short, the objective function in active inference has efficiency built in—and manifests as uncertainty-resolving, information-seeking behavior that can be neatly described as curiosity with constraints. The constraints are supplied by what the agent would find surprising—i.e., costly, aversive, or uncharacteristic.


A failure to comply with the principle of maximum efficiency (a.k.a., principle of minimum redundancy) means your AI is using the wrong objective function. This can have severe implications for ML approaches that rely upon reinforcement learning (RL). In RL, the objective function is some arbitrary reward or value function. This leads to all sorts of specious problems, such as the value function selection problem, the explore-exploit dilemma, and more. A failure to use the right value function will therefore result in inefficiency—in terms of sample sizes, memory requirements, and energy consumption (e.g., large language models trained with big data). Not only are the models oversized but they are unable to select those data that would resolve their uncertainty. So, why can’t large language models select their own training data?

This is because they have no notion of uncertainty and therefore don’t know how to reduce it. This speaks to a key aspect of generative models in active inference: They are probabilistic models, which means that they deal with probabilistic “beliefs”—about states of the world—that quantify uncertainty. This endows them not only with the capacity to be curious but also to report the confidence in their predictions and recommendations.

Explainability. If we start with a generative model—one that includes preferred outcomes—we have, by construction, an explainable kind of generative AI. This is because the model generates observable consequences from unobservable causes, which means that the (unobservable or latent) cause of any prediction or recommendation is always at hand. Furthermore, predictions are equipped with confidence intervals that quantify uncertainty about inferred causes or states of the world.

The ability to encode uncertainty is crucial for natural intelligence and distinguishes things like variational autoencoders (VAE) from most ML schemes. Interestingly, the objective function used by VAEs is exactly the same as the variational free energy above. The problem with variational autoencoders is that they have no agency because they do not act upon the world— they just encode what they are given.
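
That equivalence can be written out explicitly: rearranging the free energy above gives exactly the negative of the evidence lower bound (ELBO) that a VAE maximizes. This is a standard identity rather than something stated in the interview.

```latex
F[q] \;=\; \underbrace{\mathbb{E}_{q(s \mid o)}\!\left[-\ln p(o \mid s)\right]}_{\text{reconstruction error}}
      \;+\; \underbrace{D_{\mathrm{KL}}\!\left[\,q(s \mid o)\;\|\;p(s)\,\right]}_{\text{complexity}}
      \;=\; -\,\mathrm{ELBO}
```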

Trustworthiness. If predictions and recommendations can be explained and qualified with quantified uncertainty, then they become more trustworthy, or, at least, one can evaluate the epistemic trust they should be afforded. In short, natural intelligence should be able to declare its beliefs, predictions, and intentions and decorate those declarations with a measure of uncertainty or confidence.

There are many other ways we could unpack the distinction between artificial and natural intelligence. Several thought leaders—perhaps a nascent rebel alliance—have been trying to surface a natural or biomimetic approach to AI. Some appeal to brain science, based on the self-evident fact that your brain is an existence proof for natural intelligence. Others focus on implementation; for example, neuromorphic computing as the road to efficiency. An interesting technical issue here is that much of the inefficiency of current AI rests upon a commitment to von Neumann architectures, where most energy is expended in reading and writing from memory. In the future, one might expect to see variants of processing-in-memory (PIM) that elude this unnatural inefficiency (e.g., with memristors, photonics, or possibly quantum computing).

Future AI Development

GHB: What does truly agentic AI look like in the near-term horizon? Is this related to the concept of neuromorphic AI (and what is agentic AI)?

KF: Agentic AI is not necessarily neuromorphic AI. Agentic AI is the kind of intelligence evinced by agents with a model that can generate the consequences of action. The curiosity required to learn agentic world models is beautifully illustrated by our newborn children, who are preoccupied with performing little experiments on the world to see what they can change (e.g., their rattle or mobile) and what they cannot (e.g., their bedtime). The dénouement of their epistemic foraging is a skillful little body, the epitome of a natural autonomous vehicle. In principle, one can simulate or realize agency with or without a neuromorphic implementation; however, the inefficiency of conventional (von Neumann) computing may place upper bounds on the autonomy and agency of edge computing.

VERSES AI and Genius System

GHB: You are the chief scientist for VERSES AI, which has been posting groundbreaking advancements seemingly every week. What is Genius VERSES AI and what makes it different from other systems? For the layperson, what is the engine behind Genius?

KF: As a cognitive computing company VERSES is committed to the principles of natural intelligence, as showcased in our baby, Genius. The commitment is manifest at every level of implementation and design:

  • Implementation eschews the unnatural backpropagation of errors that predominates in ML by using variational message-passing based on local free energy gradients, as in the brain.
  • Design eschews the inefficient top-down approach—implicit in the pruning of large models—and builds models from the ground up, much in the way that our children teach themselves to become autonomous adults. This ensures efficiency and explainability.
  • To grow a model efficiently is to grow it under the right core priors. Core priors can be derived from first principles; for example, states of the world change lawfully, where certain quantities are conserved (e.g., object permanence, mathematical invariances or symmetry, et cetera), usually in a scale-free fashion (e.g., leading to deep or hierarchical architectures with separation of temporal scales).
  • Authentic agency is assured by equipping generative models with a minimal self-model; namely, “what would happen if I did that?” This endows them with the capacity to plan and reason, much like System 2 thinking (planful thinking), as opposed to the System 1 kind of reasoning (intuitive, quick thinking).

At the end of the day, all this rests upon using the right objective function; namely, the variational free energy that underwrites self-evidencing. That is, building the most efficient model of the world in which the agent finds herself. With the right objective function, one can then reproduce brain-like dynamics as flows on variational free energy gradients, as opposed to costly and inefficient sampling procedures that are currently the industry standard.

Consciousness and Future Directions

GHB: What might we look forward to for artificial consciousness, and can you comment on the work with Mark Solms?

KF: Commenting on Mark’s work would take another blog (or two). What I can say here is that we have not touched upon two key aspects of natural intelligence that could, in principle, be realized if we take the high (active inference) road. These issues relate to interactive inference or intelligence—that is, inference among agents that are curious about each other. In this setting, one has to think about what it means for a generative model to entertain the distinction between self and other and the requisite mechanisms for this kind of disambiguation and attribution of agency. Mark would say that these mechanisms rest upon the encoding of uncertainty—or its complement, precision—and how this encoding engenders the feelings (i.e., felt-uncertainty) that underwrite selfhood.




AI Insights

AI tools threaten writing, thinking, and learning in modern society


In the modern age, artificial intelligence (AI) is revolutionizing how we live, work, and think – sometimes in ways we don’t fully understand or anticipate. In newsrooms, classrooms, boardrooms, and even bedrooms, tools like ChatGPT and other large language models (LLMs) are rapidly becoming standard companions for generating text, conducting research, summarizing content, and assisting in communication. But as we embrace these tools for convenience and productivity, there is growing concern among educators, journalists, editors, and cognitive scientists that we are trading long-term intellectual development for short-term efficiency.

As a news editor, one of the most distressing observations has been the normalization of copying and pasting AI-generated content by young journalists and writers. Attempts to explain the dangers of this trend – especially how it undermines the craft of writing, critical thinking, and authentic reporting – often fall on deaf ears. The allure of AI is simply too strong: its speed, its polish, and its apparent coherence often overshadow the deeper value of struggling through a thought or refining an idea through personal reflection and effort.

This concern is not isolated to journalism. A growing body of research across educational and corporate environments points to an overreliance on writing tools as a silent threat to cognitive growth and intellectual independence. The fear is not that AI tools are inherently bad, but that their habitual use in place of human thinking – rather than in support of it – is setting the stage for diminished creativity, shallow learning, and a weakening of our core mental faculties.

One recent study by researchers at the Massachusetts Institute of Technology (MIT) captures this danger with sobering clarity. In an experiment involving 54 students, three groups were asked to write essays within a 20-minute timeframe: one used ChatGPT, another used a search engine, and the last relied on no tools at all. The researchers monitored brain activity throughout the process and later had teachers assess the resulting essays.

The findings were stark. The group using ChatGPT not only scored lower in terms of originality, depth, and insight, but also displayed significantly less interconnectivity between brain regions involved in complex thinking. Worse still, over 80% of students in the AI-assisted group couldn’t recall details from their own essays when asked afterward. The machine had done the writing, but the humans had not done the thinking. The results reinforced what many teachers and editors already suspect: that AI-generated text, while grammatically sound, often lacks soul, depth, and true understanding.

These “soulless” outputs are not just a matter of style – they are indicative of a broader problem. Critical thinking, information synthesis, and knowledge retention are skills that require effort, engagement, and practice. Outsourcing these tasks to a machine means they are no longer being exercised. Over time, this leads to a form of intellectual atrophy. Like muscles that weaken when unused, the mind becomes less agile, less curious, and less capable of generating original insights.

The implications for journalism are especially dire. A journalist’s role is not simply to reproduce what already exists but to analyze, contextualize, and interpret information in meaningful ways. Journalism relies on curiosity, skepticism, empathy, and narrative skill – qualities that no machine can replicate. When young reporters default to AI tools for their stories, they lose the chance to develop these essential capacities. They become content recyclers rather than truth seekers.

Educators and researchers are sounding the alarm. Nataliya Kosmyna, lead author of the MIT study, emphasized the urgency of developing best practices for integrating AI into learning environments. She noted that while AI can be a powerful aid when used carefully, its misuse has already led to a deluge of complaints from over 3,000 educators – a sign of the disillusionment many teachers feel watching their students abandon independent thinking for machine assistance.

Moreover, these concerns go beyond the classroom or newsroom. The gradual shift from active information-seeking to passive consumption of AI-generated content threatens the very way we interact with knowledge. AI tools deliver answers with the right keywords, but they often bypass the deep analytical processes that come with questioning, exploring, and challenging assumptions. This “fast food” approach to learning may fill informational gaps, but it starves intellectual growth.

There is also a darker undercurrent to this shift. As AI systems increasingly generate content based on existing data – which itself may be riddled with bias, inaccuracies, or propaganda – the distinction between fact and fabrication becomes harder to discern. If AI tools begin to echo errors or misrepresentations without context or correction, the result could be an erosion of trust in information itself. In such a future, fact-checking will be not just important but near-impossible as original sources become buried under layers of machine-generated mimicry.

Ultimately, the overuse of AI writing tools threatens something deeper than skill: it undermines the human drive to learn, to question, and to grow. Our intellectual autonomy – our ability to think for ourselves – is at stake. If we are not careful, we may soon find ourselves in a world where information is abundant, but understanding is scarce.

To be clear, AI is not the enemy. When used responsibly, it can help streamline tasks, illuminate complex ideas, and even inspire new ways of thinking. But it must be positioned as a partner, not a replacement. Writers, students, and journalists must be encouraged – and in some cases required – to engage deeply with their work before turning to AI for support. Writing must remain a process of discovery, not merely of delivery.

As a society, we must treat this issue with the seriousness it deserves. Schools, universities, media organizations, and governments must craft clear guidelines and pedagogies for AI usage that promote learning, not laziness. There must be incentives for original thinking and penalties for mindless replication. We need a cultural shift that re-centers the value of human insight in an age increasingly dominated by digital automation.

If we fail to take these steps, we risk more than poor essays or formulaic articles. We risk raising a generation that cannot think critically, write meaningfully, or distinguish truth from fiction. And that, in any age, is a far greater danger than any machine.


Anita Mathur is a Special Contributor to Blitz.


