AI Insights

Contributor: The human brain doesn’t learn, think or recall like an AI. Embrace the difference

Recently, Nvidia founder Jensen Huang, whose company builds the chips powering today’s most advanced artificial intelligence systems, remarked: “The thing that’s really, really quite amazing is the way you program an AI is like the way you program a person.” Ilya Sutskever, co-founder of OpenAI and one of the leading figures of the AI revolution, also stated that it is only a matter of time before AI can do everything humans can do, because “the brain is a biological computer.”

I am a cognitive neuroscience researcher, and I think that they are dangerously wrong.

The biggest threat isn’t that these metaphors confuse us about how AI works, but that they mislead us about our own brains. During past technological revolutions, scientists, as well as popular culture, tended to explore the idea that the human brain could be understood as analogous to one new machine after another: a clock, a switchboard, a computer. The latest erroneous metaphor is that our brains are like AI systems.

I’ve seen this shift over the past two years in conferences, courses and conversations in the field of neuroscience and beyond. Words like “training,” “fine-tuning” and “optimization” are frequently used to describe human behavior. But we don’t train, fine-tune or optimize in the way that AI does. And such inaccurate metaphors can cause real harm.

The 17th century idea of the mind as a “blank slate” imagined children as empty surfaces shaped entirely by outside influences. This led to rigid education systems that tried to eliminate differences in neurodivergent children, such as those with autism, ADHD or dyslexia, rather than offering personalized support. Similarly, the early 20th century “black box” model from behaviorist psychology claimed only visible behavior mattered. As a result, mental healthcare often focused on managing symptoms rather than understanding their emotional or biological causes.

And now there are new misbegotten approaches emerging as we start to see ourselves in the image of AI. Digital educational tools developed in recent years, for example, adjust lessons and questions based on a child’s answers, theoretically keeping the student at an optimal learning level. This is heavily inspired by how an AI model is trained.

This adaptive approach can produce impressive results, but it overlooks less measurable factors such as motivation or passion. Imagine two children learning piano with the help of a smart app that adjusts for their changing proficiency. One quickly learns to play flawlessly but hates every practice session. The other makes constant mistakes but enjoys every minute. Judging only on the terms we apply to AI models, we would say the child playing flawlessly has outperformed the other student.

But educating children is different from training an AI algorithm. That simplistic assessment would not account for the first student’s misery or the second child’s enjoyment. Those factors matter; there is a good chance the child having fun will be the one still playing a decade from now — and they might even end up a better and more original musician because they enjoy the activity, mistakes and all. I definitely think that AI in learning is both inevitable and potentially transformative for the better, but if we assess children only in terms of what can be “trained” and “fine-tuned,” we will repeat the old mistake of emphasizing output over experience.

I see this playing out with undergraduate students, who, for the first time, believe they can achieve the best measured outcomes by fully outsourcing the learning process. Many have been using AI tools over the past two years (some courses allow it and some do not) and now rely on them to maximize efficiency, often at the expense of reflection and genuine understanding. They use AI as a tool that helps them produce good essays, yet the process in many cases no longer has much connection to original thinking or to discovering what sparks the students’ curiosity.

If we continue thinking within this brain-as-AI framework, we also risk losing the vital thought processes that have led to major breakthroughs in science and art. These achievements did not come from identifying familiar patterns, but from breaking them through messiness and unexpected mistakes. Alexander Fleming discovered penicillin by noticing that mold growing in a petri dish he had accidentally left out was killing the surrounding bacteria. A fortunate mistake made by a messy researcher that went on to save the lives of hundreds of millions of people.

This messiness isn’t just important for eccentric scientists. It is important to every human brain. One of the most interesting discoveries in neuroscience in the past two decades is the “default mode network,” a group of brain regions that becomes active when we are daydreaming and not focused on a specific task. This network has also been found to play a role in reflecting on the past, imagining and thinking about ourselves and others. Disregarding this mind-wandering behavior as a glitch rather than embracing it as a core human feature will inevitably lead us to build flawed systems in education, mental health and law.

Unfortunately, it is particularly easy to confuse AI with human thinking. Microsoft describes generative AI models like ChatGPT on its official website as tools that “mirror human expression, redefining our relationship to technology.” And OpenAI CEO Sam Altman recently highlighted his favorite new feature in ChatGPT called “memory.” This function allows the system to retain and recall personal details across conversations. For example, if you ask ChatGPT where to eat, it might remind you of a Thai restaurant you mentioned wanting to try months earlier. “It’s not that you plug your brain in one day,” Altman explained, “but … it’ll get to know you, and it’ll become this extension of yourself.”

The suggestion that AI’s “memory” will be an extension of our own is again a flawed metaphor — leading us to misunderstand the new technology and our own minds. Unlike human memory, which evolved to forget, update and reshape memories based on myriad factors, AI memory can be designed to store information with much less distortion or forgetting. A life in which people outsource memory to a system that remembers almost everything isn’t an extension of the self; it breaks from the very mechanisms that make us human. It would mark a shift in how we behave, understand the world and make decisions. This might begin with small things, like choosing a restaurant, but it can quickly move to much bigger decisions, such as taking a different career path or choosing a different partner than we would have, because AI models can surface connections and context that our brains may have cleared away for one reason or another.

This outsourcing may be tempting because this technology seems human to us, but AI learns, understands and sees the world in fundamentally different ways, and doesn’t truly experience pain, love or curiosity like we do. The consequences of this ongoing confusion could be disastrous — not because AI is inherently harmful, but because instead of shaping it into a tool that complements our human minds, we will allow it to reshape us in its own image.

Iddo Gefen is a PhD candidate in cognitive neuroscience at Columbia University and author of the novel “Mrs. Lilienblum’s Cloud Factory.” His Substack newsletter, Neuron Stories, connects neuroscience insights to human behavior.




AI Insights

The ‘productivity paradox’ of AI adoption in manufacturing firms

Organizations have long viewed artificial intelligence as a way to achieve productivity gains. But recent research on AI adoption at U.S. manufacturing firms reveals a more nuanced reality: AI introduction frequently leads to a measurable but temporary decline in performance, followed by stronger growth in output, revenue, and employment.

This phenomenon, which follows a “J-curve” trajectory, helps explain why the economic impact of AI has been underwhelming at times despite its transformative potential.
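To picture what that trajectory looks like, here is a minimal, purely illustrative sketch in Python. The productivity index, the two-year adjustment drag, and the 3% long-run growth premium are invented numbers chosen only to show the shape of a J-curve; none of them come from the study.

```python
# Purely illustrative: a toy "J-curve" for a productivity index after AI adoption.
# All numbers below are invented for illustration; none come from the study.
years = range(7)                           # years since AI adoption
baseline = 100.0                           # pre-adoption productivity index
adjustment_drag = [0, -4, -2, 0, 0, 0, 0]  # short-run adjustment costs that fade out
growth_premium = 0.03                      # assumed long-run growth once AI is integrated

level = baseline
trajectory = []
for t in years:
    level = level * (1 + (growth_premium if t > 2 else 0)) + adjustment_drag[t]
    trajectory.append(round(level, 1))

print(trajectory)
# [100.0, 96.0, 94.0, 96.8, 99.7, 102.7, 105.8] -- dips first, then climbs past baseline
```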

“AI isn’t plug-and-play,” said University of Toronto professor Kristina McElheran, a digital fellow at the MIT Initiative on the Digital Economy and one of the lead authors of the new paper “The Rise of Industrial AI in America: Microfoundations of the Productivity J-Curve(s).” “It requires systemic change, and that process introduces friction, particularly for established firms.” 

University of Colorado Boulder professor Mu-Jeung Yang; Zachary Kroff, formerly with the U.S. Census Bureau and currently an analytics specialist at Analysis Group; and Stanford University professor Erik Brynjolfsson, PhD ’91, co-authored the report.

Working with data from two U.S. Census Bureau surveys covering tens of thousands of manufacturing companies in 2017 and 2021, the researchers found that the AI adoption J-curve varied among businesses that had adopted AI technologies with industrial applications. Short-term losses were greater in older, more established companies. Evidence on young firms showed that losses can be mitigated by certain business strategies. And despite early losses, early AI adopters showed stronger growth over time. 

Here’s a look at what the study indicates about the adoption and application of AI, and the types of firms that outperform others in using new technology. 

1. AI adoption initially reduces productivity.

The study shows that AI adoption tends to hinder productivity in the short term, with firms experiencing a measurable decline in productivity after they begin using AI technologies.  

Even after controlling for size, age, capital stock, IT infrastructure, and other factors, the researchers found that organizations that adopted AI for business functions saw a drop in productivity of 1.33 percentage points. When correcting for selection bias — organizations that expect higher returns are more likely to be early AI adopters — the short-run negative impact was significantly larger, at around 60 percentage points, the researchers write.

This decline isn’t only a matter of growing pains; it points to a deeper misalignment between new digital tools and legacy operational processes, the researchers found. AI systems used for predictive maintenance, quality control, or demand forecasting often also require investments in data infrastructure, staff training, and workflow redesign. Without those complementary pieces in place, even the most advanced technologies can underdeliver or create new bottlenecks. 

“Once firms work through the adjustment costs, they tend to experience stronger growth,” McElheran said. “But that initial dip — the downward slope of the J-curve — is very real.”



2. Short-term losses precede long-term gains.

Despite companies’ early losses, the study found a clear pattern of recovery and eventual improvement. Over a longer period of time — there was a four-year gap in the study data — manufacturing firms that adopted AI tended to outperform their non-adopting peers in both productivity and market share. This recovery followed an initial period of adjustment during which companies fine-tuned processes, scaled digital tools, and capitalized on the data generated by AI systems. 

That upswing wasn’t distributed evenly, though. The firms seeing the strongest gains tended to be those that were already digitally mature before adopting AI. 

“Firms that have already done the digital transformation or were digital from the get-go have a much easier ride because past data can be a good predictor of future outcomes,” McElheran said. Size helps too. “Once you solve those adjustment costs, if you can scale the benefits across more output, more markets, and more customers, you’re going to get on the upswing of the J-curve a lot faster,” she said.

Better integration of the technology and strategic reallocation of resources are important to this recovery as firms gradually shift toward more AI-compatible operations, often investing in automation technologies like industrial robots, the researchers found.

3. Older firms see greater short-term losses.

Short-term losses aren’t felt equally across all firms, the study found. The negative impact of AI adoption was most pronounced among established firms. Such organizations typically have long-standing routines, layered hierarchies, and legacy systems that can be difficult to unwind. 

These firms often have trouble adapting, partly due to institutional inertia and the complexity of their operations. “We find that older firms, in particular, struggle to maintain vital production management practices such as monitoring key performance indicators and production targets,” the researchers write. 

“Old firms actually saw declines in the use of structured management practices after adopting AI,” McElheran said. “And that alone accounted for nearly one-third of their productivity losses.” 

In contrast, younger, more flexible companies appear better equipped to integrate AI technologies quickly and with less disruption. They may also have less to unlearn, making the transition to AI-enabled workflows more seamless. 

“Taken together, our findings highlight AI’s dual role as a transformative technology and catalyst for short-run organizational disruption, echoing patterns familiar to scholars of technological change,” the researchers write. They note that the results also show the importance of complementary practices and strategies that mitigate adjustment costs and boost long-term returns to “flatten the J-curve dip and realize AI’s longer-term productivity at scale.”




AI Insights

Google just announced 5 new Gemini features coming to Android, and it’s good news for fans of foldable smartphones

Samsung’s Galaxy Unpacked event didn’t leave AI out of its many new products and features. Plenty of them involve Google and its Gemini family of AI models, with a host of new features coming to Android devices running the new Android 16 and Wear OS 6. Here are some of the ones to be most excited about.

Gemini Live gets way more useful on foldables


Gemini Live is a way for Google’s AI companion to be present on a continuous basis. Rather than just asking a question and moving on, you can have it on hand to help as you follow a cooking tutorial, fix your bike, or do yoga. Starting with the Galaxy Z Flip7, Gemini Live will be accessible right from the external screen, meaning you won’t even have to unfold the device to interact with the AI.




AI Insights

Diagnosing facial synkinesis using artificial intelligence to advance facial palsy care

Over the past decade, a plethora of software applications have emerged in the field of patient medical care, supporting the diagnosis and management of various clinical conditions14,15,20,21,22. Our study contributes to this evolving field by introducing a novel application for holistic synkinesis diagnosis and leveraging the power of convolutional neural networks (CNN) to analyze images of periocular regions.

The development and validation of our CNN-based model for diagnosing facial synkinesis in FP patients mark a significant advancement in the realm of automated medical diagnostics. Our model demonstrated a high degree of accuracy (98.6%) in distinguishing between healthy individuals and those with synkinesis, with an F1-score of 98.4%, precision of 100%, and recall of 96.9%. These metrics highlight the model’s robustness and reliability, rendering it a valuable tool for clinicians. The confusion matrix analysis provided further insights into the model’s performance, revealing only one misclassification among the 71 test images. These metrics echo findings from previous work in diagnosing sequelae of FP. For example, our group reported comparable metrics for CNN-based assessment of lagophthalmos; trained on a set of 826 images, that model reached a validation accuracy of 97.8% over 64 epochs17. Another study leveraged a CNN to automatically identify (peri-)ocular pathologies such as enophthalmos with an accuracy of 98.2%, underscoring the potential of neural networks when diagnosing facial conditions23. Such tools can broaden access to FP diagnostics, thus reducing time-to-diagnosis and effectively triaging patients to the appropriate treatment pathway (e.g., conservative therapy, cross-face-nerve-grafts)2,3,14,24. Overall, our CNN adds another highly accurate diagnostic tool for reliably detecting facial pathologies, especially in FP patients.
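As a cross-check on how these figures hang together, the short Python sketch below recomputes them from a two-by-two confusion matrix. The class split it assumes (32 synkinesis and 39 healthy test images, with the single error being one missed synkinesis case) is inferred from the reported metrics rather than a breakdown stated in the paper.

```python
# Recomputing the reported metrics from a confusion matrix.
# The 32/39 class split and the single false negative are inferred from the
# reported figures (accuracy 98.6%, precision 100%, recall 96.9%, F1 98.4%),
# not stated explicitly in the paper.
tp, fn = 31, 1   # synkinesis images: 31 correctly detected, 1 missed
tn, fp = 39, 0   # healthy images: all 39 correctly rejected

accuracy  = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp)
recall    = tp / (tp + fn)
f1        = 2 * precision * recall / (precision + recall)

print(f"accuracy={accuracy:.1%}  precision={precision:.1%}  "
      f"recall={recall:.1%}  f1={f1:.1%}")
# accuracy=98.6%  precision=100.0%  recall=96.9%  f1=98.4%
```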

Another strength of our CNN lies in its user-friendliness and rapid processing and training times. The mean image processing time was 24 ± 11 ms, and the overall training time was 14.4 min. The development of a lightweight, dockerized web application enhanced the model’s practicality and accessibility. In addition, the total development costs of the CNN were only US$311. Such parameters have been identified as key factors for impactful AI research and effective integration into clinical workflows25,26,27. More precisely, the short training times may pave the way toward additional AI-supported diagnostic tools in FP care to detect common short- and long-term complications of FP (e.g., ectropion, hemifacial tissue atrophy). The easy-to-use and cost-effective web application may facilitate clinical use for healthcare providers in low- and middle-income countries, where the incidence and prevalence of FP are higher than in high-income countries28. To facilitate the download and use of our algorithm, we (i) uploaded the code to GitHub (San Francisco, USA), (ii) integrated the code into an application, and (iii) recorded an instructional video that details the different steps. Healthcare providers from low- and middle-income countries only require an internet connection to install the application. The instructional video will then guide them through the next steps to set up the application and start screening patients. Our application is free to use, and the number of daily screens is not limited. The rapid processing times also carry the potential to increase the screening throughput, further broadening access to FP care and reducing waiting times for FP patients3. Collectively, the CNN represents a rapid, user-friendly, and cost-effective tool.

While our study presents promising results, it is not without limitations. The relatively small sample size, especially for the validation and test sets, suggests the need for further validation with larger and more diverse (i.e., multi-center, -racial, -surgeon) datasets to ensure the model’s robustness and generalizability. Additionally, the model’s ability to distinguish synkinesis from other facial conditions was not evaluated in this study, representing an area for future research. Moreover, integrating our model into clinical practice will require careful consideration of various factors, including user training, data privacy, and the ethical implications of automated diagnostics. Ensuring that clinicians are adequately trained to use the model and interpret its results is essential for maximizing its benefits. Additionally, robust data privacy measures must be implemented to protect sensitive patient information, particularly when using web-based applications. Thus, further validation is essential before clinical implementation. In a broader context, there are different AI/machine-learning-powered tools that have shown promising outcomes in pre-clinical studies and small patient samples (face transplantation, facial reanimation, etc.)29,30,31,32. However, these tools remain to be investigated in larger-scale trials and integrated into standard clinical workup. Thus, cross-disciplinary efforts are needed to bridge the gap from bench to bedside and to fuel translational efforts.


