
From bench to bot: Why AI-powered writing may not deliver on its promise


This is my final “bench to bot” column, and after more than two years of exploring the role of artificial intelligence in scientific writing, I find myself in an unexpected place. When I started this series in 2023, I wasn’t among the breathless AI optimists promising revolutionary transformation, nor was I reflexively dismissive of its potential. I approached these tools with significant reservations about their broader societal impacts, but I was curious whether they might offer genuine value for scientific communication specifically.

What strikes me now, looking back, is how my measured optimism for science may have caused me to underestimate the deeper complications at play. The problem is not that the tools don’t work—it’s that they work too well, at least at producing competent prose. But competent prose generated by a machine, I’ve come to realize, might not be what science actually needs.

My starting assumptions seemed reasonable. The purpose of neuroscience isn’t getting award-winning grants and publishing high-profile papers. It’s the production of knowledge, technology and treatments. But scientists spend enormous amounts of time wrestling with grants and manuscripts. If AI could serve as a strategic aid for specific writing tasks, helping scientists overcome time-consuming communication bottlenecks, I was all for it. What’s more, writing abilities aren’t equally distributed, which potentially disadvantages brilliant researchers who struggle with prose. AI could help here, too. So long as I remained explicit that AI would not solve all writing troubles, and that my goal was thoughtful incorporation for targeted use cases rather than mindless adoption, I felt this column would be a worthwhile service for a community struggling with how to handle this seismic technological shift.

These assumptions felt solid when I started this column. But if I’m being honest, I’ve always harbored some nagging reservations that even thoughtful incorporation of AI tools in scientific writing tasks carries risks I wasn’t fully acknowledging—perhaps even to myself. Recently, I encountered a piece by computer scientists Sayash Kapoor and Arvind Narayanan that articulated those inchoate doubts better than I ever could. They argue that AI might actually slow scientific progress—not despite its efficiency gains but because of them:

Any serious attempt to forecast the impact of AI on science must confront the production-progress paradox. The rate of publication of scientific papers has been growing exponentially, increasing 500 fold between 1900 and 2015. But actual progress, by any available measure, has been constant or even slowing. So we must ask how AI is impacting, and will impact, the factors that have led to this disconnect.

Our analysis in this essay suggests that AI is likely to worsen the gap. This may not be true in all scientific fields, and it is certainly not a foregone conclusion. By carefully and urgently taking actions such as those we suggest below, it may be possible to reverse course. Unfortunately, AI companies, science funders, and policy makers all seem oblivious to what the actual bottlenecks to scientific progress are. They are simply trying to accelerate production, which is like adding lanes to a highway when the slowdown is actually caused by a toll booth. It’s sure to make things worse.

Though Kapoor and Narayanan focus on AI’s broader impact on science, their concerns about turbo-charging production without improving the underlying process echo what economist Robert Solow observed decades ago about computers—we see them everywhere except in the productivity statistics. This dynamic maps directly onto scientific writing in troubling ways.

The truth is that the process of writing often matters just as much as, or more than, the final product. I explored this issue in my column on teaching and AI, but the idea applies to anyone who writes, because we often write to learn, or, at least, we learn while we write. When scientists struggle to explain their methodology clearly, they might discover gaps in their own understanding. When they wrestle with articulating why their particular approach matters, they might uncover new connections or refine their hypotheses. Stress-testing ideas with the pressure of the page is a time-honored way to deepen thinking. I suspect countless private struggles with writing have served as quiet engines of scientific discovery. Neuroscientist Eve Marder seems to recognize this cognitive value, putting it beautifully:

But most importantly, writing is the medium that allows you to explain, for all time, your new discoveries. It should not be a chore, but an opportunity to share your excitement, and maybe your befuddlement. It allows each of us to add to and modify the conceptual frameworks that guide the way we understand our science and the world…It is not an accident that some of our best and most influential scientists write elegant and well-crafted papers. So, work to make writing one of the great pleasures of your life as a scientist, and your science will benefit.

Previously, my hope was that the newfound technological ability to decouple sophisticated text production from human struggle would start to make clear which parts of the writing struggle are valuable and which are just cognitive drag. Two years in, however, I don’t think anyone is any closer to an answer. And I’m realizing, through observations of students, colleagues—and myself—that none of us is going to be capable of making that distinction individually, in real time, amid the heat of composition, the pressure of deadlines and the seductiveness of slick technology.

Rather than offering a set of rules about when to use these tools, perhaps the most honest guidance I can provide is this: Before reaching for AI assistance, pause and ask yourself whether you’re trying to clarify your thinking or simply produce text; whether the process matters, or just the product. If it’s the former—if you’re genuinely wrestling with how to explain a concept or articulate why your approach matters—that struggle might be worth preserving. The discomfort of not knowing quite how to say something is often an important signal that you’re at the edge of your understanding, perhaps about to break into new territory. The scientists who do the most exciting and meaningful work in an AI-saturated future won’t be those who can efficiently generate passable grants and manuscripts but those who respect this signal and recognize when the struggle of writing is actually the struggle of discovery in disguise.

The stakes are actually quite high for science, because writing, for all its flaws, is one of the most potent thinking tools humans have developed. When I think of the role of writing in the production-progress paradox, I keep returning to something neuroscientist Henry Markram told me years ago: “I realized that I could write a high-profile research paper every year, but then what? I die, and there’s going to be a column on my grave with a list of beautiful papers.” With AI, we scientists risk optimizing our way to beautiful papers while fundamental progress in neuroscience remains stalled. We might end up with impressive publication lists as we die from the diseases we failed to cure.

The path forward means acknowledging that efficiency isn’t always progress, that removing friction isn’t always improvement, and that tools designed to make us more productive might sometimes make us less capable. These tensions won’t resolve themselves, and perhaps that’s the point. The act of recognizing such tensions, of constantly questioning whether science’s technological shortcuts are serving its deeper intellectual goals, may itself be a form of progress. It’s a more complex message than the one I started with, but complexity is often where the truth lives.

AI-use statement: Anthropic’s Claude Sonnet 4 was used for editorial feedback after the drafting process.




Metaplanet Holders Approve Fresh Funding Tools to Buy Bitcoin

Japanese Bitcoin treasury company Metaplanet Inc. secured shareholder approval for a proposal enabling it to raise as much as ¥555 billion ($3.8 billion) via preferred shares, in a bid to expand its financing options after its stock slumped.




Artificial intelligence offers individualized anticoagulation decisions for atrial fibrillation

Bottom Line: Mount Sinai researchers developed an AI model that makes individualized treatment recommendations for atrial fibrillation (AF) patients, helping clinicians accurately decide whether or not to treat them with anticoagulants (blood thinner medications) to prevent stroke, currently the standard treatment course in this patient population. This model presents a completely new approach to how clinical decisions are made for AF patients and could represent a paradigm shift in this area.

In this study, the AI model recommended against anticoagulant treatment for up to half of the AF patients who otherwise would have received it based on standard-of-care tools. This could have profound ramifications for global health.

Why the study is important: AF is the most common abnormal heart rhythm, impacting roughly 59 million people globally. During AF, the top chambers of the heart quiver, which allows blood to become stagnant and form clots. These clots can then dislodge and go to the brain, causing a stroke. Blood thinners are the standard treatment for this patient population to prevent clotting and stroke; however, in some cases this medication can lead to major bleeding events.

This AI model uses the patient’s whole electronic health record to generate an individualized treatment recommendation. It weighs the risk of having a stroke against the risk of major bleeding (whether this would occur organically or as a result of treatment with the blood thinner). This approach to clinical decision-making is truly individualized compared with current practice, where clinicians use risk scores and tools that estimate risk on average over the studied patient population, not for individual patients. Thus, this model provides a patient-level estimate of risk, which it then uses to make an individualized recommendation that takes into account the benefits and risks of treatment for that person.
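To make the net-benefit logic concrete, here is a minimal sketch of how such a weighing of patient-level risks could work in code. The press release does not describe the model’s actual decision rule, so the function, field names, bleeding weight, and threshold below are illustrative assumptions, not the Mount Sinai implementation.

```python
# Illustrative sketch only: the probabilities, bleeding weight, and threshold
# are hypothetical placeholders, not the published model's decision rule.
from dataclasses import dataclass


@dataclass
class PatientRiskEstimate:
    """Patient-level probabilities assumed to come from an upstream risk model."""
    p_stroke_untreated: float  # estimated stroke probability without anticoagulation
    p_stroke_treated: float    # estimated stroke probability with anticoagulation
    p_bleed_untreated: float   # estimated major-bleeding probability without treatment
    p_bleed_treated: float     # estimated major-bleeding probability with treatment


def net_benefit_recommendation(risk: PatientRiskEstimate,
                               bleed_weight: float = 1.0,
                               threshold: float = 0.0) -> str:
    """Weigh expected stroke reduction against expected bleeding increase.

    bleed_weight expresses how many major bleeds are treated as being as harmful
    as one stroke; threshold is the minimum net benefit required to recommend
    treatment. Both are assumptions made for illustration.
    """
    stroke_reduction = risk.p_stroke_untreated - risk.p_stroke_treated
    bleed_increase = risk.p_bleed_treated - risk.p_bleed_untreated
    net_benefit = stroke_reduction - bleed_weight * bleed_increase
    return "anticoagulate" if net_benefit > threshold else "do not anticoagulate"


# Example: a patient whose predicted bleeding increase outweighs the predicted
# stroke reduction would be steered away from anticoagulation.
example = PatientRiskEstimate(0.04, 0.015, 0.01, 0.05)
print(net_benefit_recommendation(example))  # -> "do not anticoagulate"
```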

The study could revolutionize how clinicians treat a very common disease in order to minimize stroke and bleeding events. It also reflects a potential paradigm change in how clinical decisions are made.

Why this study is unique: This is the first known individualized AI model designed to make clinical decisions for AF patients using underlying risk estimates for the specific patient based on all of their actual clinical features. It computes an inclusive net-benefit recommendation to mitigate stroke and bleeding.

How the research was conducted: Researchers trained the AI model on the electronic health records of 1.8 million patients, spanning 21 million doctor visits, 82 million notes, and 1.2 billion data points. The model then generated a net-benefit recommendation on whether or not to treat each patient with blood thinners.

To validate the model, researchers tested the model’s performance among 38,642 patients with atrial fibrillation within the Mount Sinai Health System. They also externally validated the model on 12,817 patients from publicly available datasets from Stanford.

Results: The model generated treatment recommendations that aligned with mitigating stroke and bleeding. It reclassified around half of the AF patients to not receive anticoagulation. These patients would have received anticoagulants under current treatment guidelines.
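As a rough sketch of how a reclassification figure like this could be tallied on a validation cohort, the snippet below compares hypothetical model recommendations against standard-of-care recommendations. The helper function and recommendation strings are illustrative assumptions, not the study’s actual evaluation code.

```python
# Illustrative only: a hypothetical way to measure how many patients who would
# receive anticoagulants under standard-of-care tools are reclassified by the
# model to no anticoagulation. Not the study's analysis code.
from typing import List


def reclassification_rate(model_recs: List[str], standard_recs: List[str]) -> float:
    """Fraction of standard-of-care anticoagulation candidates whom the model
    instead recommends not to anticoagulate."""
    candidates = [
        (model, standard)
        for model, standard in zip(model_recs, standard_recs)
        if standard == "anticoagulate"
    ]
    if not candidates:
        return 0.0
    reclassified = sum(1 for model, _ in candidates if model == "do not anticoagulate")
    return reclassified / len(candidates)


# Toy example: two of the four standard-of-care candidates are reclassified,
# giving a rate of 0.5.
model = ["do not anticoagulate", "anticoagulate", "do not anticoagulate", "anticoagulate"]
standard = ["anticoagulate", "anticoagulate", "anticoagulate", "anticoagulate"]
print(reclassification_rate(model, standard))  # -> 0.5
```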

What this study means for patients and clinicians: This study represents a new era in caring for patients. When it comes to treating AF patients, this study will allow for more personalized, tailored treatment plans.

Quotes:  

“This study represents a profound modernization of how we manage anticoagulation for patients with atrial fibrillation and may change the paradigm of how clinical decisions are made,” says corresponding author Joshua Lampert, MD, Director of Machine Learning at Mount Sinai Fuster Heart Hospital. “This approach overcomes the need for clinicians to extrapolate population-level statistics to individuals while assessing the net benefit to the individual patient, which is at the core of what we hope to accomplish as clinicians. The model can not only compute initial recommendations, but also dynamically update recommendations based on the patient’s entire electronic health record prior to an appointment. Notably, these recommendations can be decomposed into probabilities for stroke and major bleeding, which relieves the clinician of the cognitive burden of weighing between stroke and bleeding risks not tailored to an individual patient, avoids human labor needed for additional data gathering, and provides discrete relatable risk profiles to help counsel patients.”

“This work illustrates how advanced AI models can synthesize billions of data points across the electronic health record to generate personalized treatment recommendations. By moving beyond the ‘one size fits none’ population-based risk scores, we can now provide clinicians with individual patient-specific probabilities of stroke and bleeding, enabling shared decision making and precision anticoagulation strategies that represent a true paradigm shift,” adds co-corresponding author Girish Nadkarni, MD, MPH, Chair of the Windreich Department of Artificial Intelligence and Human Health at the Icahn School of Medicine at Mount Sinai.

“Avoiding stroke is the single most important goal in the management of patients with atrial fibrillation, a heart rhythm disorder that is estimated to affect 1 in 3 adults sometime in their life,” says co-senior author Vivek Reddy, MD, Director of Cardiac Electrophysiology at the Mount Sinai Fuster Heart Hospital. “If future randomized clinical trials demonstrate that this AI model is even only a fraction as effective in discriminating the high- vs. low-risk patients as observed in our study, the model would have a profound effect on patient care and outcomes.”

“When patients get test results or a treatment recommendation, they might ask, ‘What does this mean for me specifically?’ We created a new way to answer that question. Our system looks at your complete medical history and calculates your risk for serious problems like stroke and major bleeding prior to your medical appointment. Instead of just telling you what might happen, we show you both what and how likely it is to happen to you personally. This gives both you and your doctor a clearer picture of your individual situation, not just general statistics that may miss important individual factors,” says co-first author Justin Kauffman, Data Scientist with the Windreich Department of Artificial Intelligence and Human Health.


