AI Insights
Artificial intelligence for healthcare: restrained development despite impressive applications | Infectious Diseases of Poverty

Artificial intelligence (AI) has avoided the headlines until now, yet it has been with us for 75 years [1, 2]. Still, few understand what it really is and many feel uncomfortable about its rapid growth, with thoughts going back to the computer rebelling against the human crew onboard the spaceship heading out into the infinity of space in Arthur C. Clarke’s visionary novel “2001: A Space Odyssey” [3]. Just as in the novel, there is no way back, since the human mind cannot continuously operate at an unwavering level of accuracy or simultaneously interact with different sections of large-scale information (Big Data), areas where AI excels. The World Economic Forum has called for faster adoption of AI in healthcare, a point discussed at length in a recent white-paper report [4] arguing that progress is not coming as fast as expected, even though the potential for growth and innovation is at an all-time high and demand for new types of computer processors is strong. Among the reasons given for the slow uptake in healthcare are barriers such as complexity deterring policymakers and the risk of misaligned technical and strategic decisions due to fragmented regulations [4].
The growing importance of AI in the medical and veterinary fields is underscored by recent articles and editorials published in The Lancet Digital Health and The Lancet [5, 6] that describe actual and potential roles of AI in healthcare. We survey this wide spectrum, highlighting current gaps in the understanding of AI and how its application can assist clinical work as well as support and accelerate basic research.
AI technology development
From rules to autonomy
Before elaborating on these issues, some basic informatics about the technology that has moved AI to the fore is in order. In 1968, when both the film and the novel were released, only stationary, primitive computers existed. No longer the preserve of large companies and academic institutions, computers have since morphed into today’s laptops, smartphones and wearable sensor networks. The next turn came with the gaming industry’s insatiable need for ultra-rapid action and life-like characters, which necessitated massively parallel computing and led to a switch from general-purpose central processing units (CPUs) to specialized graphics processing units (GPUs) and tensor processing units (TPUs). Fuelled by this expansion of processor architecture, neural networks, machine learning and elaborate algorithms capable of changing in response to new data (meta-learning) were ushered in, with the power to understand and respond to human language through generative pre-trained transformers (GPT) [7] showing the way forward. Breaking out of rule-based computing through the emergent capability of modifying internal settings, adapting to new information and understanding changing environments put these flexible systems, now referred to as AI, in the fast lane towards domains requiring high-level functionality. Computer systems adapted to a wide range of tasks for which they were not explicitly programmed could then be developed and launched into the public arena, as exemplified by automated industrial production, self-driving vehicles, virtual assistants and chatbots. Although lacking the imagination and versatility that characterize the human mind, AI can indeed perform tasks partly based on reasoning and planning that typically require human cognitive functions, and with enhanced efficiency and productivity.
Agent-based AI
Here, the agent is any entity that can perceive its environment, make decisions and act toward some goal; rule-based AI has been replaced with proactive interaction. Agent-based AI generally uses many agents working separately to solve joint problems, or even collaborating like a team. This approach was popularized by Wooldridge and Jennings in the 1990s, who described decentralized, autonomous AI systems capable of ‘meta-learning’ [8]. They argued that outside targets can be instantiated and dealt with as computational objects, a methodology that has advanced the study of polarization, traffic flow, the spread of disease, and similar phenomena. Although technology did not catch up with this vision until much later, AI today encompasses a vital area of active research producing powerful tools for simulating complex, distributed and adaptive systems. The great potential of this approach for modelling disease distributions and transmission dynamics may provide the insights needed to successfully control the neglected tropical diseases (NTDs) as well as to deal with other challenges in the geospatial health sphere [9]. The Internet of Things (IoT) [10], another example of agent-based AI, represents the convergence of embedded sensors and software enabling the collection and exchange of data with other devices and systems; however, operations are often local and do not necessarily involve the Internet.
While the rule-based method follows a set of rules and therefore produces an outcome that is to some degree predictable, the agent-based approach adds two new components: the capability to learn from experience and to test various outcomes with one or several models. This introduces a level of reasoning that allows for non-human choice, as schematically shown in Fig. 1.
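As a concrete, if greatly simplified, illustration of the agent-based idea, the following Python sketch simulates disease spread among autonomous agents that each perceive their contacts and update their own state. All agents, contact structure and probabilities are invented for the example; they do not correspond to Fig. 1 or to any system cited above.

```python
# Minimal agent-based sketch of disease spread (hypothetical illustration only).
import random

class Agent:
    """An agent that perceives its neighbours, decides and acts on its own state."""
    def __init__(self, agent_id, infected=False):
        self.agent_id = agent_id
        self.infected = infected

    def step(self, neighbours, transmission_prob=0.1, recovery_prob=0.05):
        if self.infected:
            # Recover with some probability.
            if random.random() < recovery_prob:
                self.infected = False
        else:
            # Treat each infected contact as an independent exposure.
            for other in neighbours:
                if other.infected and random.random() < transmission_prob:
                    self.infected = True
                    break

def simulate(n_agents=200, n_contacts=5, steps=50, seed=1):
    random.seed(seed)
    agents = [Agent(i, infected=(i == 0)) for i in range(n_agents)]
    history = []
    for _ in range(steps):
        for agent in agents:
            neighbours = random.sample(agents, n_contacts)
            agent.step(neighbours)
        history.append(sum(a.infected for a in agents))
    return history

if __name__ == "__main__":
    print(simulate())  # number of infected agents at each time step
```

Even in this toy form, the outcome emerges from many local decisions rather than from a single global rule set, which is what makes the approach attractive for modelling transmission dynamics.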
AI applications
Clinical applications
Contrary to common belief, a diagnostic program that today would be sorted under the heading of AI was designed as early as 50 years ago at Stanford University, California, United States of America. The system, called MYCIN [11], was intended to assist physicians with bacterial blood infections. It was originally produced in book format, utilized a knowledge base of approximately 600 rules and operated through a series of questions to the user, ultimately providing a diagnosis and treatment recommendation. In the United States, similar approaches aimed at the diagnosis of bacterial infections appeared in the following decades but were not often used due to the lack of computational power at the time. Today, on the other hand, this is no longer the limiting factor and AI is revolutionizing image-based diagnostics. In addition to the extensive use of AI-powered microscopy in parasitology, the spectrum includes microscopic differentiation between healthy and cancerous tissue in microscope sections [12], as well as interpretation of graphs and videos from electrocardiography (EKG) [13], computed tomography (CT) [14, 15], magnetic resonance imaging (MRI) [15] and ultrasonography [16].
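To illustrate how a rule-based consultation of this kind works in principle, the toy Python sketch below asks the user a series of questions and fires the first rule whose conditions are all satisfied. The rules and questions are invented for the example and are not taken from MYCIN’s actual knowledge base, which was far larger and weighted its conclusions with certainty factors.

```python
# Toy rule-based consultation in the spirit of MYCIN (illustrative only;
# the rules below are invented and not part of the real MYCIN system).

RULES = [
    # (required findings, conclusion)
    ({"gram_negative", "rod_shaped", "anaerobic"}, "likely Bacteroides"),
    ({"gram_positive", "chains"}, "likely Streptococcus"),
    ({"gram_positive", "clusters"}, "likely Staphylococcus"),
]

QUESTIONS = {
    "gram_negative": "Is the organism gram-negative? (y/n) ",
    "gram_positive": "Is the organism gram-positive? (y/n) ",
    "rod_shaped": "Is the organism rod-shaped? (y/n) ",
    "anaerobic": "Did it grow anaerobically? (y/n) ",
    "chains": "Does it grow in chains? (y/n) ",
    "clusters": "Does it grow in clusters? (y/n) ",
}

def consult():
    findings = set()
    for finding, question in QUESTIONS.items():
        if input(question).strip().lower().startswith("y"):
            findings.add(finding)
    for required, conclusion in RULES:
        if required <= findings:  # all required findings are present
            return conclusion
    return "no rule matched; refer to a specialist"

if __name__ == "__main__":
    print(consult())
```

The predictability of such a system is both its strength and its limitation: it can only ever return what its rule base already contains, which is precisely the constraint that learning-based AI removes.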
Some AI-based companies are doing well, e.g., ACL Digital (https://www.acldigital.com/), which analyzes data from wearable sensors to detect heart arrhythmias, hypertension and sleep disorders; AIdoc (https://www.aidoc.com/eu/), whose platform evaluates clinical examinations and coordinates workflows beyond diagnosis; and the da Vinci Surgical System (https://en.wikipedia.org/wiki/Da_Vinci_Surgical_System), which has been used for various interventions, including kidney surgery and hysterectomy [17, 18]. However, others have failed, e.g., ‘Watson for Oncology’, launched by IBM for cancer diagnosis and optimized chemotherapy (https://www.henricodolfing.com/2024/12/case-study-ibm-watson-for-oncology-failure.html), and Babylon Health (https://en.wikipedia.org/wiki/Babylon_Health), a tele-health service that connected people to doctors via video calls and offered wholesale health promotion as well as virtual health assistants (chatbots) that even reminded patients to take medication. These examples of AI-assisted medicine show that strong regulation is needed before this kind of assistance can be released for public use.
Basic research
The 2024 Nobel ceremony granted AI a central role: while the Physics Prize was awarded for the development of associative neural networks, the Chemistry Prize honored the breakthrough findings regarding how strings of amino acids fold into particular shapes [19]. This thorny problem was cracked by AlphaFold2, a deep-learning system developed at DeepMind, a company that now belongs to Google’s parent Alphabet Inc. The finding that all proteins share the same folding process widened the research scope, making it possible to design novel proteins with specific functions (synthetic biology), accelerate drug discovery and shed light on how diseases arise through mutations. The team that created this system now has its sights set on finding out how proteins interact with the rest of the cellular machinery. AlphaFold3, an updated version of the architecture, generates accurate, three-dimensional molecular structures from pair-wise interactions between molecular components, which can be used to model how specific proteins work in unison with other cell components, exposing the details of protein interaction. These new applications highlight the exponential rise of AI’s significance for research in general and for medicine in particular.
The solution to the protein-folding problem not only reflects the importance of the training component but also demonstrates that AI is not as restricted as the human mind when it comes to large realms of information (Big Data), which is needed for a large number of activities in modern society, such as autonomous driving and the large-scale financial transactions handled by banks on a daily basis. Big Data is also common in healthcare, not only in hospital management and patient records but also in large-scale diagnostic approaches. An academic paper, co-authored with clinicians and Google Research, investigated the reliability of a diagnostic AI system and found that machine learning reduced the number of false positives in a large mammography dataset by 25% (and also reached conclusions considerably faster) compared with the standard clinical workflow, without missing any true positives [20], a reassuring result.
Epidemiological surveillance
AI tools have been widely applied in epidemiological surveillance of vector-borne diseases. Due to their sensitivity to temperature and precipitation, arthropod vectors are bellwether indicators, not only for the diseases they often carry but also for climate change. By handling Big Data, and even using reasoning to deal with obscure variations and interactions of climate and biological variables, AI technologies help provide deeper insights into the complex interactions between climate, ecosystems and parasitic diseases with intricate life cycles. To keep abreast of this situation, the connections between human, animal and environmental health demand data-sharing not only at the local level but also nationally and globally. This move towards the One Health/Planetary Health approach is highly desirable, and AI will unquestionably be needed to sustain diligence with respect to the Big Data repositories required for accurate predictions of disease transmission, while AI-driven platforms can further facilitate real-time information exchange between stakeholders, optimize energy consumption and improve resource management for infections in animals and humans, in particular with regard to parasitic infections [21]. Proactive synergies between public health and other disciplines, such as ecology, genomics, proteomics, bioinformatics, sanitary engineering and socio-economics, make the future medical agenda not only exciting and challenging, but also highly relevant globally.
In epidemiology, there has been a strong advance across the fields of medical and veterinary sciences [22], while previously overlooked events and unusual patterns now stand a better chance of being picked up by AI analysis of indirect sources, e.g., phone tracing, social media posts, news articles and health records. Technically less complex, but no less innovative, operations are required to update the roadmap for elimination of the NTDs issued by the World Health Organization (WHO) [23]. The Expanded Special Project for Elimination of Neglected Tropical Diseases (ESPEN) is a collaborative effort between the WHO Regional Office for Africa, member states and NTD partners. Its portal [24] offers visualization and planning tools based on satellite-generated imagery, climate data and historical disease patterns that are likely to identify high-risk areas for targeted interventions and to allocate resources effectively. In this way, WHO’s roadmap for NTD elimination is becoming more data-driven, precise and scalable, thereby accelerating progress.
The publication records
Established as far back as 1993, Artificial Intelligence Research was the first journal specifically focused on AI, soon followed by an avalanche of similar ones (https://www.scimagojr.com/journalrank.php?category=1702). China, India and the United States are particularly active in AI-related research. According to the Artificial Intelligence Index Report 2024 [25], the total number of general AI publications rose from approximately 88,000 in 2010 to more than 240,000 in 2022, with publications on machine learning increasing nearly sevenfold since 2015. If conference papers and repository publications (such as arXiv) are also included, along with papers in both English and Chinese, the number rises to 900,000, with the great majority originating in China [26].
A literature search based solely on PubMed, carried out by us at the end of 2024 using “AI and infectious disease(s)” as the search term, resulted in close to 100,000 entries, while the term “Advanced AI and infectious disease(s)” resulted in only about 6600. The idea was to distinguish between simpler, more rule-based applications and proper AI. Naturally, results of this kind can be grossly misleading, as information on the exact type of computer processor used, be it CPU, GPU or TPU, is generally absent and can only be inferred. Nevertheless, the much lower figure for “Advanced AI and infectious disease(s)” is an indication of the preference for less complex AI applications so far, i.e., work including spatial statistics and comparisons between various sets of variables vis-à-vis diseases, aiming at estimating distributions, hotspots, vector breeding sites, etc.
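Readers wishing to repeat or update such counts can query PubMed programmatically. The Python sketch below uses the NCBI E-utilities esearch endpoint; the query strings are approximations of the search terms quoted above (our exact field tags and phrasing are not reproduced here), and the counts returned will reflect the date of the query rather than the end-of-2024 figures reported in the text.

```python
# Sketch: retrieving PubMed hit counts via the NCBI E-utilities esearch endpoint.
import requests

ESEARCH_URL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_count(term: str) -> int:
    """Return the number of PubMed records matching a search term."""
    params = {"db": "pubmed", "term": term, "retmode": "json", "retmax": 0}
    response = requests.get(ESEARCH_URL, params=params, timeout=30)
    response.raise_for_status()
    return int(response.json()["esearchresult"]["count"])

if __name__ == "__main__":
    # Approximations of the quoted search terms used in the text.
    for term in ("AI AND infectious disease", "advanced AI AND infectious disease"):
        print(term, "->", pubmed_count(term))
```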
With as many as 100,000 medical publications found in the PubMed search, they clearly dominate in relation to the total of more than 240,000 AI-assisted research papers found up to 2022 [25]. The growing importance of this field is further strengthened by recent articles and editorials [27, 6]. Part of this interest is probably due to the wide spectrum of the medical and veterinary fields and to AI’s potential in tracing and signalling disease outbreaks; its growing role in surveillance has led to a surge of publications on machine learning, offering innovative solutions to some of the most pressing challenges facing health research today [28].
AI Insights
AI firm Anthropic agrees to pay authors $1.5bn for pirating work

Artificial intelligence (AI) firm Anthropic has agreed to pay $1.5bn (£1.11bn) to settle a class action lawsuit filed by authors who said the company stole their work to train its AI models.
The deal, which requires the approval of US District Judge William Alsup, would be the largest publicly-reported copyright recovery in history, according to lawyers for the authors.
It comes two months after Judge Alsup found that using books to train AI did not violate US copyright law, but ordered Anthropic to stand trial over its use of pirated material.
Anthropic said on Friday that the settlement would “resolve the plaintiffs’ remaining legacy claims.”
The settlement comes as other big tech companies including ChatGPT-maker OpenAI, Microsoft, and Instagram-parent Meta face lawsuits over similar alleged copyright violations.
Anthropic, with its Claude chatbot, has long pitched itself as the ethical alternative among its competitors.
“We remain committed to developing safe AI systems that help people and organisations extend their capabilities, advance scientific discovery, and solve complex problems,” said Aparna Sridhar, Deputy General Counsel at Anthropic which is backed by both Amazon and Google-parent Alphabet.
The lawsuit was filed against Anthropic last year by best-selling mystery thriller writer Andrea Bartz, whose novels include We Were Never Here, along with The Good Nurse author Charles Graeber and The Feather Thief author Kirk Wallace Johnson.
They accused the company of stealing their work to train its Claude AI chatbot in order to build a multi-billion dollar business.
The company holds more than seven million pirated books in a central library, according to Judge Alsup’s June decision, and faced up to $150,000 in damages per copyrighted work.
His ruling was among the first to weigh in on how Large Language Models (LLMs) can legitimately learn from existing material.
It found that Anthropic’s use of the authors’ books was “exceedingly transformative” and therefore allowed under US law.
But he rejected Anthropic’s request to dismiss the case.
Anthropic was set to stand trial in December over its use of pirated copies to build its library of material.
Plaintiffs’ lawyers called the settlement, announced on Friday, “the first of its kind in the AI era.”
“It will provide meaningful compensation for each class work and sets a precedent requiring AI companies to pay copyright owners,” said lawyer Justin Nelson representing the authors. “This settlement sends a powerful message to AI companies and creators alike that taking copyrighted works from these pirate websites is wrong.”
The settlement could encourage more cooperation between AI developers and creators, according to Alex Yang, Professor of Management Science and Operations at London Business School.
“You need that fresh training data from human beings,” Mr Yang said. “If you want to grant more copyright to AI-created content, you must also strengthen mechanisms that compensate humans for their original contributions.”
AI Insights
Duke University pilot project examining pros and cons of using artificial intelligence in college | National News

AI Insights
91% of Jensen Huang’s $4.3 Billion Stock Portfolio at Nvidia Is Invested in Just 1 Artificial Intelligence (AI) Infrastructure Stock

Key Points
- Most stocks that Nvidia and CEO Jensen Huang invest in tend to be strategic partners or companies that can expand the AI ecosystem.
- For the AI sector to thrive, there is going to need to be a lot of supporting data centers and other AI infrastructure.
- One stock that Nvidia is heavily invested in also happens to be one of its customers, a first-mover in the AI-as-a-service space.
Nvidia (NASDAQ: NVDA), the largest company in the world by market cap, is widely known as the artificial intelligence (AI) chip king and the main pick-and-shovel play powering the AI revolution. But as such a big company that is making so much money, the company has all sorts of different operations and divisions aside from its main business.
For instance, Nvidia, which is run by CEO Jensen Huang, actually invests its own capital in publicly traded stocks, most of which seem to have to do with the company itself or the broader AI ecosystem. At the end of the second quarter, Nvidia owned six stocks collectively valued at about $4.3 billion. However, of this amount, 91% of Nvidia’s portfolio is invested in just one AI infrastructure stock.
A unique relationship
Nvidia has long had a relationship with AI data center company CoreWeave (NASDAQ: CRWV), having been a key supplier of hardware that drives the company’s business. CoreWeave builds data centers specifically tailored to meet the needs of companies looking to run AI applications.
These data centers are also equipped with hardware from Nvidia, including the company’s latest graphics processing units (GPUs), which help to train large language models. Clients can essentially rent the necessary hardware to run AI applications from CoreWeave, which saves them the trouble of having to build out and run their own infrastructure. CoreWeave’s largest customer by far is Microsoft, which makes up roughly 60% of the company’s revenue, but CoreWeave has also forged long-term deals with OpenAI and IBM.
Nvidia and CoreWeave’s partnership dates back to at least 2020 or 2021, and Nvidia also invested in the company’s initial public offering earlier this year. Wall Street analysts say it’s unusual to see a large supplier participate in a customer’s IPO. But Nvidia may see it as a key way to bolster the AI sector because meeting future AI demand will require a lot of energy and infrastructure.
CoreWeave is certainly seeing demand. On the company’s second-quarter earnings call, management said its contract backlog has grown to over $30 billion and includes previously discussed contracts with OpenAI, as well as other new potential deals with a range of different clients from start-ups to larger companies. Customers have also been increasing the length of their contracts with CoreWeave.
“In short, AI applications are beginning to permeate all areas of the economy, both through start-ups and enterprise, and demand for our cloud AI services is aggressively growing. Our cloud portfolio is critical to CoreWeave’s ability to meet this growing demand,” CoreWeave’s CEO Michael Intrator said on the company’s earnings call.
Is CoreWeave a buy?
Due to the demand CoreWeave is seeing from the market, the company has been aggressively expanding its data centers to increase its total capacity. To do this, CoreWeave has taken on significant debt, which the capital markets seem more than willing to fund.
At the end of the second quarter, current debt (due within 12 months) grew to about $3.6 billion, up about $1.2 billion year over year. Long-term debt had grown to about $7.4 billion, up roughly $2 billion year over year. That has hit the income statement hard, with interest expense through the first six months of 2025 up to over $530 million, up from roughly $107 million during the same period in 2024.
CoreWeave reported a loss of $1.73 per share in the first six months of the year, better than the $2.23 loss reported during the same period in 2024. Still, investors have expressed concern about growing competition in the AI-as-a-service space. They also question whether CoreWeave has a real moat, considering its customers and suppliers. For instance, while CoreWeave has a strong partnership with Nvidia, that does not prevent others in the space from forging similar partnerships. Additionally, CoreWeave’s main customers, like Microsoft, could choose to build their own data centers and infrastructure in-house.
CoreWeave also trades at over a $47 billion market cap but is still losing significant money. The valuation also means the company is trading at 10 times forward sales. Now, in fairness, CoreWeave has grown revenue through the first half of the year by 276% year over year. It all boils down to whether the company can maintain its first-mover advantage and whether the AI addressable market can keep growing like it has been.
I think investors can buy the stock for the more speculative part of their portfolio. The high dependence on industry growth and reliance on debt prevent me from recommending a large position at this time.