
AI Research

AI Revolutionizes Life Sciences: ISG Research Reveals Digital Transformation Trends



Innovation helps industry better serve patients through decentralized clinical trials, ongoing engagement, real-time safety monitoring, ISG Provider Lens® report says

STAMFORD, Conn.–(BUSINESS WIRE)–
Life sciences organizations are using AI and other emerging technologies to become more agile, innovative and patient-centric, according to a new research report published today by Information Services Group (ISG) (Nasdaq: III), a global AI-centered technology research and advisory firm.

The 2025 ISG Provider Lens® global Life Sciences Digital Services report finds that clinical trials, manufacturing, patient engagement and other activities in the industry are being transformed to meet changing expectations. Telehealth is making some operations more flexible, while increasing regulation and sustainability goals are creating demand for new solutions. Contract research organizations (CROs) play a growing role in the industry, often at the forefront of innovation.

“Digital transformation is changing what it means to be a leading life sciences enterprise,” said Jenn Stein, ISG partner and life sciences industry lead. “Life sciences companies want to move faster, understand unique patient needs and automate processes to be more responsive. Those that can adapt to the new challenges will thrive.”

Companies are going beyond traditional automation by deploying AI agents for clinical development, patient engagement and regulatory affairs, the report says. These self-learning systems continuously optimize processes and provide insights that improve decision-making. Benefits include better prediction of patient needs, streamlined clinical trials and compliance with evolving regulations.

In clinical development, AI is speeding up the design of trial protocols and improving oversight and documentation, ISG says. Telemedicine, real-time data capture and wearable devices enable decentralized and hybrid trials, which change the role of patients, integrating trials into their lives and allowing for broader participation. Through telehealth platforms and personalized communication, companies are building continuous, proactive relationships with patients.

Real-time data, AI-based surveillance and automation are changing the way life sciences organizations monitor and respond to safety requirements, the report says. Modern pharmacovigilance systems provide new ways to process safety data, submit documentation and remain prepared for audits. Pharmacovigilance tools are increasingly aligned with regulatory platforms, forming a digital backbone that allows companies to act quickly and scale up easily.

CROs and life sciences enterprises are working together more closely in a wide range of activities, shifting from purely transactional relationships to true partnerships, ISG says. CROs are involved in clinical development, patient engagement, compliance and other functions, often introducing AI capabilities to improve outcomes.

“Life sciences companies depend on CROs more than ever, especially for engaging with participants in decentralized trials,” said Rohan Sinha, principal analyst at ISG and co-author of the report. “Strategic partnerships are essential in this increasingly complex industry.”

The report also explores other trends in the life sciences industry, including the role of GenAI in personalized customer experience and the growing importance of automation and sustainability in life sciences manufacturing.

For more insights into the technology challenges faced by life sciences enterprises, plus ISG’s advice for addressing them, see the ISG Provider Lens® Focal Points briefing here.

The 2025 ISG Provider Lens® global Life Sciences Digital Services report evaluates the capabilities of 62 providers across eight quadrants: Clinical Development (Service Providers), Patient Engagement (Service Providers), Manufacturing Supply Chain (Service Providers), Pharmacovigilance and Regulatory Affairs — Digital Evolution (Service Providers), Commercial Operations — Digital Evolution (Service Providers), Clinical Development (CROs), Patient Engagement (CROs) and Pharmacovigilance and Regulatory Affairs — Digital Evolution (CROs).

The report names Accenture, Capgemini, Cognizant, Deloitte, HCLTech, Infosys, TCS and Wipro as Leaders in five quadrants each. It names ICON plc, IQVIA and PPD as Leaders in three quadrants each. Hexaware, NTT DATA, Parexel, Syneos Health and Tech Mahindra are named as Leaders in two quadrants each. Genpact, Indegene, LTIMindtree and WuXi AppTec are named as Leaders in one quadrant each.

In addition, Tech Mahindra is named as a Rising Star — a company with a “promising portfolio” and “high future potential” by ISG’s definition — in two quadrants. Clario, Hexaware, LTIMindtree, Medpace, Persistent Systems and Worldwide Clinical Trials are named as Rising Stars in one quadrant each.

In the area of customer experience, Persistent Systems is named the global ISG CX Star Performer for 2025 among life sciences digital service providers. Persistent Systems earned the highest customer satisfaction scores in ISG’s Voice of the Customer survey, part of the ISG Star of Excellence™ program, the premier quality recognition for the technology and business services industry.

Customized versions of the report are available from Hexaware, Indegene and PPD.

The 2025 ISG Provider Lens® global Life Sciences Digital Services report is available to subscribers or for one-time purchase on this webpage.

About ISG Provider Lens® Research

The ISG Provider Lens® Quadrant research series is the only service provider evaluation of its kind to combine empirical, data-driven research and market analysis with the real-world experience and observations of ISG’s global advisory team. Enterprises will find a wealth of detailed data and market analysis to help guide their selection of appropriate sourcing partners, while ISG advisors use the reports to validate their own market knowledge and make recommendations to ISG’s enterprise clients. The research currently covers providers offering their services globally, across Europe, as well as in the U.S., Canada, Mexico, Brazil, the U.K., France, Benelux, Germany, Switzerland, the Nordics, Australia and Singapore/Malaysia, with additional markets to be added in the future. For more information about ISG Provider Lens research, please visit this webpage.

About ISG

ISG (Nasdaq: III) is a global AI-centered technology research and advisory firm. A trusted partner to more than 900 clients, including 75 of the world’s top 100 enterprises, ISG is a long-time leader in technology and business services that is now at the forefront of leveraging AI to help organizations achieve operational excellence and faster growth. The firm, founded in 2006, is known for its proprietary market data, in-depth knowledge of provider ecosystems, and the expertise of its 1,600 professionals worldwide working together to help clients maximize the value of their technology investments.

Press Contacts:

Laura Hupprich, ISG

+1 203 517 3132

laura.hupprich@isg-one.com

Julianna Sheridan, Matter Communications for ISG

+1 978-518-4520

isg@matternow.com

Source: Information Services Group, Inc.





I asked ChatGPT to help me pack for my vacation – try this awesome AI prompt that makes planning your travel checklist stress-free



It’s that time of year again, when those of us in the northern hemisphere pack our sunscreen and get ready to venture to hotter climates in search of some much-needed Vitamin D.

Every year, I book a vacation, and every year I get stressed as the big day gets closer, usually forgetting to pack something essential, like a charger for my Nintendo Switch 2, or dare I say it, my passport.





Sakana AI: Think LLM dream teams, not single models



Enterprises may want to start thinking of large language models (LLMs) as ensemble casts that can combine knowledge and reasoning to complete tasks, according to Japanese AI lab Sakana AI.

Sakana AI in a research paper outlined a method called Multi-LLM AB-MCTS (Adaptive Branching Monte Carlo Tree Search) that uses a collection of LLMs to cooperate, perform trial-and-error and leverage strengths to solve complex problems.

In a post, Sakana AI said:

“Frontier AI models like ChatGPT, Gemini, Grok, and DeepSeek are evolving at a breathtaking pace amidst fierce competition. However, no matter how advanced they become, each model retains its own individuality stemming from its unique training data and methods. We see these biases and varied aptitudes not as limitations, but as precious resources for creating collective intelligence. Just as a dream team of diverse human experts tackles complex problems, AIs should also collaborate by bringing their unique strengths to the table.”

Sakana AI said AB-MCTS is a method for inference-time scaling to enable frontier AIs to cooperate and revisit problems and solutions. Sakana AI released the algorithm as an open source framework called TreeQuest, which has a flexible API that allows users to use AB-MCTS for tasks with multiple LLMs and custom scoring.

What’s interesting is that Sakana AI gets out of that zero-sum LLM argument. The companies behind LLM training would like you to think there’s one model to rule them all. And you’d do the same if you were spending so much on training models and wanted to lock in customers for scale and returns.

Sakana AI’s deceptively simple solution can only come from a company that’s not trying to play LLM leapfrog every few minutes. The power of AI is in the ability to maximize the potential of each LLM. Sakana AI said:

“We saw examples where problems that were unsolvable by any single LLM were solved by combining multiple LLMs. This went beyond simply assigning the best LLM to each problem. In (an) example, even though the solution initially generated by o4-mini was incorrect, DeepSeek-R1-0528 and Gemini-2.5-Pro were able to use it as a hint to arrive at the correct solution in the next step. This demonstrates that Multi-LLM AB-MCTS can flexibly combine frontier models to solve previously unsolvable problems, pushing the limits of what is achievable by using LLMs as a collective intelligence.”

A few thoughts:

  • Sakana AI’s move to emphasize collective intelligence over a single LLM and stack matters to enterprises that need architectures that don’t lock them into one provider.
  • AB-MCTS could play into what agentic AI needs to become to be effective and complement emerging standards such as Model Context Protocol (MCP) and Agent2Agent.
  • If combining multiple models to solve problems becomes frictionless, the costs will plunge. Will you need to pay up for OpenAI when you can leverage LLMs like DeepSeek combined with Gemini and a few others? 
  • Enterprises may want to start thinking about how to build decision engines instead of an overall AI stack. 
  • We could see a scenario where a collective of LLMs achieves superintelligence before any one model or provider. If that scenario plays out, can LLM giants maintain valuations?
  • The value in AI may not be in the infrastructure or foundational models in the long run, but the architecture and approaches.






Positive attitudes toward AI linked to problematic social media use



People who have a more favorable view of artificial intelligence tend to spend more time on social media and may be more likely to show signs of problematic use, according to new research published in Addictive Behaviors Reports.

The new study was designed to explore a question that, until now, had been largely overlooked in the field of behavioral research. While many factors have been identified as risk factors for problematic social media use—including personality traits, emotional regulation difficulties, and prior mental health issues—no research had yet explored whether a person’s attitude toward artificial intelligence might also be linked to unhealthy social media habits.

The researchers suspected there might be a connection, since social media platforms are deeply intertwined with AI systems that drive personalized recommendations, targeted advertising, and content curation.

“For several years, I have been interested in understanding how AI shapes societies and individuals. We also recently came up with a framework called IMPACT to provide a theoretical lens to understand this. IMPACT stands for the Interplay of Modality, Person, Area, Country/Culture and Transparency variables, all of relevance to understand what kind of view people form regarding AI technologies,” said study author Christian Montag, a distinguished professor of cognitive and brain sciences at the Institute of Collaborative Innovation at University of Macau.

Artificial intelligence plays a behind-the-scenes role in nearly every major social media platform. Algorithms learn from users’ behavior and preferences in order to maximize engagement, often by showing content that is likely to capture attention or stir emotion. These AI-powered systems are designed to increase time spent on the platform, which can benefit advertisers and the companies themselves. But they may also contribute to addictive behaviors by making it harder for users to disengage.

Drawing from established models in psychology, the researchers proposed that attitudes toward AI might influence how people interact with social media platforms. In this case, people who trust AI and believe in its benefits might be more inclined to embrace AI-powered platforms like social media—and potentially use them to excess.

To investigate these ideas, the researchers analyzed survey data from over 1,000 adults living in Germany. The participants were recruited through an online panel and represented a wide range of ages and education levels. After excluding incomplete or inconsistent responses and removing extreme outliers (such as those who reported using social media for more than 16 hours per day), the final sample included 1,048 people, with roughly equal numbers of men and women.

Participants completed a variety of self-report questionnaires. Attitudes toward artificial intelligence were measured using both multi-item scales and single-item ratings. These included questions such as “I trust artificial intelligence” and “Artificial intelligence will benefit humankind” to assess positive views, and “I fear artificial intelligence” or “Artificial intelligence will destroy humankind” to capture negative perceptions.

To assess social media behavior, participants were asked whether they used platforms like Facebook, Instagram, TikTok, YouTube, or WhatsApp, and how much time they spent on them each day, both for personal and work purposes. Those who reported using social media also completed a measure called the Social Networking Sites–Addiction Test, which includes questions about preoccupation with social media, difficulty cutting back, and using social media to escape from problems.

Overall, 956 participants said they used social media. Within this group, the researchers found that people who had more positive attitudes toward AI also tended to spend more time on social media and reported more problematic usage patterns. This relationship held for both men and women, but it was stronger among men. In contrast, negative attitudes toward AI showed only weak or inconsistent links to social media use, suggesting that it is the enthusiastic embrace of AI—not fear or skepticism—that is more closely associated with excessive use.

“It is interesting to see that the effect is driven by the male sample,” Montag told PsyPost. “On second thought, this is not such a surprise, because in several samples we saw that males reported higher positive AI attitudes than females (on average). So, we must take into account gender for research questions, such as the present one.”

“Further I would have expected that negative AI attitudes would have played a larger role in our work. At least for males we observed that fearing AI went also along with more problematic social media use, but this effect was mild at best (such a link might be explained via negative affect and escapism tendencies). I would not be surprised if such a link becomes more visible in future studies. Let’s keep in mind that AI attitudes might be volatile and change (the same of course is also true for problematic social media use).”

To better understand how these variables were related, the researchers conducted a mediation analysis. This type of analysis can help clarify whether one factor (in this case, time spent on social media) helps explain the connection between two others (positive AI attitudes and problematic use).

The results suggested that people with positive attitudes toward AI tended to spend more time on social media, and that this increased usage was associated with higher scores on the addiction measure. In other words, time spent on social media partly accounted for the link between AI attitudes and problematic behavior.
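A regression-based mediation analysis of this kind can be illustrated on synthetic data. The sketch below is not the authors' actual analysis: the variables, the coefficients 0.4 and 0.3, and the sample are all invented to show the mechanics, namely that the total effect of attitude on problematic use splits exactly into a direct path and an indirect path through time spent.

```python
# Minimal regression-based mediation sketch on synthetic data (not the
# study's actual data or model). X = positive AI attitude, M = daily
# social media time (mediator), Y = problematic-use score.
import numpy as np

rng = np.random.default_rng(42)
n = 1000
x = rng.normal(size=n)                      # positive AI attitude
m = 0.4 * x + rng.normal(size=n)            # time on social media
y = 0.3 * m + 0.1 * x + rng.normal(size=n)  # problematic-use score

def slopes(predictors, outcome):
    """OLS slope(s) of `outcome` on the predictor columns, with intercept."""
    X = np.column_stack([np.ones(len(outcome))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
    return beta[1:]  # drop the intercept

a = slopes([x], m)[0]           # attitude -> time on social media
b, c_prime = slopes([m, x], y)  # time -> use (b), direct effect (c')
c = slopes([x], y)[0]           # total effect of attitude on use

indirect = a * b  # the mediated path: attitude -> time -> problematic use
print(f"total={c:.2f}  direct={c_prime:.2f}  indirect={indirect:.2f}")
# For linear OLS on the same sample, total = direct + indirect exactly.
```

The decomposition in the final comment is an algebraic identity for linear models; real mediation studies add confidence intervals (e.g. via bootstrapping) for the indirect effect.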

“I personally believe that it is important to have a certain degree of positive attitude towards benevolent AI technologies,” Montag said. “AI will profoundly change our personal and business lives, so we should better prepare ourselves for active use of this technology. This said, our work shows that positive attitudes towards AI, which are known to be of relevance to predict AI technology use, might come with costs. This might be in form of over-reliance on such technology, or in our case overusing social media (where AI plays an important role in personalizing content). At least we saw this to be true for male study participants.”

Importantly, the researchers emphasized that their data cannot establish cause and effect. Because the study was cross-sectional—that is, based on a single snapshot in time—it is not possible to say whether positive attitudes toward AI lead to excessive social media use, or whether people who already use social media heavily are more likely to hold favorable views of AI. It’s also possible that a third factor, such as general interest in technology, could underlie both tendencies.

The study’s sample, while diverse in age and gender, skewed older on average, with a mean age of 45. This may limit the generalizability of the findings, especially to younger users, who are often more active on social media and may have different relationships with technology. Future research could benefit from focusing on younger populations or tracking individuals over time to see how their attitudes and behaviors change.

“In sum, our work is exploratory and should be seen as stimulating discussions. For sure, it does not deliver final insights,” Montag said.

Despite these limitations, the findings raise important questions about how people relate to artificial intelligence and how that relationship might influence their behavior. The authors suggest that positive attitudes toward AI are often seen as a good thing—encouraging people to adopt helpful tools and new innovations. But this same openness to AI might also make some individuals more vulnerable to overuse, especially when the technology is embedded in products designed to maximize engagement.

The researchers also point out that people may not always be aware of the role AI plays in their online lives. Unlike using an obvious AI system, such as a chatbot or virtual assistant, browsing a social media feed may not feel like interacting with AI. Yet behind the scenes, algorithms are constantly shaping what users see and how they engage. This invisible influence could contribute to compulsive use without users realizing how much the technology is guiding their behavior.

The authors see their findings as a starting point for further exploration. They suggest that researchers should look into whether positive attitudes toward AI are also linked to other types of problematic online behavior, such as excessive gaming, online shopping, or gambling—especially on platforms that make heavy use of AI. They also advocate for studies that examine whether people’s awareness of AI systems influences how those systems affect them.

“In a broader sense, we want to map out the positive and negative sides of AI technology use,” Montag explained. “I think it is important that we use AI in the future to lead more productive and happier lives (we investigated also AI-well-being in this context recently), but we need to be aware of potential dark sides of AI use.”

“We are happy if people are interested in our work and if they would like to support us by filling out a survey. Here we do a study on primary emotional traits and AI attitudes. Participants also get as a ‘thank you’ insights into their personality traits: https://affective-neuroscience-personality-scales.jimdosite.com/take-the-test/).”

The study, “The darker side of positive AI attitudes: Investigating associations with (problematic) social media use,” was authored by Christian Montag and Jon D. Elhai.


