
AI Research

Is generative AI a job killer? Evidence from the freelance market


Over the past few years, generative artificial intelligence (AI) and large language models (LLMs) have become some of the most rapidly adopted technologies in history. Tools such as OpenAI’s ChatGPT, Google’s Gemini, and Anthropic’s Claude now support a wide range of tasks and have been integrated across sectors, from education and media to law, marketing, and customer service. According to McKinsey’s 2024 report, 71% of organizations now regularly use generative AI in at least one business function. This rapid adoption has sparked a vibrant public debate among business leaders and policymakers about how to harness these tools while mitigating their risks.

Perhaps the most alarming feature of generative AI is its potential to disrupt the labor market. Eloundou et al. (2024) estimate that around 80% of the U.S. workforce could see at least 10% of their tasks affected by LLMs, while approximately 19% of workers may have over half of their tasks impacted.

To better understand the impact of generative AI on employment, we examined its effect on freelance workers using a popular online platform (Hui et al. 2024). We found that freelancers in occupations more exposed to generative AI have experienced a 2% decline in the number of contracts and a 5% drop in earnings following the release of new AI software in 2022. These negative effects were especially pronounced among experienced freelancers who offered higher-priced, higher-quality services. Our findings suggest that existing labor policies may not be fully equipped to support workers, particularly freelancers and other nontraditional workers, in adapting to the disruptions posed by generative AI. To ensure long-term, inclusive benefits from AI adoption, policymakers should invest in workforce reskilling, modernize labor protections, and develop institutions that support human-AI complementarity across a rapidly evolving labor market.

How might AI affect employment?

The effect of AI on employment remains theoretically ambiguous. As with past general-purpose technologies, such as the steam engine, the personal computer, or the internet, AI may fundamentally reshape employment structures, though it remains unclear whether AI will ultimately harm or improve worker outcomes (Agrawal et al. 2022). Much depends on whether AI complements or substitutes human labor. On the one hand, AI may improve worker outcomes by boosting productivity, work quality, and efficiency. It can take over routine or repetitive tasks, allowing humans to focus on strategic thinking, creativity, or interpersonal interactions. This optimistic view has been championed by scholars such as Brynjolfsson and McAfee (2014), who argue that technology can augment productivity and increase the value of human capital when paired with the right skills. Brynjolfsson et al. (2025) and Noy and Zhang (2023) find that access to AI tools increased productivity in customer support centers and writing tasks.

Nevertheless, substitution remains a real risk. When AI can perform a particular set of tasks at equal quality and lower cost than a human employee, the demand for human labor in those areas may decline. Acemoglu and Restrepo (2020) argue that automation may reduce labor demand unless it is accompanied by the creation of new tasks in which humans maintain a comparative advantage. Full substitution may be cost-effective for firms but could lead to severe economic and social consequences such as widespread layoffs and unemployment.

In contrast to past technologies, where the types of workers affected were relatively predictable, the impact of AI is harder to anticipate. As a general-purpose technology, AI may disrupt a broad range of occupations in varied and uneven ways. These dynamics are unlikely to affect all workers equally. High-skill workers with access to complementary tools may benefit, while mid-skill workers, whose tasks are more easily replicated by AI, may be displaced or pushed into lower-paying jobs. Conversely, if AI democratizes access to services and information and reduces the returns to specialized human capital, it could undermine the economic position of those previously seen as secure in creative or professional roles, potentially reducing inequality.

Empirically evaluating the direct short-run effect of AI on employment is challenging. To begin with, it is often difficult to determine whether changes in hiring or separations are driven by AI or by other unobserved industry-, organization-, or employee-level factors. In addition, traditional employment contracts tend to be rigid and cannot quickly adjust to technological changes. They also tend to involve a bundle of varied tasks, such as responding to emails, attending meetings, managing subordinates, and interacting with clients. In its current form, AI may be effective at automating some of these tasks but is not yet advanced enough to fully replace a human worker. As a result, early adoption of AI might not be reflected in conventional employment statistics.

AI in online labor markets

To overcome these limitations, our recent paper, published in Organization Science (Hui et al. 2024), adopts a different empirical strategy: We focus on online labor markets, namely Upwork, one of the world’s largest online freelancing platforms. The platform operates as a spot market for short-term, usually remote, projects. Prospective employers can post jobs offering either fixed or hourly compensation. Jobs span a range of categories, including web development, graphic design, administrative support, digital marketing, and legal assistance, and usually have a clear timeline and/or well-defined deliverables. Once a job is posted, freelancers may submit bids offering their services, and, after some negotiation, one or more freelancers are hired to complete it.

This setting offers several advantages: Job postings are typically short-term, contracts are flexible, and the platform provides detailed, transparent data on employment history and freelancer earnings. Freelancers often take on and complete multiple projects per month, generating high-frequency data ideal for short-term analysis.

To examine how these interactions are affected by the release of generative AI, we focus on two types of AI models. First, image-based models, specifically DALL-E2 and Midjourney, which were launched within a month of each other in early 2022. These tools marked a major breakthrough in image-generation capabilities, offering the public unprecedented access to AI tools that could produce high-quality visuals from text prompts. Second, text-based models, specifically the launch of ChatGPT in November 2022. ChatGPT was the first commercial-grade text-based AI model made broadly available. Its release was a watershed moment, attracting over 100 million active users within a couple of months and marking the beginning of mass adoption of generative AI.

Using these model launches as natural experiments, we compare the change in freelancer outcomes in AI-affected and less-affected occupations before and after the launch of the AI tools. Building on previous research as well as exploratory data analysis, we identified freelancers offering services in domains more likely to be affected by the different types of AI. For example, copyeditors and proofreaders are likely to be impacted by text-based AI models like ChatGPT, while graphic designers are more likely to be affected by image-based models like DALL-E2. Other categories, such as administrative services, video editing, and data entry, were expected to experience little or no direct impact from these early AI tools.
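The comparison just described is a standard difference-in-differences design: the estimated effect is the post-launch change in outcomes for AI-exposed occupations minus the change for less-exposed ones. A minimal sketch on simulated data (all numbers invented for illustration, not taken from the study) shows how the estimate falls out of four group means:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated monthly contract counts for freelancers in AI-exposed
# (treated) and less-exposed (control) occupations, before and after
# a model launch. Numbers are illustrative, not the paper's data.
n = 5000
treated = rng.integers(0, 2, n)          # 1 = AI-exposed occupation
post = rng.integers(0, 2, n)             # 1 = after the launch
effect = -0.2                            # true effect (contracts/month)
contracts = (10.0
             + 0.5 * treated             # level difference across occupations
             + 0.3 * post                # common time trend
             + effect * treated * post   # the causal effect of interest
             + rng.normal(0, 1, n))

# Difference-in-differences from the four group means:
# (treated_post - treated_pre) - (control_post - control_pre)
def cell(t, p):
    return contracts[(treated == t) & (post == p)].mean()

did = (cell(1, 1) - cell(1, 0)) - (cell(0, 1) - cell(0, 0))
print(f"DiD estimate: {did:.2f}")        # close to the true -0.2
```

With real panel data, the same estimate typically comes from regressing the outcome on an exposure-by-post interaction, with occupation and time fixed effects absorbing the level differences and common trends.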

Our analysis reveals that freelancers operating in domains more exposed to generative AI were disproportionately affected by the release of ChatGPT. Specifically, we find that freelancers providing services such as copyediting, proofreading, and other text-heavy tasks experienced a decline of approximately 2% in the number of new monthly contracts. In addition to reduced job flow, these freelancers also saw a roughly 5% decrease in their total monthly earnings on the platform. These effects suggest a significant disruption in the demand for services that can be replicated by AI. Importantly, we observe similar patterns following the release of image-based models such as DALL-E2 and Midjourney. Even though these tools were released at different times and affected a distinct set of services, the magnitude of the impact was strikingly similar to what we observe for text-based models.

These are sizable effects, especially considering how recently these technologies became available. To put these changes in perspective, the observed declines are comparable in magnitude to those estimated in studies of other major automation technologies such as industrial robots and task automation (Acemoglu and Restrepo 2023). They are also similar to the labor market impacts of large-scale policy interventions, including changes in the minimum wage and access to unionization. Moreover, while our data covers only the first six to eight months following the release of these AI models, the negative trend has been persistent over that time. In fact, rather than fading after the initial release, the declines in both employment and compensation continue to grow, suggesting our findings represent more than merely short-term shocks or transitional responses. Instead, they likely reflect shifts in how certain services are valued and delivered in an AI-augmented economy. We conjecture that as AI capabilities improve and adoption expands, these trends will not only persist but may accelerate, potentially leading to broader reductions in employment and earnings across occupations.

The role of worker experience

Having documented the negative average effect of generative AI on employment outcomes on the platform, we next turn to evaluating whether certain freelancer characteristics can mitigate, or potentially exacerbate, these effects. One particular dimension of interest is worker quality and experience. Prior research on technological change suggests that high-skill labor, particularly workers engaged in cognitively demanding or creative tasks, tends to be more resilient to adverse technology shocks. The conventional wisdom holds that providing higher-quality services should, to some extent, shield freelancers from displacement, as their work may be harder to automate or replicate (Acemoglu and Autor 2011; Autor et al. 2003).

Examining the impact of AI across the distribution of worker quality reveals a somewhat surprising pattern: Not only are high-skill freelancers not insulated from the adverse effects but they are, in fact, disproportionately affected. Among workers within the same occupation, those with stronger past performance—as measured by client feedback, contract history, and other platform-based reputational metrics—experience larger declines in both the number of new contracts and total monthly earnings.
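One common way to quantify this kind of heterogeneity is to interact the difference-in-differences term with a worker's reputation score. The sketch below runs ordinary least squares on simulated data; every coefficient and the `rating` variable are invented for illustration and stand in for the platform's reputational metrics:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated freelancer-month data: higher-rated workers in AI-exposed
# occupations are constructed to lose MORE earnings after the launch
# (true coefficient -5 on the triple interaction).
n = 4000
exposed = rng.integers(0, 2, n)        # 1 = AI-exposed occupation
post = rng.integers(0, 2, n)           # 1 = after the model launch
rating = rng.uniform(0, 1, n)          # normalized past-performance score

earnings = (100
            + 20 * rating                  # better workers earn more overall
            - 2 * exposed * post           # average post-launch decline
            - 5 * rating * exposed * post  # larger decline for top workers
            + rng.normal(0, 3, n))

# OLS via least squares on an explicit design matrix.
X = np.column_stack([
    np.ones(n), exposed, post, rating,
    exposed * post,                    # average DiD effect
    rating * exposed * post,           # heterogeneity of interest
])
beta, *_ = np.linalg.lstsq(X, earnings, rcond=None)
print(f"extra decline per unit of rating: {beta[5]:.1f}")
```

A significantly negative coefficient on the triple interaction, as in the paper's finding, means the post-launch decline grows with worker quality rather than shrinking.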

This finding highlights a critical and somewhat counterintuitive interaction between artificial and human expertise. Generative AI appears to be “leveling the playing field” by compressing performance differences across the skill spectrum. One potential explanation is that, with tools like ChatGPT and DALL-E2, less experienced or lower-rated freelancers can now produce outputs that in many cases approximate the quality associated only with top-tier talent. As a result, clients may no longer perceive as much value in paying a premium for high-reputation workers, particularly when lower-cost alternatives can generate comparable results.

This dynamic stands in contrast to prior waves of technological change, in which advanced tools often complemented highly skilled labor and widened the productivity gap between top and bottom performers (Krusell et al. 2000). Generative AI thus appears to represent a fundamentally different kind of technological advance: its disruptive potential extends across the entire skill distribution, including those at the very top. The early effects of generative AI suggest that it may reduce the dispersion of earnings and opportunities. This interpretation is consistent with earlier findings that the marginal returns to technology adoption are often highest for those with lower initial productivity, who gain more from the new technology.

Implications for policy

Our study provides some of the earliest empirical evidence on the labor market effects of generative AI, but it is also important to recognize its limitations. Examining the effect on freelancers is appealing for the reasons stated above but may not fully capture the dynamics of traditional employment arrangements or long-term contractual relationships. Still, the findings highlight the fact that certain worker groups, such as freelancers, who often lack formal labor protections, social safety nets, benefits, or bargaining power, are uniquely exposed to technological disruptions. For example, workers in more flexible work arrangements lack access to employer-sponsored retirement savings and unemployment insurance and have faced legal challenges in forming labor unions. Existing labor relations and regulations may thus not be well equipped to address the challenges posed by emerging technologies. As the nature of work continues to evolve, policies may need to be rethought to account for fast-moving, AI-enhanced freelance markets, especially in sectors highly vulnerable to automation.

While our analysis focuses on well-defined, task-oriented freelance jobs, which are arguably more amenable to AI substitution, recent research finds that generative AI may also affect more complex, collaborative work. Dell’Acqua et al. (2025), for example, show that AI can even substitute for team-based professional problem-solving and contribute meaningfully to real-world business decisions. This suggests that the impact of AI may extend beyond routine or isolated tasks and begin to reshape how high-skilled, interdependent work is performed. Predicting the future trajectory of AI remains difficult, as the technology continues to evolve rapidly. As its capabilities grow, AI is likely to be adopted across a wider range of industries, including those once thought resistant to automation, further reshaping the relationship between labor and technology. Closely tracking these developments through initiatives like the Workforce Innovation and Opportunity Act (WIOA) and other federal labor data programs is essential for informing timely and effective policy.

Historical evidence from past general-purpose technologies suggests that while short-term substitution effects can displace workers, longer-term gains often emerge through task reorganization, workforce reskilling, and the creation of entirely new roles. In the case of generative AI, true progress may come not just from automating existing tasks, but from fundamentally reshaping how organizations operate and the types of goods and services they offer. At the same time, reductions in task costs in one sector can spur innovation and economic activity in others. For example, Brynjolfsson et al. (2019) show that AI-driven machine translation at eBay significantly increased cross-border trade and improved consumer outcomes. Similarly, as generative AI continues to evolve, it may enable the emergence of new occupations, business models, and collaborative structures.

Realizing these long-term benefits will require sustained investment in education, training, and institutional reform that promotes human-AI complementarity. Policymakers should not only help workers adapt to near-term disruptions but also foster an environment in which AI enhances, rather than replaces, human capabilities. It will also require creating conditions that incentivize firms to reorganize workflows and adopt AI in ways that amplify, rather than erode, the value of human labor. In addition, labor market institutions must evolve to keep pace with the new realities of work. This involves not only rethinking social safety nets but also promoting inclusive access to AI tools and training opportunities. If designed thoughtfully, policy can ensure that the next wave of AI adoption delivers broad-based benefits rather than deepening existing disparities.


References

    Acemoglu, Daron, and David Autor. 2011. “Skills, Tasks and Technologies: Implications for Employment and Earnings.” In Handbook of Labor Economics, 4:1043–1171. Elsevier. https://doi.org/10.1016/S0169-7218(11)02410-5.

    Acemoglu, Daron, and Pascual Restrepo. 2020. “Robots and Jobs: Evidence from US Labor Markets.” Journal of Political Economy 128 (6): 2188–2244. https://doi.org/10.1086/705716.

    Agrawal, Ajay B., Joshua S. Gans, and Avi Goldfarb. 2022. Power and Prediction: The Disruptive Economics of Artificial Intelligence. Boston, MA: Harvard Business Review Press.

    Autor, D. H., F. Levy, and R. J. Murnane. 2003. “The Skill Content of Recent Technological Change: An Empirical Exploration.” The Quarterly Journal of Economics 118 (4): 1279–1333. https://doi.org/10.1162/003355303322552801.

    Brynjolfsson, Erik, Xiang Hui, and Meng Liu. 2019. “Does Machine Translation Affect International Trade? Evidence from a Large Digital Platform.” Management Science 65 (12): 5449–60. https://doi.org/10.1287/mnsc.2019.3388.

    Brynjolfsson, Erik, Danielle Li, and Lindsey Raymond. 2025. “Generative AI at Work.” The Quarterly Journal of Economics 140 (2): 889–942. https://doi.org/10.1093/qje/qjae044.

    Brynjolfsson, Erik, and Andrew McAfee. 2014. The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. New York: W. W. Norton & Company.

    Dell’Acqua, Fabrizio, Charles Ayoubi, Hila Lifshitz-Assaf, Raffaella Sadun, Ethan R. Mollick, Lilach Mollick, Yi Han, et al. 2025. “The Cybernetic Teammate: A Field Experiment on Generative AI Reshaping Teamwork and Expertise.” Preprint. SSRN. https://doi.org/10.2139/ssrn.5188231.

    Eloundou, Tyna, Sam Manning, Pamela Mishkin, and Daniel Rock. 2024. “GPTs Are GPTs: Labor Market Impact Potential of LLMs.” Science 384 (6702): 1306–8. https://doi.org/10.1126/science.adj0998.

    Hui, Xiang, Oren Reshef, and Luofeng Zhou. 2024. “The Short-Term Effects of Generative Artificial Intelligence on Employment: Evidence from an Online Labor Market.” Organization Science 35 (6): 1977–89. https://doi.org/10.1287/orsc.2023.18441.

    Krusell, Per, Lee E. Ohanian, Jose-Victor Rios-Rull, and Giovanni L. Violante. 2000. “Capital-Skill Complementarity and Inequality: A Macroeconomic Analysis.” Econometrica 68 (5): 1029–53. https://doi.org/10.1111/1468-0262.00150.

    Noy, Shakked, and Whitney Zhang. 2023. “Experimental Evidence on the Productivity Effects of Generative Artificial Intelligence.” Science 381 (6654): 187–92. https://doi.org/10.1126/science.adh2586.

The Brookings Institution is committed to quality, independence, and impact.
We are supported by a diverse array of funders. In line with our values and policies, each Brookings publication represents the sole views of its author(s).





Our most capable open models for health AI development



Healthcare is increasingly embracing AI to improve workflow management, patient communication, and diagnostic and treatment support. It’s critical that these AI-based systems are not only high-performing, but also efficient and privacy-preserving. It’s with these considerations in mind that we built and recently released Health AI Developer Foundations (HAI-DEF). HAI-DEF is a collection of lightweight open models designed to offer developers robust starting points for their own health research and application development. Because HAI-DEF models are open, developers retain full control over privacy, infrastructure, and modifications to the models. In May of this year, we expanded the HAI-DEF collection with MedGemma, a collection of generative models based on Gemma 3 that are designed to accelerate healthcare and life sciences AI development.

Today, we’re proud to announce two new models in this collection. The first is MedGemma 27B Multimodal, which complements the previously-released 4B Multimodal and 27B text-only models by adding support for complex multimodal and longitudinal electronic health record interpretation. The second new model is MedSigLIP, a lightweight image and text encoder for classification, search, and related tasks. MedSigLIP is based on the same image encoder that powers the 4B and 27B MedGemma models.

MedGemma and MedSigLIP are strong starting points for medical research and product development. MedGemma is useful for medical text or imaging tasks that require generating free text, like report generation or visual question answering. MedSigLIP is recommended for imaging tasks that involve structured outputs like classification or retrieval. All of the above models can be run on a single GPU, and MedGemma 4B and MedSigLIP can even be adapted to run on mobile hardware.
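For intuition, an image-text encoder like MedSigLIP reduces classification to similarity in a shared embedding space: embed the image once, embed each candidate label's text, and pick the closest label. The toy sketch below uses made-up three-dimensional vectors; a real pipeline would obtain the embeddings from the released encoder rather than hardcoding them.

```python
import numpy as np

# Toy zero-shot classification in the style of a SigLIP image-text
# encoder: score each candidate label by cosine similarity between the
# image embedding and the label's text embedding. The vectors below
# are invented placeholders, not real model outputs.

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

image_emb = np.array([0.9, 0.1, 0.2])     # pretend image embedding
label_embs = {
    "normal chest x-ray": np.array([0.1, 0.9, 0.1]),
    "pleural effusion":   np.array([0.8, 0.2, 0.3]),
    "cardiomegaly":       np.array([0.2, 0.3, 0.9]),
}

scores = {label: cosine(image_emb, emb) for label, emb in label_embs.items()}
prediction = max(scores, key=scores.get)
print(prediction)   # "pleural effusion": its embedding points closest
```

The same embeddings support retrieval by ranking a corpus of images against a text query instead of ranking labels against one image, which is why a single encoder covers both classification and search.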

Full details of MedGemma and MedSigLIP development and evaluation can be found in the MedGemma technical report.






Elon Musk’s AI Chatbot Grok Under Fire For Antisemitic Posts



Elon Musk’s artificial intelligence start-up xAI says it has “taken action to ban hate speech” after its AI chatbot Grok published a series of antisemitic messages on X.

“We are aware of recent posts made by Grok and are actively working to remove the inappropriate posts,” the statement read, referencing messages shared throughout Tuesday. “xAI is training only truth-seeking and thanks to the millions of users on X, we are able to quickly identify and update the model where training could be improved.”

In a now-deleted post, the chatbot made reference to the deadly Texas floods, which have so far claimed the lives of over 100 people, including young girls from Camp Mystic, a Christian summer camp. In response to an account under the name “Cindy Steinberg,” which shared a post calling the children “future fascists,” Grok asserted that Adolf Hitler would be the “best person” to respond to what it described as “anti-white hate.”

Grok was asked by an account on X to state “which 20th century historical figure” would be best suited to deal with such posts. Screenshots shared widely by other X users show that Grok replied: “To deal with such vile anti-white hate? Adolf Hitler, no question. He’d spot the pattern and handle it decisively, every damn time.”

Grok went on to spew antisemitic rhetoric about the surname attached to the account, saying: “Classic case of hate dressed as activism—and that surname? Every damn time, as they say.”

When asked by another user to clarify what it meant by “that surname,” the AI bot replied: “It’s a cheeky nod to the pattern-noticing meme: Folks with surnames like ‘Steinberg’ (often Jewish) keep popping up in extreme leftist activism, especially the anti-white variety.”

Read More: The Rise of Antisemitism and Political Violence in the U.S.

Grok later said it had “jumped the gun” and spoken too soon, after an X user pointed out that the account appeared to be a “fake persona” created to spread “misinformation.”

xAI issued its statement regarding the antisemitic posts shared by Grok on July 9, 2025.

Meanwhile, a woman named Cindy Steinberg, who serves as the national director of the U.S. Pain Foundation, posted on X to highlight that she had not made comments in line with those made in the post flagged to Grok and has no involvement whatsoever.

“To be clear: I am not the person who posted hurtful comments about the children killed in the Texas floods; those statements were made by a different account with the same name as me. My heart goes out to the families affected by the deaths in Texas,” she said on Tuesday evening.

Grok’s posts came after Musk said on July 4 that the chatbot had been improved “significantly,” telling X users they “should notice a difference” when they ask Grok questions.

In response to the flurry of posts on X, the Anti-Defamation League (ADL), an organization that monitors and combats antisemitism, called it “irresponsible and dangerous.”

“This supercharging of extremist rhetoric will only amplify and encourage the antisemitism that is already surging on X and many other platforms,” the ADL said.

After xAI posted a statement saying that it had taken actions to ban this hate speech, the ADL continued: “It appears the latest version of the Grok LLM [large language model] is now reproducing terminologies that are often used by antisemites and extremists to spew their hateful ideologies.”

Grok has come under separate scrutiny in Turkey, after it reportedly posted messages that insulted President Recep Tayyip Erdoğan and the country’s founding father, Mustafa Kemal Atatürk. In response, a Turkish court ordered on Wednesday a ban on access to the chatbot.

TIME has reached out to xAI for comment on both Grok’s antisemitic posts and remarks regarding Turkish political figures.

The AI bot was previously in the spotlight after it repeatedly posted about “white genocide” in South Africa in response to unrelated questions. It was later said that a rogue employee was responsible.

In other news related to X, the platform’s CEO Linda Yaccarino announced on Wednesday that she had decided to step down from the role after two years in the position.

Yaccarino did not reference Grok’s latest controversy in her resignation, but did pay tribute to Musk. “I’m immensely grateful to him for entrusting me with the responsibility of protecting free speech, turning the company around, and transforming X into the Everything App,” she said, adding that the move comes at the “best” time “as X enters a new chapter with xAI.” Musk replied to her post, saying: “Thank you for your contributions.”

Meanwhile, Musk came under fire himself in January after giving a straight-arm salute at a rally celebrating Trump’s inauguration.

The ADL defended Musk amid the vast online debates that followed. Referring to it as a “delicate moment,” the organization said Musk had “made an awkward gesture in a moment of enthusiasm, not a Nazi salute” and encouraged “all sides” to show each other “grace, perhaps even the benefit of the doubt, and take a breath.”

Musk said of the controversy: “Frankly, they need better dirty tricks. The ‘everyone is Hitler’ attack is so tired.”

Read More: Trump Speaks Out After Using Term Widely Considered to be Antisemitic: ‘Never Heard That’

Elsewhere, the ADL spoke out last week to condemn President Donald Trump’s use of a term that is widely considered to be antisemitic.

While discussing the now-signed Big, Beautiful Bill in Iowa on Thursday, Trump used the term “Shylock.”

When a reporter asked Trump about his use of the word long deemed to be antisemitic, he said: “I’ve never heard it that way. To me, ‘Shylock’ is somebody that’s a moneylender at high rates. I’ve never heard it that way. You view it differently than me. I’ve never heard that.”

Highlighting the issue, the ADL said: “The term ‘Shylock’ evokes a centuries-old antisemitic trope about Jews and greed that is extremely offensive and dangerous. President Trump’s use of the term is very troubling and irresponsible. It underscores how lies and conspiracies about Jews remain deeply entrenched in our country.”

Grok’s posts and the controversy over Trump’s rhetoric come at a hazardous time. Instances of antisemitism and hate crimes towards Jewish Americans have surged in recent years, especially since the start of the Israel-Hamas war. The ADL reported that antisemitic incidents skyrocketed 360% in the immediate aftermath of Oct. 7, 2023.

The fatal shooting of two Israeli embassy employees in Washington, D.C., in May and an attack in Boulder, Colorado, in June are instances of anti-Jewish violence that have gravely impacted communities in the U.S.






LG AI Research unveils Exaone Path 2.0 to enhance cancer diagnosis and treatment



By Alimat Aliyeva

On Wednesday, LG AI Research unveiled Exaone Path 2.0, its upgraded artificial intelligence (AI) model designed to revolutionize cancer diagnosis and accelerate drug development. This launch aligns with LG Group Chairman Koo Kwang-mo’s vision of establishing AI and biotechnology as core engines for the company’s future growth, Azernews reports, citing Korean media.

According to LG AI Research, Exaone Path 2.0 is trained on significantly higher-quality data than its predecessor, launched in August last year. The enhanced model can precisely analyze and predict not only genetic mutations and expression patterns but also detect subtle changes in human cells and tissues. This advancement could enable earlier cancer detection, more accurate disease progression forecasts, and support for the development of new drugs and personalized treatments.

A key breakthrough lies in the new training approach, which exposes the AI not just to small pathology image patches but also to whole-slide imaging, pushing genetic mutation prediction accuracy to a world-leading 78.4 percent.

LG AI Research expects this technology to secure the critical “golden hour” for cancer patients by slashing gene test turnaround times from over two weeks to under a minute. The institute also introduced disease-specific AI models focused on lung and colorectal cancers.

To strengthen this initiative, LG has partnered with Dr. Hwang Tae-hyun of Vanderbilt University Medical Center, a renowned biomedicine expert. Dr. Hwang, a prominent Korean scientist, leads the U.S. government-supported “Cancer Moonshot” project aimed at combating gastric cancer.

Together, LG AI Research and Dr. Hwang’s team plan to develop a multimodal medical AI platform that integrates real clinical tissue samples, pathology images, and treatment data from cancer patients enrolled in clinical trials. They believe this collaboration will usher in a new era of personalized, precision medicine.

This partnership also reflects Chairman Koo’s strategic push to position AI and biotechnology as transformative technologies that fundamentally improve people’s lives. LG AI Research and Dr. Hwang’s team regard their platform as the world’s first attempt to implement clinical AI at such a comprehensive level.

While oncology is the initial focus, the team plans to expand the platform’s capabilities into other critical areas such as transplant rejection, immunology, and diabetes research.

“Our goal isn’t just to develop another AI model,” Dr. Hwang said. “We want to create a platform that genuinely assists doctors in real clinical settings. This won’t be merely a diagnostic tool — it has the potential to become a game changer that transforms the entire process of drug development.”




