
Artificial Intelligence And State Tax Agencies

In this episode of Tax Notes Talk, Ryan Minnick of the Federation of Tax Administrators discusses how state tax agencies are approaching artificial intelligence and shares insights from the FTA’s upcoming briefing paper on generative AI and its 2024 tax agency survey.

Tax Notes Talk is a podcast produced by Tax Notes. This transcript has been edited for clarity.

David D. Stewart: Welcome to the podcast. I’m David Stewart, editor in chief of Tax Notes Today International. This week: AI and state tax.

In the past few years, every industry has looked at implementing artificial intelligence into their workflows, and state tax agencies are no exception. But the sensitive nature of the data tax agencies must handle requires an especially careful use of the technology.

So how can state tax administrators use artificial intelligence, and how can agencies balance the adoption of new technology with the need to protect taxpayer information?

Here to talk more about this is Tax Notes reporter Emily Hollingsworth. Emily, welcome back to the podcast.

Emily Hollingsworth: Thanks, Dave. Glad to be back.

David D. Stewart: Now, there have been a lot of developments in the past year or so with state tax administrations and the adoption of AI. Could you give us some quick background on what’s been happening and where we are?

Emily Hollingsworth: Absolutely. Among states, AI policy and legislation have, in a word, exploded this year. The National Conference of State Legislatures recently said that all 50 states, the District of Columbia, Puerto Rico, and the Virgin Islands have introduced legislation about AI this year. Twenty-eight of those states and the Virgin Islands enacted AI legislation, including measures regulating AI and prohibiting its use for certain criminal activity.

For quick context, I’ll be talking about two forms of AI. There’s generative AI, which can generate text, images, or other items. ChatGPT is a well-known example of a technology that uses generative AI. Then there’s machine learning AI. This is an older technology that uses algorithms to identify patterns in data.

When it comes to examples of state tax administrations and recent developments on AI, California’s Department of Tax and Fee Administration, or the CDTFA, announced back in late April, early May, that it will be working with the firm SymSoft Solutions to deploy a generative AI solution to enhance the department’s customer services. This solution is trained on the department’s reference materials, including manuals, guides, and other documents. When a taxpayer has a question, the solution will generate potential responses that the department agent can provide to the taxpayer. The goal is to use the solution to cut down on customer wait times and help alleviate workloads for agents during peak tax filing periods. Now, while the CDTFA and SymSoft are under a year-long contract to deploy the solution, as I understand it, the solution isn’t currently being used in real time for customers’ questions.
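
To make that design concrete, here is a minimal, hypothetical sketch of the general retrieve-then-draft pattern described above: index an agency’s reference materials, pull the passages most relevant to a taxpayer’s question, and assemble a grounded prompt that a generative model would turn into a suggested reply for a human agent to review. The documents, names, and helper functions are invented for illustration; this is not the CDTFA/SymSoft implementation.

```python
# Illustrative sketch of a retrieve-then-generate ("RAG") workflow.
# All documents and names below are placeholders, not real agency guidance.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Stand-in for an agency's reference materials (manuals, guides, notices).
REFERENCE_DOCS = [
    "Use tax may apply to purchases from out-of-state retailers when sales tax was not collected.",
    "Quarterly prepayment deadlines fall in the month following each reporting period.",
    "A seller's permit is required before making retail sales of tangible personal property.",
]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(REFERENCE_DOCS)

def retrieve(question: str, top_k: int = 2) -> list[str]:
    """Return the reference passages most similar to the taxpayer's question."""
    scores = cosine_similarity(vectorizer.transform([question]), doc_matrix)[0]
    ranked = sorted(range(len(REFERENCE_DOCS)), key=lambda i: scores[i], reverse=True)
    return [REFERENCE_DOCS[i] for i in ranked[:top_k]]

def draft_agent_response(question: str) -> str:
    """Assemble a grounded prompt; a generative model would turn this into a
    suggested reply that a human agent reviews before it reaches the taxpayer."""
    context = "\n".join(f"- {passage}" for passage in retrieve(question))
    return (
        "Answer the taxpayer's question using only the passages below.\n"
        f"Passages:\n{context}\n"
        f"Question: {question}\n"
        "Draft reply for agent review:"
    )

if __name__ == "__main__":
    print(draft_agent_response("Do I owe use tax on an online purchase from another state?"))
```

The design point the sketch mirrors is that the model only drafts replies grounded in the agency’s own published materials, and a human agent stays in the loop before anything is sent to the taxpayer.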

David D. Stewart: Well, I know you’ve been following this area pretty closely. Could you give us an idea of what sort of things you’ve been working on lately?

Emily Hollingsworth: Definitely. I’m currently working on a report about this generative AI solution from the CDTFA, looking more closely into the initial testing period the solution underwent in 2024, as well as the project’s next steps.

A separate report I’m working on looks into machine learning AI tools used by tax departments for decades, for everything from tax return processing to fraud detection and prevention. As state leaders, lawmakers, and advocacy groups call for oversight and regular inspection of AI tools, I’m looking into whether these machine learning tools would also be subject to this sort of inspection and oversight.

David D. Stewart: Now, I understand you recently talked with somebody about this. Who did you talk to?

Emily Hollingsworth: I talked to Ryan Minnick, who is the chief operating officer with the Federation of Tax Administrators.

David D. Stewart: And what sort of things did you get into?

Emily Hollingsworth: Well, the Federation of Tax Administrators, or FTA, has been doing an enormous amount to provide training and resources for state tax agencies looking to pilot or implement AI solutions. We got into the FTA’s upcoming briefing paper on generative AI, as well as its 2024 tax agency survey.

We also delved into topics like data security, transparency and trust, and the roles that state tax agencies are taking when it comes to piloting or implementing AI solutions.

David D. Stewart: All right. Let’s go to that interview.

Emily Hollingsworth: Ryan Minnick, welcome to the podcast.

Ryan Minnick: Awesome. Thanks so much for having me, Emily. I’m excited to chat with you today.

Emily Hollingsworth: Likewise. So the Federation of Tax Administrators is developing a briefing paper on generative AI technology. The paper is intended to equip tax agencies with information around generative AI, particularly as states and state legislatures face increasing pressure to stay competitive with other states and pilot or implement generative AI.

When is the briefing paper expected to be released, and what aspects of generative AI will this briefing paper cover?

Ryan Minnick: It’s a great question, and I guess I’ll back up a little bit and explain how this project started and where we are in the scope of things. So as you know, FTA, working with all the states, we’re very focused on emerging issues. So in the 10 years I’ve been here, whether it’s blockchain or whether it’s some movement to the cloud, there’s always some sort of technical innovation on the horizon. And so whenever we see a big one, we tend to organize information around it to help our members understand what their peers are thinking, understand what’s happening in the private and academic sectors, just to really make informed decisions for themselves. So the unique thing about states always is that states are sovereign entities. They’re going to take whatever approach that they feel is best, but they rely on organizations like FTA to convene them and help guide them a little bit and give them the information that is the most important to them.

So a little over a year and a half ago, we formed an AI working group, and we did it in two parts. We started with education, so we hosted, over the course of a 12-month period, two rounds of briefings with experts from every corner of the technology sector and academic sector. So we had researchers that focused on the application of transparency and ethics in AI, we had researchers who focused on large language models and how they actually work, and everything at every depth, from technical to business.

We had some private sector groups graciously share with us concepts that they had built for other sectors that leverage these technologies, just so the attendees could get a feel for understanding how you take this fantastical thing of generative AI and start to translate it into what you could actually use it for in your day-to-day work — because there’s been, I think, so much really good marketing by the companies that produce generative AI technologies that it becomes a little bit hard to conceptualize what you might actually do in a government agency with it. How is it potentially going to help us? What’s the innovation there?

So after those two rounds of briefings, we took a set of volunteers representing a good number of our members. A couple dozen people got together and organized themselves into three groups. The first group has been focused on background education — helping anyone who’s reading the white paper understand the terminology that’s used frequently around the technology, the ways we encounter the technology in our day-to-day lives, and other forms of AI, because people often confuse generative AI with things like machine learning and other tools that have been around for quite some time.

We have a group that focuses on opportunities — so not necessarily in production, although some are, but different examples of where this technology might be utilized or where it could be utilized. And they even went a step further and fleshed out some of those concepts just to articulate to both business and technology stakeholders what the possibilities are.

And then there’s the third group — and I have to admit, my partner in crime for this particular working group is our general counsel, Brian Oliner, who formerly was the AG representing the Maryland Comptroller’s Office. Every time he and I talk about emerging technology, he’s such a good foil for a nerd like me, because he’s a seasoned attorney who has seen complex vendor contracts and understands the ways that states need to protect themselves from a legal and statutory standpoint. He and I always like to hash out the details of technology. So he worked primarily with the third group, which is our considerations and risks group.

So the goal is that everyone from executive and business stakeholders in the agency through to the technology stakeholders who deal primarily in strategy will be able to consume the appropriate parts of this white paper and come out on the other side armed with information about how the technology might be applied, understand it a little better through the tax lens — the tax agency technology lens — and also have at their fingertips a bunch of other resources to go out and look at.

So our teams, we’re not rewriting the book on generative AI. There are so many brilliant researchers out there coming up with innovations and reports every day. So in some cases, we’re pointing the reader to those really great resources and just giving them the tax context they need to think about it. That’s been the whole purpose of the group and how we’ve structured it so far.

And what’s really exciting, and why I’m so glad about your question, is that I finally get to say it: By the time our FTA technology conference happens the second week of August out in Tacoma, we will have published the white paper. It’s the only technology conference that serves the tax agency perspective. There are a lot of tax technology conferences out there, but they’re usually practitioner or CPA driven; this one’s hosted by us, for tax agencies and people who care about what tax agencies are doing with technology.

And so that will be a really great venue for the IT and business leadership of the agencies, who will have had the white paper in their hands for a little while by then. When they get together and convene, they can take what they do at our technology conference every year — think about the possibilities, hear what people are doing with technology, hear about successes and new ideas — take it to the next level, and see how this fairly new yet very popular-to-talk-about emerging technology might fit into that.

Emily Hollingsworth: That sounds really interesting, and we certainly look forward as well to the briefing paper. We’ve discussed the working groups, and I was curious to know, is that program still open to FTA member agencies who may be interested in participating? We had also talked about this as well, but what are states learning or taking away from these work groups and sessions?

Ryan Minnick: Absolutely. It’s a great question. So the functional part of the working groups are starting to wrap up. So as we finalize the white paper — at least the first edition of the white paper — the folks that have worked in the three drafting groups that I mentioned before, they’re going to have an opportunity to collaborate with us on that final white paper version that is ultimately published to our members.

And it gets to the question that you asked, which is, what are states already getting out of this? I actually asked this question of our program co-leads when I was moderating the panel on this at the annual meeting, and the answers were really better than I had hoped. You always hope that people who participate in a working group get value out of it even before the product is finished. And some of the things they shared were that it was helpful just to hear how their peers — who, as part of the working group, were probably already the people in their agencies thinking about this more than the average employee — were thinking about this, to hear the level of curiosity, to hear the optimism for the possibilities.

And then even for some of the working group members, they’ve shared with me that this has actually helped make the technology feel a little bit more tangible and a little bit more real. I think the hype curve for generative AI has been really substantial. The underlying technology has been around for a couple of decades, but if you look at — for our conversation today, if we look at ChatGPT launching 20 months ago or so, if you look at that as the inflection point of the generative AI craze, the hype curve was very scary for people in a lot of roles, but in particular in government, because the initial premise was, oh my gosh, this can do everything. It can think. It’s a facsimile of a human, it can do all these wonderful things.

But of course, as time has gone on, we’ve all realized that, like any technology tool, it’s a new version of a tool that’s going to hopefully help improve productivity. It’s going to help our organizations do work better. But if you’ve played with some of these tools recently, they’re not taking anybody’s job away anytime soon; if anything, they’re probably freeing up knowledge workers to do work a little bit more efficiently, a little bit more effectively.

The bigger thing that I think is coming out of these trends, and that I think some of our members saw as they were participating in the working group, is the real opportunity for training in this space. This is a shift that is more like the shift from a typewriter to a word processor than it is a shift from a server in your server closet at the office to a server in the cloud. This is a fundamental change in how you interact with technology, which requires you to just completely rethink everything that you’re doing.

And it’s also one of those inflection points where people entering the workforce now are already receiving substantial exposure to it. And so you’re going to have a point where, much like the typewriter to the word processor, you’re going to have a big chunk of your workforce who you’re going to have to upskill and train on it, because they’re going to experience it for the first time when they’re in the middle stages of their career, but you’re at the same time going to be bringing people in at the early stages of their career who are going to be natively understanding it. And it doesn’t happen a lot with emerging technologies; it’s always a little bit more subtle than, I think, the point that we’re at now.

Emily Hollingsworth: Thank you. How transparent do you believe that state tax agencies should be when informing the public about AI use?

Ryan Minnick: It’s a tricky question to answer, not because I don’t believe in transparency, but because, like I mentioned before, every state’s their own sovereign island, so they have their own regulations and rules that they go through. And so I genuinely believe that our members are as communicative as possible, as they’re able to be, with all their stakeholders.

And the only times it might take a little bit more time to share information are when sharing it might compromise someone’s data security — any data that we want to protect — or when, in the case of fraud fighting, it might give away to criminals how we’re doing something to prevent them from committing crimes. So I think those tend to be the two areas where you see maybe a lag in that sharing, because we want to make sure that we’re not oversharing.

So 10 or 15 years ago, when we started — in a collaboration with the IRS and the tax software industry for the Security Summit — developing better methods to protect individuals’ identities when they were filing their individual income tax returns, a lot of that work now is very public. There’s a lot of that work that’s been made available, and people understand what that group is and how it works.

But at the time, we were putting those frameworks in place, and we were sharing as much as we were able to without compromising the nature of the work, because we prioritized making sure that the criminals couldn’t figure out what we were doing as we were doing it. Unfortunately, the problem with the public internet is criminals — I know in the tax world we use “fraudster” a lot, but criminals of any variety, whether someone is committing fraud individually, targeting someone for identity theft, or operating at scale with data from the dark web. Those are the areas where we have to be careful; in general, we are as transparent as possible.

But when it comes to individual members and what they do, I don’t govern what information they release and at what time. So I think we as an organization commit to keeping our members informed and, to the extent we can, keeping the broader tax community informed of trends as they happen.

Emily Hollingsworth: Thank you. We’ll move on now to the survey that the FTA had released in 2024 with EY. The survey looked at 37 state tax agencies and two city tax departments.

I thought its section on AI, in particular, was pretty interesting. For example — and I’ll read a few of the numbers from the survey — it said that 15 percent of tax administrations are “conducting pilots or are already using AI in core functions.” It also said that 9 percent of respondents said that machine learning AI is being used in core functions, while 12 percent said that they’re conducting pilot programs on machine learning technology — again, distinct from generative AI. So I was curious, is the FTA planning to release a state tax agency survey this year?

Ryan Minnick: So we’ve not released the follow-up survey yet; we’re still working on determining what that survey design looks like. We see great value in continuing this effort. That was the first comprehensive survey we had issued in well over a decade. It was a priority of Sharonne Bonardi, our executive director, who formerly was the deputy comptroller for the state of Maryland. And when she joined FTA, one of the resources she wanted us to have — primarily for our members, but also for the general public — was a “state of the state tax agencies,” helping people understand the priorities and the ways that tax agencies were thinking about emerging issues.

And so that was really the report that you mentioned — which I think is a great read; everybody should go to our website and download it. When it comes to the AI question, we actually drafted, sent out, and published that survey right as generative AI was blowing up. You wish you had a time machine, right? Because I wish we could have structured that question a little bit differently — as we’ve been on a road show for the last year and a half talking about insights from the survey, I get a question about this little section all the time.

And ultimately the nonexciting answer is that the question was so broad that it was incredibly difficult to know exactly how somebody was thinking when they answered it. We asked about AI because at the time, before generative AI came out, AI was seen as advanced machine learning — maybe some algorithmic work, maybe some natural language processing. Things like when you call into a phone tree and say naturally what you’re looking for, and the phone tree tries its best to help you find the right person. That was really the intent of the question when we were writing it, because generative AI had only just started emerging — we hadn’t quite seen the splash from ChatGPT yet.

Of course, now fast-forward, and everybody sees that question and thinks, “All these states have AI in production? Oh my goodness, this is crazy.” No. They don’t — not the AI you’re thinking of, the generative AI that’s on everybody’s brain today. It’s AI in the sense that, in a lot of processing in tax agencies across the country, there’s a lot of machine learning that takes place. It’s programmatic. It’s looking for patterns in terms of fraud. It’s looking for noncompliance — criminal fraud versus just general tax fraud, people who either underreport or don’t answer something accurately or correctly. It’s also looking at things like accuracy; it’s monitoring trends. There are a lot of uses for those technologies.

So I think the more accurate, interesting insight from that survey is exactly what you pointed out, which is the machine learning piece. Even years after a lot of these machine learning trends started to hit, there’s still agencies that are looking at machine learning and how they can use it in different ways. And I’ll say as a technologist supporting tax agencies, I think that’s a great thing, because there’s a lot of really great uses for what I think a lot of people in the public think is old technology, because right now we’ve all moved on to generative AI. We all want Siri to work better on our iPhones, and we’re not really thinking about anything else. But machine learning in some contexts is a better solution to a lot of the problems people want to solve with generative AI. And it’s not only better in terms of safety, but it could be better in terms of performance, in terms of cost.

Generative AI certainly has its place: It’s powered by large language models; it handles language super well. There’s a lot you can do with it. It has a lot of configuration that’s required, a lot of training that’s required so that you get accurate and nonhallucinatory answers. But machine learning, that’s good old-fashioned math. That’s really sophisticated math. And what generative AI can’t do really well is math. There are some exceptions, but for the most part it’s good at language. And most of tax is math. So we actually find ourselves with pretty advanced machine learning capabilities that are available, that have been for a long time, and that a lot of agencies use in production — capabilities that, I suppose because of the hype, unfortunately get lumped under that same AI label.

But I typically, even when I’m talking about these things on stage somewhere, I’ll talk about machine learning, and I’ll talk about generative AI. I almost never use the AI term broadly because AI is artificial intelligence, and so far nothing we’ve developed is artificially intelligent; it just has the appearance of it.

So doing math really fast seems very intelligent. So that’s machine learning. Interpreting language really fast, or drafting a country music song in the style of Garth Brooks or whatever people have asked ChatGPT to do, that seems very artificially intelligent, but in neither of those cases is that term accurate.

I digress a little bit, because I know your question was about our survey. But going through the results and seeing what states were thinking about and how they responded, that was my takeaway: I wish I could go back in time and ask separately about the different emerging forms of the technology, because I think we would’ve gotten a more representative answer. I think we got some good insights, but 12 percent of states are not actively using generative AI in production. So I hope no one reads the report and thinks that.

Emily Hollingsworth: I guess that also leads into my question. So we have the percentage — for example, 12 percent said that they’re piloting machine learning technology. Do we know how many states that translates to?

Ryan Minnick: Yeah. Based on the survey itself and the data team that put it all together, I’d assume that would be a handful — four or five of the states that responded to the survey. But like I said, that question is easy to misinterpret. It could be that four or five states at the time of the survey were actively piloting a new use of the technology, and those same states could already have had that technology in place doing something else at the same time. So that’s something else I think bears explaining.

If you’re curious about how tax agencies work, people listening, we don’t sit still. One of the things that I’ve learned in my 10 years at FTA is that agencies are always looking forward — how they can do their job better, how they can better serve the citizen, how they can take innovations and leverage them to do more with less, because unfortunately in government, I think that’s a lot of the situation we always find ourselves with. So budgets don’t necessarily grow as much as we’d like, or sometimes they get cut. Oftentimes legislatures ask for agencies to do new things, and they don’t always give them money to do that. So everybody’s trying to do the most with what they have.

And so when you get innovative technologies that could be used in a line of business — and tax agencies have numerous lines of business: They have everything from receiving data, you have your traditional tax return processing, you’ve got auditing and collections and customer experience, and you have your legal and tax policy group that has to interpret things that come out of [the] legislature. And there’s a lot of moving parts. And so with this question, the way I interpret the answer is four or five or six states at the time of the survey were looking at machine learning to potentially solve a problem somewhere in their agency, irrespective of wherever they were already using that technology, if they were, to solve problems in a different place.

Emily Hollingsworth: This is a question that I had when we were discussing state transparency. It relates to California’s announcement in April that it had secured a contract with a company and is testing a generative AI solution. Now, this agreement is going to test the solution in a limited testing environment, as I understand it, so it’s not necessarily something that’s going to go out to the public immediately or be used during long stretches like filing season.

But I was curious to know what your thoughts are on this development. California has also been very transparent about its developments in AI and has also done a lot of work to vet and test those solutions. So I was curious to know what your thoughts were on that particular development.

Ryan Minnick: Well, it’s a great question, first of all. California is certainly one of the larger states — by staff, they have the largest number of people working on tax administration across the several agencies that do that work. I think even in terms of technology, they’re a great example of transparency in government.

So you look at their technology modernization plan that they’ve been doing — I think they’re in part two, and I forget what phase of part two they’re in — but they started publicly sharing that modernization strategic plan 10-plus years ago, when they were in the first phase, or the first part. So it doesn’t surprise me at all that they were incredibly transparent about piloting a technology and going about the process of securing a contract to do so.

I think it’s also helpful that they shared the scope of what they were thinking about. I know that oftentimes parts of the procurement process are public, and so people can see what states are doing in different agency areas on a regular basis. But California certainly in this case went one step further. I know you all covered it, I know a couple other media outlets did, and they shared, they said, “We’re looking at the potential for this technology. It’s a very controlled experiment.”

I can’t comment on the project specifically because, first of all, I’m not a part of it. They’re just a member, and we don’t usually get into that level of detail about it. But in concept, what they did was great. They decided to do something, they shared what they were doing, and now I presume — not being familiar with the project specifically — I presume they’re doing that. And then to the extent that they make a decision, they’ll come back and take the next step.

I think that’s, generally speaking, what a lot of agencies look at in terms of a pilot. They want to be very upfront with people who are impacted by whatever they’re testing. Sometimes we’re able to test technologies in a bubble or in a vacuum, so we don’t necessarily have that need to share what we’re testing. For example, say you want to test the potential of a technology that would leverage generative AI, but you’re not going to actually test it with taxpayers — you’re just going to test it internally, and only on a synthetic data set, something that’s not even real data or real information but is manufactured for the purpose of testing, just to understand how the technology works. That’s a super great, super safe experiment, because it’s not touching anything sensitive, and if it’s something that doesn’t work out, then you didn’t go through a big implementation to put it in front of everybody.
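
As a concrete illustration of the “synthetic data set” idea described above — records that look structurally like taxpayer data but contain nothing real — here is a minimal sketch. Every field name and value is invented; a real pilot would mirror the agency’s actual schemas rather than these placeholders.

```python
# Minimal sketch of building a synthetic data set for safe internal testing.
# Every field name and value is fabricated; nothing here is real taxpayer data.
import random
import string

def fake_taxpayer_record(rng: random.Random) -> dict:
    """Generate one synthetic record; all values are manufactured."""
    return {
        "account_id": "".join(rng.choices(string.digits, k=9)),
        "filing_status": rng.choice(["single", "joint", "head_of_household"]),
        "reported_income": round(rng.uniform(15_000, 250_000), 2),
        "quarterly_filer": rng.choice([True, False]),
    }

def build_synthetic_dataset(n: int, seed: int = 42) -> list[dict]:
    rng = random.Random(seed)  # fixed seed keeps test runs reproducible
    return [fake_taxpayer_record(rng) for _ in range(n)]

if __name__ == "__main__":
    for record in build_synthetic_dataset(3):
        print(record)
```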

Separately, if you go down the route like California did, and you want to do a limited trial of something, and you can define that scope and you can let folks know about it in the way that makes sense within that state’s rules and how the agency operates and how the state as a whole has policies on it, I think that’s really great, too.

One of the fun things about technology in general — I guess one of the most fun things about technology — is that, especially these days, really good ideas can come from anywhere. This is true inside of government and outside of it. My tip to nontechnologists, from a technologist, is: Read about technology; understand how people are using it. Think about how they’re using it, and think about how you might be able to. We’re seeing so many innovative uses for not just generative AI but tools in general that are being made available.

And I think part of that is this increasing level of curiosity — people wanting to figure out how to be more effective, how to optimize things a little better, how to deliver on their mission in the way that best serves their stakeholders, whoever their stakeholders may be. In our context, it’s taxpayers and tax administrators, but in somebody else’s, it might be readers, if you’re a journalist. How can you leverage technologies by understanding how other sectors are using them? That’s one of the reasons why our work group interviewed so many private sector professionals. Some of them weren’t even in the tax world — they were just people working on generative AI somewhere else in the private sector. We just wanted to hear what they were thinking about, how they approached a project, or what their perspective was.

Because most of the time, I can talk about tax all day long, and Emily, you can translate it into journalism, whether it’s tax or otherwise. Likewise, you could tell me something that you’re doing in order to maybe reach readers of Tax Notes a little bit easier, a little bit better, get into their inbox a little bit faster. I’m going to listen with my tax administrator ears and think, “Oh, how could I potentially take this really cool thing that you’re doing and help benefit my members or the taxpayers?” or, “How can I help my members benefit their stakeholders?”

You get into the question of, is transparency important? Absolutely. I think data security is also very important, and fighting crime is also really important. So you have to balance everything. But at the end of the day, there’s so much curiosity out there, and people who pay attention to these things can be really helpful — and if you have great ideas, there are some great careers in tax administration; I suppose I should put in a shameless plug. So if you have great ideas for tax administration, share them with tax administrators, because we want to hear them. We want to discharge those duties as faithfully and efficiently and effectively as possible.

Emily Hollingsworth: Absolutely. And Ryan, again, thank you so much for coming on the podcast.

Ryan Minnick: Oh, of course. Happy to do it anytime. It was great talking to you today, Emily.




Global Artificial Intelligence (AI) in Clinical Trials Market


According to DelveInsight’s analysis, the demand for Artificial Intelligence in clinical trials is experiencing strong growth, primarily driven by the rising global prevalence of chronic conditions like diabetes, cardiovascular diseases, respiratory illnesses, and cancer. This growth is further supported by increased investments and funding dedicated to advancing drug discovery and development efforts. Additionally, the growing number of strategic collaborations and partnerships among pharmaceutical, biotechnology, and medical device companies is significantly boosting the adoption of AI-driven solutions in clinical trials. Together, these factors are anticipated to fuel the expansion of the AI in clinical trials market during the forecast period from 2025 to 2032.

DelveInsight’s “Artificial Intelligence (AI) in Clinical Trials Market Insights, Competitive Landscape and Market Forecast-2032” report provides the current and forecast market outlook, forthcoming device innovation, challenges, market drivers and barriers. The report also covers the major emerging products and key Artificial Intelligence (AI) in Clinical Trials companies actively working in the market.

To know more about why North America is leading the market growth in the Artificial Intelligence (AI) in Clinical Trials market, get a snapshot of the report Artificial Intelligence (AI) in Clinical Trials Market Trends

https://www.delveinsight.com/sample-request/ai-in-clinical-trials-market?utm_source=openpr&utm_medium=pressrelease&utm_campaign=gpr

Artificial Intelligence (AI) in Clinical Trials Overview

Artificial Intelligence (AI) in clinical trials refers to the use of advanced machine learning algorithms and data analytics to streamline and improve various aspects of clinical research. AI enhances trial design, patient recruitment, site selection, and data analysis by identifying patterns and predicting outcomes. It enables faster patient matching, optimizes protocol design, reduces trial timelines, and improves data quality and monitoring. AI also helps in real-time adverse event detection and adaptive trial management, making clinical trials more efficient, cost-effective, and patient-centric.
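
As a toy illustration of the patient-matching use case mentioned above, the sketch below scores invented patient records against a hypothetical trial’s inclusion criteria so that likely-eligible candidates surface first. It is a deliberately simple stand-in: production systems rely on far richer clinical data and trained models rather than hand-written rules.

```python
# Toy patient-to-trial matching example. All fields, thresholds, and records
# are invented for illustration and do not come from any real trial protocol.
from dataclasses import dataclass

@dataclass
class Patient:
    age: int
    diagnosis: str
    hba1c: float  # example lab value for a hypothetical diabetes trial

def eligibility_score(p: Patient) -> float:
    """Return a 0-1 score; 1.0 means all illustrative criteria are met."""
    criteria = [
        18 <= p.age <= 75,
        p.diagnosis == "type_2_diabetes",
        7.0 <= p.hba1c <= 10.5,
    ]
    return sum(criteria) / len(criteria)

candidates = [
    Patient(age=54, diagnosis="type_2_diabetes", hba1c=8.2),
    Patient(age=81, diagnosis="type_2_diabetes", hba1c=7.4),
    Patient(age=47, diagnosis="hypertension", hba1c=5.6),
]

# Rank candidates so the most likely eligible patients are reviewed first.
for patient in sorted(candidates, key=eligibility_score, reverse=True):
    print(patient, eligibility_score(patient))
```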

DelveInsight Analysis: The global Artificial Intelligence in clinical trials market size was valued at USD 1,350.79 million in 2024 and is projected to expand at a CAGR of 12.04% during 2025-2032, reaching approximately USD 3,334.47 million by 2032.
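
The projection can be sanity-checked with the standard compound-growth formula, assuming the stated 12.04 percent CAGR compounds annually from the 2024 base over the eight years to 2032:

```python
# Quick sanity check of the projection quoted above, assuming the stated CAGR
# compounds annually from the 2024 base through 2032.
base_2024 = 1350.79   # USD million, per the report
cagr = 0.1204         # 12.04% per year
years = 2032 - 2024

projected_2032 = base_2024 * (1 + cagr) ** years
print(f"Projected 2032 market size: USD {projected_2032:,.2f} million")
# ~USD 3,354 million, within roughly 1% of the report's USD 3,334.47 million.
```

The check lands within about 1 percent of the report’s 2032 figure; the small gap presumably reflects rounding or the exact compounding convention used in the report.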

Artificial Intelligence (AI) in Clinical Trials Market Insights

Geographically, North America is expected to lead the AI in clinical trials market in 2024, driven by several critical factors. The region’s growing burden of chronic diseases, substantial investments in R&D, and the rising volume of clinical trials contribute significantly to this dominance. Additionally, an increasing number of collaborations and partnerships among pharmaceutical and medical device companies, along with the advancement of sophisticated AI solutions, are accelerating market expansion. These developments are enhancing the ability to manage complex clinical trials efficiently, driving the adoption of AI technologies and supporting the market’s growth in North America throughout the forecast period from 2025 to 2032.

To read more about the latest highlights related to Artificial Intelligence (AI) in Clinical Trials, get a snapshot of the key highlights entailed in the Artificial Intelligence (AI) in Clinical Trials report

https://www.delveinsight.com/report-store/ai-in-clinical-trials-market?utm_source=openpr&utm_medium=pressrelease&utm_campaign=gpr

Recent Developments in the Artificial Intelligence (AI) in Clinical Trials Market Report

• In May 2025, Avant Technologies, Inc. (OTCQB: AVAI) and joint venture partner Ainnova Tech, Inc. announced the initiation of acquisition discussions aimed at enhancing their presence in the rapidly growing AI-powered healthcare sector.

• In March 2025, Suvoda introduced Sofia, an AI-driven assistant created to optimize clinical trial management processes. Sofia aids study teams by providing quick access to essential trial data and real-time, intelligent insights. This tool boosts operational efficiency, minimizes manual tasks, and helps teams make faster, data-informed decisions throughout the clinical trial journey.

• In December 2024, ConcertAI and NeoGenomics unveiled CTO-H, an advanced AI-powered software platform designed to enhance research analytics, clinical trial design, and operational efficiency. CTO-H provides an extensive research data ecosystem, offering comprehensive longitudinal patient data, deep biomarker insights, and scalable analytics to support more precise, efficient, and data-driven clinical development processes.

• In June 2024, Lokavant introduced SpectrumTM, the first AI-powered clinical trial feasibility solution aimed at enhancing trial performance throughout the clinical development process. Spectrum enables study teams to forecast, control, and improve trial timelines and expenses in real-time.

• Thus, owing to such developments in the market, rapid growth will be observed in the Artificial Intelligence (AI) in Clinical Trials market during the forecast period.

Key Players in the Artificial Intelligence (AI) in Clinical Trials Market

Some of the key market players operating in the Artificial Intelligence (AI) in Clinical Trials market include- TEMPUS, NetraMark, ConcertAI, AiCure, Medpace, Inc., ICON plc, Charles River Laboratories, Dassault Systèmes, Oracle, Certara, Cytel Inc., Phesi, DeepHealth, Unlearn.ai, Inc., H1, TrialX, Suvoda LLC, Risklick, Lokavant, Research Solutions, and others.

To find out which MedTech key players in the Artificial Intelligence (AI) in Clinical Trials market are set to emerge as trendsetters, explore Key Artificial Intelligence (AI) in Clinical Trials Companies

https://www.delveinsight.com/sample-request/ai-in-clinical-trials-market?utm_source=openpr&utm_medium=pressrelease&utm_campaign=gpr

Analysis on the Artificial Intelligence (AI) in Clinical Trials Market Landscape

To meet the growing needs of clinical trials, leading companies in the AI in Clinical Trials market are creating advanced AI solutions aimed at improving trial efficiency, optimizing patient recruitment, and enhancing clinical trial design at investigator sites. For example, in April 2023, ConcertAI introduced CTO 2.0, a clinical trial optimization platform that utilizes publicly available data and partner insights to deliver comprehensive site and physician-level trial data. This tool provides key operational metrics and site profiles to evaluate trial performance and site capabilities. Additionally, CTO 2.0 assists sponsors in complying with FDA requirements for inclusive trial outcomes, promoting a shift toward community-based trials with more streamlined and patient-centric designs.

As a result of these advancements, the software segment is projected to experience significant growth throughout the forecast period, contributing to the overall expansion of the AI in the clinical trials market.

Scope of the Artificial Intelligence (AI) in Clinical Trials Market Report

• Coverage: Global

• Study Period: 2022-2032

• Artificial Intelligence (AI) in Clinical Trials Market Segmentation By Product Type: Software and Services

• Artificial Intelligence (AI) in Clinical Trials Market Segmentation By Technology Type: Machine Learning (ML), Natural Language Processing (NLP), and Others

• Artificial Intelligence (AI) in Clinical Trials Market Segmentation By Application Type: Clinical Trial Design & Optimization, Patient Identification & Recruitment, Site Identification & Trial Monitoring, and Others

• Artificial Intelligence (AI) in Clinical Trials Market Segmentation By Therapeutic Area: Oncology, Cardiology, Neurology, Infectious Disease, Immunology, and Others

• Artificial Intelligence (AI) in Clinical Trials Market Segmentation By End-User: Pharmaceutical & Biotechnology Companies and Medical Device Companies

• Artificial Intelligence (AI) in Clinical Trials Market Segmentation By Geography: North America, Europe, Asia-Pacific, and Rest of the World

• Key Artificial Intelligence (AI) in Clinical Trials Companies: TEMPUS, NetraMark, ConcertAI, AiCure, Medpace, Inc., ICON plc, Charles River Laboratories, Dassault Systèmes, Oracle, Certara, Cytel Inc., Phesi, DeepHealth, Unlearn.ai, Inc., H1, TrialX, Suvoda LLC, Risklick, Lokavant, Research Solutions, and others

• Porter’s Five Forces Analysis, Product Profiles, Case Studies, KOL’s Views, Analyst’s View

Interested in knowing how the Artificial Intelligence (AI) in Clinical Trials market will grow by 2032? Click to get a snapshot of the Artificial Intelligence (AI) in Clinical Trials Market Analysis

https://www.delveinsight.com/sample-request/ai-in-clinical-trials-market?utm_source=openpr&utm_medium=pressrelease&utm_campaign=gpr

Table of Contents

1 Artificial Intelligence (AI) in Clinical Trials Market Report Introduction

2 Artificial Intelligence (AI) in Clinical Trials Market Executive summary

3 Regulatory and Patent Analysis

4 Artificial Intelligence (AI) in Clinical Trials Market Key Factors Analysis

5 Porter’s Five Forces Analysis

6 COVID-19 Impact Analysis on Artificial Intelligence (AI) in Clinical Trials Market

7 Artificial Intelligence (AI) in Clinical Trials Market Layout

8 Global Company Share Analysis – Key Artificial Intelligence (AI) in Clinical Trials Companies

9 Company and Product Profiles

10 Project Approach

11 Artificial Intelligence (AI) in Clinical Trials Market Drivers

12 Artificial Intelligence (AI) in Clinical Trials Market Barriers

13 About DelveInsight

Latest Reports by DelveInsight

• Percutaneous Arterial Closure Device Market: https://www.delveinsight.com/report-store/vascular-closure-devices-market

• Transdermal Drug Delivery Devices: https://www.delveinsight.com/report-store/transdermal-drug-delivery-devices-market

• Infusion Pumps Market: https://www.delveinsight.com/report-store/infusion-pumps-market

• Acute Radiation Syndrome Market: https://www.delveinsight.com/report-store/acute-radiation-syndrome-pipeline-insight

• Human Papillomavirus (HPV) Market: https://www.delveinsight.com/report-store/human-papillomavirus-hpv-market

• Blood Gas And Electrolyte Analyzers Market: https://www.delveinsight.com/report-store/blood-gas-and-electrolyte-analyzers-market

Contact Us

Gaurav Bora

info@delveinsight.com

+14699457679

www.delveinsight.com


About DelveInsight

DelveInsight is a leading business consulting and market research firm focused exclusively on life sciences. It supports pharma companies by providing comprehensive, end-to-end solutions to improve their performance.

Get hassle-free access to all the healthcare and pharma market research reports through our subscription-based platform PharmDelve.






What Is Artificial Intelligence? Explained Simply With Real-Life Examples – The Times of India









Cal State LA secures funding for two artificial intelligence projects from CSU



Cal State LA has won funding for two faculty-led artificial intelligence projects through the California State University’s (CSU) Artificial Intelligence Educational Innovations Challenge (AIEIC).

The CSU launched the initiative to ensure that faculty from its 23 campuses are key drivers of innovative AI adoption and deployment across the system. In April, the AIEIC invited faculty to develop innovative instructional strategies that leverage AI tools.

The response was overwhelming, with more than 400 proposals submitted by over 750 faculty members across the state. The Chancellor’s Office will award a total of $3 million to fund the 63 winning proposals, which were chosen for their potential to enable transformative teaching methods, foster groundbreaking research, and address key concerns about AI adoption within academia.

“CSU faculty and staff aren’t just adopting AI—they are reimagining what it means to teach, learn, and prepare students for an AI-infused world,” said Nathan Evans, CSU deputy vice chancellor of Academic and Student Affairs and chief academic officer. “The number of funded projects underscores the CSU’s strong commitment to innovation and academic excellence. These initiatives will explore and demonstrate effective AI integration in student learning, with findings shared systemwide to maximize impact. Our goal is to prepare students to engage with AI strategically, ethically, and successfully in California’s fast-changing workforce.”

Cal State LA’s winning projects are titled “Teaching with Integrity in the Age of AI” and “AI-Enhanced STEM Supplemental Instruction Workshops.”

For “Teaching with Integrity in the Age of AI,” the university’s Center for Effective Teaching and Learning will form a Faculty Learning Community (FLC) to address faculty concerns about AI and academic integrity. From September 2025 to April 2026, the FLC will support eight to 15 cross-disciplinary faculty members in developing AI-informed, ethics-focused pedagogy. Participants will explore ways to minimize AI-facilitated cheating, apply ethical decision-making frameworks, and create assignments aligned with AI literacy standards.

The “AI-Enhanced STEM Supplemental Instruction Workshops” project aims to expand and improve student success in challenging first-year Science, Technology, Engineering, and Math courses by integrating generative AI tools, specifically ChatGPT, into Supplemental Instruction workshops. By leveraging AI, the project addresses the limitations of collaborative learning environments, providing personalized, real-time feedback and guidance.

The AIEIC is a key component of the CSU’s broader AI Strategy, which was launched in February 2025 to establish the CSU as the first AI-empowered university system in the nation. It was designed with three goals: to encourage faculty to explore AI literacies and competencies, focusing on how to help students build a fluent relationship with the technologies; to address the need for meaningful engagement with AI, emphasizing strategies that ensure students actively participate in learning alongside AI; and to examine the ethics of AI use in higher education, promoting approaches that embed academic integrity.

Awarded projects span a broad range of academic areas, including business, engineering, ethnic studies, history, health sciences, teacher preparation, scholarly writing, journalism, and theatre arts. Several projects are collaborative efforts across multiple disciplines or focus on faculty development—equipping instructors with the tools to navigate course design, policy development, and classroom practices in an AI-enabled environment. 


