
AI Research

How US States Are Shaping AI Policy Amid Federal Debate and Industry Pushback



Audio of this conversation is available via your favorite podcast service.

In the United States, state legislatures are key players in shaping artificial intelligence policy, as lawmakers attempt to navigate a thicket of politics surrounding complex issues ranging from AI safety, deepfakes, and algorithmic discrimination to workplace automation and government use of AI. The decision by the US Senate to exclude a moratorium on the enforcement of state AI laws from the budget reconciliation package passed by Congress and signed by President Donald Trump over the July 4 weekend leaves the door open for more significant state-level AI policymaking.

To take stock of where things stand on state AI policymaking, I spoke to two experts:

What follows is a lightly edited transcript.

Cristiano Lima-Strong:

Scott, Hayley, thank you both so much for joining us. We’re speaking just over a week after the Senate ultimately opted to keep a moratorium on state AI rules out of the reconciliation package that was signed over the 4th of July weekend. Now, some version of that moratorium could certainly come back, and there was a lot of debate about which existing state laws it would potentially have blocked.

But I wanted to put that aside for a bit and I thought this would be a good moment to check in on what states have been up to when it comes to putting some of these rules into place, especially as more and more of them take effect and are being implemented, and then also look ahead a little bit to what trends we could see on the horizon. Scott, I wanted to start with you. You’ve been publishing these reports annually. Looking at the state of tech policymaking, including particularly around AI, what have been some of the top line trends that we’ve seen in terms of what’s actually on the books when it comes to states setting rules around AI?

Scott Babwah Brennen:

Most of the attention on state-level AI regulation has focused on a few of the biggest, blockbuster bills. But meanwhile, states have been passing a whole raft of smaller, narrower sectoral bills that cover a single industry or a single issue. Last year we saw many bills that did things like establish AI commissions or appropriate money to universities for AI programs.

We saw a whole lot of bills doing things like requiring labels on political ads that contain deceptive or misleading generative AI. We saw some efforts to ban things like NCII, non-consensual intimate imagery, or CSAM, child sexual abuse material, that is generated by AI. These are the sorts of things states have actually focused on most. That said, we’ve also seen some of the larger, more comprehensive efforts, most notably the Colorado bill enacted last year, which has been called comprehensive and which focused on algorithmic discrimination across a bunch of different sectors. And then just last month the New York Legislature passed the RAISE Act, which would regulate frontier models. As of today, it has not been signed or vetoed by the governor of New York; we’re waiting to see what she’s going to do.

Cristiano Lima-Strong:

Yeah. I want to circle back on that one, because it’s a biggie and it gets into some of the higher-profile battles we’ve seen, especially last year in California. But Hayley, to start off: what are some of your top-line trends, the biggest things you’ve been seeing in terms of what states are actually getting passed and signed into law?

Hayley Tsukayama:

Yeah, definitely underscoring a lot of what Scott just said. We’re seeing a lot on deepfakes, on deceptive media, on that kind of stuff. I think we’ve also seen an uptick in bills around workers and the use of AI and automated decision-making in the workplace, which I think is of really high concern to a lot of people, and it’s been really interesting to see the different groups that have been activating around those bills.

It’s not a community you always see on tech bills, but it has been very interesting. Adjacent to AI, though I kind of consider them together, we’ve also seen a lot of pricing-algorithm bills show up, really focusing on the ways AI can affect people’s pocketbooks, what they call kitchen-table issues. So there have been a lot of interesting trends pulling people into this conversation around AI, which obviously affects all of us, and we’re seeing bills that connect threads in ways I don’t know that I’ve seen before this year.

Cristiano Lima-Strong:

Hayley, I know you’ve been tracking a lot of legislation around AI in government and government’s own use of it. I remember a couple of years ago, when discussion was just ramping up at the federal level about what Congress could potentially do on this, some of the response we heard, I remember Senator Gary Peters saying, “Well, if we’re going to try to set rules for AI, we should set some guardrails for our own use first.” Do you see a lot of states taking a similar approach, and what do you make of that?

Hayley Tsukayama:

Yeah, we’ve certainly seen a lot of what I think of as public-sector AI and automated decision-making bills come up, with a couple of good examples across states. Overall, I think it’s a really important area to focus on. When we’re thinking specifically about AI and automated decision-making systems at the government level, it often gets framed as procurement, which is one of the most boring words people hear in a discussion.

But when you’re talking about something that makes a decision, it’s not like buying a printer. As we kind of say here, procurement, when you’re thinking about AI, is rulemaking. You’re really thinking about the process by which people get flagged for extra review or, in some cases, actually have decisions made about them by AI that may or may not be reviewed by a person. So we’re seeing a lot of those conversations pop up, and I think that’s a really good thing. And honestly, I think government should be held to a high standard when it comes to reporting when it uses these systems, thinking about how it collects the information that goes into those systems, and making sure they’re equitable.

Cristiano Lima-Strong:

Yeah. Scott, you talked about some of the working-group bills. What’s jumped out to you in terms of AI in government and the activity we’ve seen at the state level on that?

Scott Babwah Brennen:

To me, the headlines of the past six months have been less about what has passed and more about some of the action behind the scenes and some of the sorting we’ve seen. Most importantly, I think this year we’re starting to see a partisan sorting in the kinds of bills being introduced and championed. In previous years we saw a lot more bipartisan collaboration, a lot more bills with supporters from both parties behind them.

This year we’re increasingly seeing less of that. We’re seeing battle lines being drawn, and I actually think AI in government has been drawn into that a little bit. I think the best example is what happened in Texas, where Rep. Giovanni Capriglione (R-TX98) introduced basically a version of the multi-state working group bill on algorithmic discrimination, more or less aligned with what Colorado did. After some pushback, largely from members of his own party and many civil society groups, he rewrote that bill, kept the same name, and made it focus basically just on AI in government, taking out the provisions about algorithmic discrimination across all sorts of critical decision-making. So we’ve seen this retreat from some of the more far-reaching efforts back to these more government-oriented ones.

Cristiano Lima-Strong:

What are some other examples you’ve seen of the AI debate becoming a little more partisan in terms of the bill states are pursuing?

Scott Babwah Brennen:

I think the debate around algorithmic discrimination is probably the key there. I’m not sure if it’s because the word discrimination pings concerns about woke ideology on the right, but whereas algorithmic discrimination bills had received some bipartisan support, that has gone away. And so we’ve seen pretty much the collapse of these multi-state working group bills, at least outside of blue states.

And even in blue states, only some of them have actually passed so far. I think some of these consumer protection-oriented laws have fallen victim to this as well: we continue to see interest in consumer protection provisions in blue states, while red states, though there is some continued interest in things like deepfake disclosure, seem far more focused on government use of AI, on AI commissions, that sort of thing.

Hayley Tsukayama:

And just to respond, to be clear, I think it should be a both/and, right? I care a lot about government use of AI, and I obviously also care a lot about private use of AI. But I think Scott’s absolutely right, we are seeing more polarization around the issue. Certainly for a while it looked like everybody was going to be interested in these bills and concerned about the issues that were coming up. And I agree, I do wonder whether it’s that discrimination language that raised a flag.

Cristiano Lima-Strong:

Certainly you could see that pinging some of the DEI concerns we’ve heard in Washington, so it makes sense that we’d see some of that at the state level as well. We’ve talked a little about the Colorado legislation, which most view as the first comprehensive AI law. There’s been some debate about whether we’re likely to see more efforts like that, or whether states will continue to pursue a more sectoral or piecemeal approach. Is there much sign of other states trying to jump on the comprehensive bandwagon? Hayley, any thoughts on that?

Hayley Tsukayama:

I don’t own an accurate crystal ball, so it’s a little hard to say. There’s certainly interest, but I’ve heard some hesitation from folks because Colorado’s law is under some threat of rollback, I guess is what I would say. When it was signed, the governor said, “Well, I want you to go back and look at this.” There were attempts this year to make amendments to that bill that I think would have made it less protective for consumers, but the clock kind of ran out on that.

There was a question of whether, if Colorado calls a special session, it could be revisited soon. So I get the sense that folks are waiting to see what happens with that bill. But I do think the interest in the issue is still there, so it’s hard for me to say, yes, it will be in these states and they will address it this way. As I said, the interest is there, but there’s a little bit of wariness, I think, about what’s going to happen to some of these state bills and also, frankly, about what the federal government is going to do.

Scott Babwah Brennen:

That sounds exactly right. I’ll just add that in Virginia, the legislature actually passed one of these bills and it was vetoed by the governor early in the year. Then you add the odd politics around the Multistate Working Group and its host organization, FPF, where there was a well-publicized breakup after, I think it was Senator Ted Cruz (R-TX), called out the Multistate Working Group and FPF. It’s created an odd, uncertain situation. And when you add the fact that the bill that got the farthest this year was vetoed, it doesn’t seem super promising. But as Hayley said, there is still a lot of interest in these bills in some states.

Cristiano Lima-Strong:

We’ve mentioned a couple of different instances of vetoes, and of course there was SB 1047 in California last year, which would have required AI companies to conduct safety tests before rolling out some of their most advanced models. That passed but was then vetoed by the governor, and there was a lot of discussion at the time about whether it would have a chilling effect on states pursuing more aggressive legislation. Now there’s the question of how the moratorium debate will unfold. At the same time, as Scott alluded to, we have seen New York take up a similar proposal in the RAISE Act. So this is my long-winded way of asking: does it seem like states are letting up, or are they forging ahead in the face of some of these political dynamics? What do you think, Scott?

Scott Babwah Brennen:

Yeah, they absolutely do seem to be forging ahead. It’s funny, I just hosted a panel on the RAISE Act yesterday, so it’s very top of mind. But you’re right: after SB 1047 was vetoed last year, there was a lot of uncertainty about what would happen. My understanding is that Assemblymember Bores (D-NY73) and Senator Gounardes (D-NY26) really tried to calibrate the RAISE Act to address some of the biggest concerns with 1047, so it did not include some of the more unpopular provisions. The RAISE Act still faces a lot of pushback from industry and industry groups, but what’s really interesting is that since the RAISE Act passed, we’ve seen Scott Wiener, the sponsor of 1047, revise his current bill, SB 53, to include some of these provisions from the RAISE Act. He doesn’t go all the way back to 1047, but it’s a step closer.

And he of course didn’t quite say in the press release that this was because of the RAISE Act; he actually said it was because of the findings of the working group that released a report a few weeks or a month ago. We’ve also seen another similar bill introduced, in Michigan I believe, that would impose a similar set of requirements. So after the success of the RAISE Act, we are seeing renewed interest in these frontier-model bills, though I’ll just say none of them has actually been fully enacted. This could all look very different if Hochul vetoes the bill and Scott Wiener’s bill doesn’t go anywhere.

Cristiano Lima-Strong:

And these are, of course, some of the most aggressive, sweeping, or protective measures in the country, which I think is why a lot of people look at them as litmus tests for where legislators are on this. Hayley, what are your thoughts on the opposition those bills have incurred, and on the specter of Congress passing a moratorium down the line? Do you see that having an impact at the state level? Do you think the trend of legislators forging ahead is going to keep up?

Hayley Tsukayama:

Yeah, I certainly think we’re going to continue to see legislators forging ahead. To me, what’s important about making this a multi-state conversation, where most of the action is happening in the states, is that you see people take these big swings, and then you get to feel out where the traps are, where the opposition is going to come from, and what the talking points are going to be. And then you see another state adjust its bill accordingly.

So I think that’s really important, and we’re going to continue to see some of that fine-tuning across states, especially as legislators pay attention to each other. The moratorium conversation is certainly going to be influential. People are always going to be thinking about preemption and whether federal laws will override state laws. But in the formulation we just saw, “we’re going to override all your laws and replace them with nothing,” there was huge pushback from state legislators. They know these issues matter to their constituents, and I don’t think that kind of formulation is going to be particularly popular again.

So I do think Congress is going to have to come to the table with a proposal. I’ve done a lot of privacy work, so I think about states a lot. I used to be a reporter for the Washington Post, and in 2010 I remember people saying, oh, this is definitely the year we’re going to get a federal privacy bill. So I don’t know that Congress is going to come up with something in a timely way that’s actually going to stop these state laws from emerging. But I do think there will be a conversation between states and the federal government about what a proposal could look like. Maybe states will try to carve off smaller pieces; maybe they won’t be quite as ambitious with broad bills. It’s hard to say, and it varies state by state quite a bit, depending on which party is in power in which state and how far they’re willing to push certain pieces of legislation.

Cristiano Lima-Strong:

This is a good point to disclose that we are both former Washington Post reporters, although we did not overlap. But on your point, as someone who has also covered privacy for a long time, it’s notable to me that the specter of federal preemption on privacy has loomed for years, and in that time states have passed dozens of privacy laws. Maybe that’s something to keep in mind as we watch how this moratorium debate plays out.

I wanted to hit on a couple more buckets of AI bills we’ve seen at the state level. There’s of course been a lot of activity around child online safety, and to some extent states just got a green light on age verification with the recent Supreme Court ruling in Free Speech Coalition v. Paxton. But we’ve also seen a trickle of bills around chatbot safety and the algorithmic amplification of content to kids. How are lawmakers addressing concerns about AI and its potential implications for kids? Scott, do you want to take that?

Scott Babwah Brennen:

Yeah, kids’ online safety has, in the past few years, become the second most present issue in the minds of a lot of lawmakers, along with AI, so it’s no wonder we’re seeing them intersect here. I think it’s really important to distinguish between the bills that have been introduced and the laws that have been passed. Especially this year, folks have made a lot of the fact that we’ve now seen more than a thousand bills introduced, but in reality only a couple dozen have passed, and I think a lot of the child safety stuff falls into the introduced-but-not-passed bucket, especially this year.

Absolutely, there’s a huge amount of interest across states in different ways to protect kids, but I’m not sure much has actually passed squarely on the kids-and-AI front. We’ve seen passage of things like the Age Appropriate Design Act in one or two states this year, I think, but beyond that I’m not sure. Maybe Hayley has a better sense of what’s actually been enacted so far this year.

Hayley Tsukayama:

One passed, is my recollection, and I do feel like there’s another, but it’s just not coming to mind right now.

Cristiano Lima-Strong:

A lot of states, a lot of bills.

Hayley Tsukayama:

I know, I’m practically looking through the spreadsheet in front of me, but I’m not going to get there in time. I should say, and this is a little tangential to the conversation, that we have a lot of concerns about the speech and censorship implications of many of these bills. I do think, though, that they are becoming a huge part of the AI conversation. And I do expect, again, don’t bet on my predictions, but I do expect that with that case resolved, if people were waiting to see how it came out and sitting on legislation in the meantime, we might see a spike again next year.

Scott Babwah Brennen:

Yeah, I’ll just say I think it’s an open question how far that case extends. The Supreme Court gave a green light to age verification for pornography online, but it’s unclear for now what that means for other types of age verification. How it translates to age verification for social media is the next big question. We saw some bills pass last year that try to impose age verification for social media; I believe they’ve all been enjoined by the courts for now. So there are a lot of open questions.

Cristiano Lima-Strong:

We’ve touched on AI-generated non-consensual intimate imagery. At the federal level, of course, there was the passage and signing of the Take It Down Act, which criminalizes the distribution of this material, but we’ve seen states take up a whole host of bills around digital replicas and different types of deepfakes. Scott, talk us through a little of what you’ve seen as far as states grappling with these digital replicas, forgeries, and uses of AI to generate that type of material.

Scott Babwah Brennen:

Yeah, you’re right, this big bucket encompasses a lot of different approaches. The NCII approach has maybe been the most common across states, although it probably won’t continue now that we have federal law on it. We’ve also seen a number of bills, more last year I think, about publicity rights. A really important one is the ELVIS Act, passed last year in Tennessee, which was probably one of the main reasons the moratorium didn’t go through: Senator Blackburn didn’t want to see it preempted. That law set restrictions on the use of artists’ likenesses without their explicit permission, and I think Illinois and California have passed similar laws. So we’ve seen the publicity side of things, and then, as I alluded to before, deepfakes in elections have been one of the other most common threads: requiring labels on ads that contain deceptive deepfakes or, in a couple of cases, outright prohibiting that content.

Cristiano Lima-Strong:

Another bucket of bills, in many cases passed, deals with various forms of transparency in AI. Hayley, you were talking a little earlier about transparency in AI hiring. What are some of the different types of transparency-focused bills that we’ve seen signed, or that lawmakers have tried to get signed?

Hayley Tsukayama:

Yeah, we’ve certainly seen bills trying to get at issues of provenance: labeling whether something is AI-generated or not, or watermarking, which is another term that comes up. Just making sure folks are labeling AI-generated material. Again, I should say we have some speech concerns there as well, in terms of how a labeling mandate might chill expression. But it’s certainly a popular topic that we’ve seen pop up, and at least in California these bills are also under litigation right now. So provenance and watermarking is definitely another area.

Cristiano Lima-Strong:

There’s a whole host of other directions we could go with AI; we haven’t touched on copyright or other issues. But as we wind down the conversation, I wanted to hear from both of you about what you’re looking out for, now that states, at least for the moment, are not facing the threat of a moratorium. Will the trends we’ve discussed continue or extend? What types of bills are you most interested in following at the state level going forward? Scott, do you want to start?

Scott Babwah Brennen:

Sure. This is influenced by the pieces I’m working on. I’m writing a piece on consumer protection laws and AI, and one type of bill we’ve seen a lot of this year has to do with insurance, medical insurance in particular: basically prohibiting AI in utilization review for medical insurance, or prohibiting AI as the sole determinant in utilization review. I have a lot to say about the fact that this may or may not already be covered in a lot of state law, but that’s beside the point. As far as I understand, none of these bills has actually passed, or maybe only a couple have. So I’m really curious to see how that plays out.

The other big area I’m looking at is data centers, again because I’m doing some work on them. Last year and this year we’ve seen a ton of bills that try to, I’ll just say, offer a different regulatory approach to data centers than we’ve seen in the past. Going back 10 or 15 years, the main way states have dealt with data centers is to throw tax incentives at them, especially sales-and-use tax breaks. In the past couple of years, we’ve seen legislators introduce bills that do things like require audits of energy use, the impact on grids, water use, or the value states are actually getting in return for these tax breaks. Again, my understanding is that none of these bills has actually passed, but they keep being introduced as data centers become more and more central to the AI policy debate. I’m really interested to see what happens with some of these bills.

Cristiano Lima-Strong:

How about you Hayley? What are you looking out for?

Hayley Tsukayama:

I guess this is just because I take a multi-state perspective. The companies most likely to be regulated by these bills have run a very effective lobbying campaign, in different states and at the federal level. It seems fairly clear from what they’re saying that they want as little regulation as possible, so I’m going to be on the lookout for bills that sound good on paper but have definitions or provisions or exemptions that really cut a lot of people out of being covered.

It’s not types of bills, necessarily, but I am going to be looking pretty closely at what language gets introduced and whether those are bills with industry backing, where they’re saying, “Oh, this is regulation we can live with.” Often, as we’ve seen in the privacy fights, when industry advances those kinds of bills, they probably aren’t going to change business practices very much. We’ve had a lot of big ideas thrown at the wall as everyone figures this out, so I’m really going to be looking into those definitions and making sure we’re not seeing language artfully crafted by the companies that would be regulated by these bills.

Cristiano Lima-Strong:

Well, I think the fact that we’ve even been talking about a moratorium for the past several months is also an indication that some of that lobbying against regulation has been very successful.

Well, we’ve covered a lot of ground and there’s a lot more that we could be talking about, but we’ll all be tracking this in the months to come, so I’m sure we’ll chat again soon. Thank you both so much for joining me and talking about all this today.

Hayley Tsukayama:

Thanks for having us.

Scott Babwah Brennen:

Yeah, thank you so much.




AI Research

Causaly Introduces First Agentic AI Platform Built for Life Sciences Research and Development



Specialized AI agents automate research workflows and accelerate drug discovery and development with transparent, evidence-backed insights

LONDON, Sept. 16, 2025 /PRNewswire/ — Causaly today introduced Causaly Agentic Research, an agentic AI breakthrough that delivers the transparency and scientific rigor that life sciences research and development demands. First-of-their-kind, specialized AI agents access, analyze, and synthesize comprehensive internal and external biomedical knowledge and competitive intelligence. Scientists can now automate complex tasks and workflows to scale R&D operations, discover novel insights, and drive faster decisions with confidence, precision, and clarity.

Industry-specific scientific AI agents

Causaly Agentic Research builds on Causaly Deep Research with a conversational interface that lets users interact directly with Causaly AI research agents. Unlike legacy literature review tools and general-purpose AI tools, Causaly Agentic Research uses industry-specific AI agents built for life sciences R&D and securely combines internal and external data to create a single source of truth for research. Causaly AI agents complete multi-step tasks across drug discovery and development, from generating and testing hypotheses to producing structured, transparent results always backed by evidence.

“Agentic AI fundamentally changes how life sciences conducts research,” said Yiannis Kiachopoulos, co-founder and CEO of Causaly. “Causaly Agentic Research emulates the scientific process, automatically analyzing data, finding biological relationships, and reasoning through problems. AI agents work like digital assistants, eliminating manual tasks and dependencies on other teams, so scientists can access more diverse evidence sources, de-risk decision-making, and focus on higher-value work.”

Solving critical research challenges

Research and development teams need access to vast amounts of biomedical data, but manual and siloed processes slow research and create long cycle times for getting treatments to market. Scientists spend weeks analyzing narrow slices of data while critical insights remain hidden. Human biases influence decisions, and the volume of scientific information overwhelms traditional research approaches.

Causaly addresses these challenges as the first agentic AI platform for scientists that combines extensive biomedical information with competitive intelligence and proprietary datasets. With a single, intelligent interface for scientific discovery that fits within scientists’ existing workflows, research and development teams can eliminate silos, improve productivity, and accelerate scientific ideas to market.

Comprehensive agentic AI research platform

As part of the Causaly platform, Causaly Agentic Research provides scientists multiple AI agents that collaborate to:

  • Conduct complex analysis and provide answers that move research forward
  • Verify quality and accuracy to dramatically reduce time-to-discovery
  • Continuously scan the scientific landscape to surface critical signals and emerging evidence in real time
  • Deliver fully traceable insights that help teams make confident, evidence-backed decisions while maintaining scientific rigor for regulatory approval
  • Connect seamlessly with internal systems, public applications, data sources, and even other AI agents, unifying scientific discovery
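As a purely illustrative sketch of the pattern the list above describes, the snippet below shows one agent proposing findings and a second agent verifying that each finding is backed by traceable evidence before it reaches the user. All names and logic are hypothetical assumptions, not Causaly's actual implementation or API.

```python
# Illustrative multi-agent pipeline: an analysis agent proposes findings,
# a verification agent keeps only evidence-backed ones. All names are
# hypothetical and do not reflect Causaly's actual system.
from dataclasses import dataclass, field

@dataclass
class Finding:
    claim: str
    evidence: list = field(default_factory=list)  # source IDs backing the claim

def analysis_agent(question: str) -> list:
    # Stand-in for an LLM-backed agent querying biomedical sources.
    return [
        Finding("Gene X upregulates pathway Y", ["PMID:111", "PMID:222"]),
        Finding("Compound Z inhibits target Q"),  # no supporting evidence found
    ]

def verification_agent(findings: list) -> list:
    # Keep only claims that carry at least one traceable source,
    # mirroring the "fully traceable insights" requirement above.
    return [f for f in findings if f.evidence]

def run_research(question: str) -> list:
    return verification_agent(analysis_agent(question))

for f in run_research("What modulates pathway Y?"):
    print(f.claim, "->", f.evidence)
```

The key design point the sketch captures is that verification sits between analysis and the user, so unsupported claims are filtered out rather than surfaced.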

Availability

Causaly Agentic Research will be available in October 2025, with a conversational interface and foundational AI agents to accelerate drug discovery and development. Additional specialized AI agents are planned for availability by the end of the year.

Explore how Causaly Agentic Research can redefine your R&D workflows and bring the future of drug development to your organization at causaly.com/products/agentic-research.

About Causaly

Causaly is a leader in AI for the life sciences industry. Leading biopharmaceutical companies use the Causaly AI platform to find, visualize, and interpret biomedical knowledge and automate critical research workflows. To learn how Causaly is accelerating drug discovery through transformative AI technologies and getting critical treatments to patients faster, visit www.causaly.com.






Josh Bersin Company Research Reveals How Talent Acquisition Is Being Revolutionized by AI

  • Jobs aren’t disappearing. Through AI, talent acquisition is fast evolving from hand-crafted interviewing and recruiting to a data-driven model that ensures the right talent is hired at the right time, for the right role, with unmatched accuracy

  • Traditional recruiting isn’t working: in 2024, only 17% of applicants received interviews and 60% abandoned slow application processes

  • AI drives 2–3x faster hiring, stronger candidate quality, and sharper targeting, and delivered 95% candidate satisfaction at Foundever from 200,000+ applicants in just six months

OAKLAND, Calif., Sept. 16, 2025 /PRNewswire/ — The Josh Bersin Company, the world’s most trusted HR advisory firm, today released new research showing that jobs aren’t disappearing—they’re being matched with greater intelligence. The research, produced in collaboration with AMS, reveals major advances in talent acquisition (TA) driven by AI-enabled technology, which are yielding 2–3x faster time to hire, stronger candidate-role matches, and unprecedented precision in sourcing.


The global market for recruiting, hiring, and staffing exceeds $850 billion and is growing at 13% per year despite the economic slowdown, though signs of strain are evident. As a result, TA leaders are turning to AI to adapt, as AI transforms jobs and creates demand for new roles, new skills, and AI expertise.

According to the research and advisory firm, even without AI disruption, over 20% of employees consider changing jobs each year, driving demand for a new wave of high-precision, AI-powered tools for assessment, interviewing, selection, and hiring. Companies joining this AI revolution are hiring 2–3x faster, with greater accuracy and efficiency than their peers, despite the job market slowdown.

According to the report, The Talent Acquisition Revolution: How AI is Transforming Recruiting, the TA automation revolution is delivering benefits across the hiring ecosystem: job seekers experience faster recognition and better fit, while employers gain accurate, real-time, and highly scalable recruitment.

This is against a backdrop of failure in current hiring. In 2024, fewer than one in five applicants (17%) made it to the interview stage, and 60% of job seekers abandoned the application process altogether because hiring portals were too slow.

The research shows how organizations are already realizing benefits such as lower hiring costs, stronger internal mobility, and higher productivity. AI-empowered TA teams are also streamlining operations by shifting large portions of manual, admin-heavy work to specialized vendors.

Early successes are striking: after deploying conversational AI, a major U.S. resort operator increased scheduled interviews by 423% in 12 months while reducing candidate drop-off by 85%. A new AI TA process at Foundever hit 95% candidate satisfaction rating from over 200,000 applicants in just six months, while a leading global automotive company reported $2 million in savings in its first year using AI-powered interview scheduling.

The research highlights how AI is helping TA overcome frustrations like vague job descriptions, inconsistent interviews, and labor-intensive processes.

HR teams now use AI to automate tasks from profiling and sourcing to screening, scheduling, interviewing, and negotiating offers. Companies showcasing these successes include major international brands spanning the hospitality, food, healthcare, and technology sectors, such as Fontainebleau Las Vegas and Compass Group.

Some organizations are using AI-powered recruiting assistants to manage routine communication and negotiations. Recruiters set parameters for salary, benefits, and start dates, while the AI answers questions, updates offers, and proposes alternatives in real time. By automating tasks and personalizing the experience, AI shortens turnaround times and frees recruiters to focus on strategy.
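A minimal sketch of the guardrail pattern described above, where a human recruiter sets the bounds and the AI assistant may only propose offers inside them. The function name, figures, and logic are hypothetical illustrations, not any vendor's actual API.

```python
# Hypothetical guardrails for an AI offer assistant: the agent adjusts an
# offer only within recruiter-approved salary bounds. Illustrative only.

def counter_offer(candidate_ask: int, current_offer: int,
                  salary_min: int, salary_max: int) -> int:
    """Propose a counteroffer without exceeding recruiter-approved bounds."""
    proposal = min(candidate_ask, salary_max)  # cap at the approved maximum
    proposal = max(proposal, salary_min)       # never undercut the floor
    # Only move upward from the offer already on the table.
    return max(proposal, current_offer)

print(counter_offer(95_000, 85_000, 80_000, 90_000))  # capped at 90000
```

The design choice worth noting is that the bounds are inputs set by a human, so the automation can personalize and speed up negotiation without ever committing the employer beyond approved parameters.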

The vendor market is transforming, with SAP acquiring SmartRecruiters and Workday acquiring Paradox to stay competitive. Innovations include AI agents enabling automation of the full hiring journey (Eightfold AI, Maki People), conversational AI for candidate engagement (Glider AI, Paradox, Radancy), AI-driven assessments (CodeSignal, HireVue, TaTiO), and platforms that benchmark roles against labor market data (Draup, Galileo, Lightcast, Reejig, SeekOut, TechWolf).

Report author and Josh Bersin Company Industry Analyst & Senior Research Director, Stella Ioannidou, says:

“For decades, TA has been viewed as a cost center, focused on using applicant tracking systems to manage incoming candidates and relying on recruiters to screen and interview. This process was slow, expensive, and delivered a poor experience for job seekers.

“Today, by leveraging AI-powered platforms and integrated data, TA teams can identify, attract, and engage the best talent in the market with unparalleled precision, often before competitors even know those candidates are available—resulting in a proactive, data-driven approach that enables organizations to respond quickly to changing business needs, seize new opportunities, and fuel growth from within.”

Chief Executive Officer at AMS, Gordon Stuart, says:

“This research paper captures the urgency and scale of the AI revolution which is transforming TA. It doesn’t just reflect what’s happening now, it helps us understand what’s next. Candidates want speed, clarity, and connection. Recruiters need tools that free them to focus on strategy and relationships. And businesses must rethink how talent acquisition fits into their broader growth agenda. AI is not just automating tasks, it’s redefining roles, workflows, and expectations across the board. The cumulative power of people, process, data and technology is heralding a new era for talent acquisition.”

Global industry analyst and Josh Bersin Company CEO, Josh Bersin, says:

“AI is expected to provide CHROs with a data-rich view of talent comparable to an integrated supply chain, enabling them to track and analyze every detail of each hire with the same precision a luxury Swiss watchmaker applies to every component and its origin.

“This transition transforms HR from handcrafted processes to precision, dynamic hiring—something once unattainable without AI.

“The implications are significant for all stakeholders, particularly CEOs. Organizations that lacked a precision hiring process have historically faced millions in lost revenue, high turnover, and misaligned talent—but those days are now coming to an end.”

This new research follows previous Josh Bersin Company findings demonstrating how AI is transforming another core HR function: Learning & Development.

The Josh Bersin Company’s Galileo Suite delivers strategic guidance and hands-on tools through its AI Agent for HR, as well as hyper-personalized learning through the exclusive Galileo Learn certificate course, AI-First TA Transformation. The full report is available to download here.

About AMS
AMS is a leading global provider of talent acquisition and consulting services, providing unrivalled experience, driven by technology and underpinned by innovation. We help our clients to attract, engage and retain the talent they need for business success.

We have three core areas of service: acquisition, advisory and digital, mainly delivered as an outsourced model, and spanning our clients’ permanent and contingent workforce, and internal mobility requirements.

Our dedicated teams of experts are deeply embedded with our global blue-chip clients, enhancing talent acquisition processes and driving projects which align with overall strategic objectives. This relationship-driven approach is supporting our clients to redefine how they hire and retain top talent. For more, go to www.weareams.com/

About The Josh Bersin Company

The most trusted human capital advisors in the world. More than a million HR and business leaders rely on us to help them overcome their greatest people challenges.

Thanks to our understanding of workplace issues, informed by the largest and most up-to-date data sets on workers and employees, we give leaders the confidence to make decisions in line with the latest thinking and evidence about work and the workplace. We’re great listeners, too. There’s no one like us who understands this area so comprehensively and without bias.

Our offerings include the industry’s leading AI-powered HR expert assistant, Galileo®, fueled by 25 years of in-depth Josh Bersin Company research, case studies, benchmarks, and market information.

We help CHROs and CEOs be better at delivering their business goals. We do that by helping you to manage people better. We are enablers at our core. We provide strategic advice and counsel supported by in-depth research, thought leadership, and unrivaled professional development, community, and networking opportunities.

We empower our clients to run their businesses better. And we empower the market by identifying results-driven practices that make work better for every person on the planet.




How AI-trained robots are helping to root out fake paintings tied to a notorious forgery case – The Art Newspaper

Artificial intelligence (AI) is often criticised for ripping off artists, but the technology is now being used to combat fake copies of works by the Canadian artist Norval Morrisseau (1932-2007) that have flooded the market over the past two decades.

More than 6,000 pieces were produced and fraudulently sold as authentic works by the Ojibwe artist to collectors worldwide, with financial losses estimated to exceed C$100m ($72.5m). The trial of Jeffrey Cowan, the last of the suspected fraudsters in the tangled web of Morrisseau forgery rings, began this week. Two other men, who pleaded guilty to participating in what the Ontario Provincial Police described as “the biggest art fraud in world history”, were each given a conditional sentence of two years less a day in August and September.

In a unique case of fighting fakes with fakes, the Montreal-based start-up Acrylic Robotics announced in July that, in partnership with the Norval Morrisseau Estate, its robots will reproduce five Morrisseau paintings and make the copies available for purchase.

Norval AI

“We have created our own artificial intelligence called Norval AI to help determine the probability of an authentic Norval Morrisseau painting,” Cory Dingle, the Morrisseau estate’s executive director, tells The Art Newspaper. “It has grown to do many other functions that will help with museums seeking provenance as well as law enforcement—such as catching the person who painted the fraudulent work.”

The idea of developing an AI authenticator began in 2023, when Dingle met Stephan Heblich, an economics professor at the University of Toronto, and Clément Gorin, an associate professor at the Université Paris 1 Panthéon-Sorbonne.

“Clément and I were just two art-loving economics professors who are using the latest deep learning and visual recognition techniques to analyse paintings,” Heblich says. “We thought we could help restore Norval’s legacy, so we reached out to Cory and created Norval AI.”

A year later, Dingle met Chloë Ryan, the founder and chief executive of Acrylic Robotics. The artist-turned-engineer had developed technology that allows robots to paint works in the style of individual artists.

Better fakes

“We needed better replicas to test our artificial intelligence programme—the existing fake paintings were so terrible,” Dingle says. “So we collaborated with Acrylic Robotics and helped train their robot to paint more realistic fake paintings.” He adds: “They want to produce very accurate reproductions and our Norval AI tells their robot where it is doing a bad job. They use our data to make the robot do a better job, which makes those better replicas train our Norval AI even better.”
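The loop Dingle describes, where the authenticator's critique improves the replicas and the improved replicas retrain the authenticator, resembles adversarial training. The toy sketch below illustrates that dynamic with abstract numeric scores; it is purely an assumption for illustration, not the estate's or Acrylic Robotics' actual system.

```python
# Toy illustration of the replica/authenticator feedback loop described
# above. "Replica quality" and "detector skill" are abstract 0-1 scores;
# the real systems are deep-learning models. Purely illustrative.

def feedback_loop(rounds: int = 5) -> list:
    replica_quality = 0.2  # how convincing the robot's replicas are
    detector_skill = 0.5   # how well the authenticator spots fakes
    history = []
    for _ in range(rounds):
        # The authenticator's critique tells the robot where it fails,
        # so replica quality moves halfway toward the detector's skill.
        replica_quality += 0.5 * (detector_skill - replica_quality)
        # Training on better replicas sharpens the authenticator in turn.
        detector_skill += 0.1 * replica_quality
        history.append((round(replica_quality, 3), round(detector_skill, 3)))
    return history

for quality, skill in feedback_loop():
    print(f"replica quality={quality}, detector skill={skill}")
```

Under these assumptions both scores rise each round, capturing the claim that better replicas and a better authenticator reinforce one another.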

Last year, the Norval Morrisseau Family Foundation helped Acrylic Robotics produce a very accurate replica of one of Morrisseau’s original paintings. This in turn was run through the Norval AI programme and the results were shared with Acrylic Robotics to help improve its replicas.

The resulting works produced by Acrylic Robotics comprise limited editions of five paintings, among them Morrisseau’s In Honour of Native Motherhood (1990), which was inspired by the murdered and missing Indigenous women in Canada, and Punk Rockers (around 1991), in which Morrisseau fused traditional Anishinaabe iconography with contemporary idioms.

Marked as replicas

Prices for the Acrylic Robotics works range from C$3,240 to C$45,000. To avoid further fraud, several techniques have been applied to ensure the works are easily identifiable as replicas, including a mark on the back of the canvas.

According to an Acrylic Robotics spokesperson, the company hopes to investigate whether, with the support of Norval AI, its robots may be able to complete some of Morrisseau’s many unfinished or damaged works.

“What’s exciting here, even beyond the technology,” Ryan says, “is the opportunity to push the boundaries of what has been possible, make art history and reclaim a legacy.”


