Moms Demand Action Founder on What It Takes to Lead Change

ALISON BEARD: I’m Alison Beard.

ADI IGNATIUS: And I’m Adi Ignatius. And this is the HBR IdeaCast.

ALISON BEARD: Adi, we are going to talk today about how to make change, whether it’s in your organization, or a problem that you see out in the world that you want to fix as an entrepreneur, or something that you’d like to see happen differently in society. You were a senior leader for a really long time, but I think even from that perch effecting change is really hard, right?

ADI IGNATIUS: Yeah. Look, I love this topic. As a senior leader, I learned that to drive a new initiative to introduce something dramatically new, I had to really own it, I had to really drive it, and most importantly, I had to sustain it. It’s easy to get that initial passion and that initial buy-in, but you need processes and continuing energy to really keep something going for the long term where it makes a difference.

ALISON BEARD: And so our guest today has lots of personal experience with this. She is Shannon Watts, the founder of Moms Demand Action, a nonprofit organization in the United States that pushes for gun safety legislation. She didn’t consider herself to be a leader or a changemaker when she launched this movement. She was a mom who had heard the news about the Sandy Hook School shooting, and she was enraged and sad, and so she wrote a Facebook post and it ballooned into this group that went on to change legislation across the country.

The lessons that she has to offer are really interesting for our audience because she’s talking about, first, how to see yourself as a leader, how to know that I see something, I’m angry about it, I think it needs to change. What am I going to do about it? She also talks about how to navigate that messy middle that you talk about, sort of push through the challenges, keep people energized, keep people focused. And she talks about building coalitions, the idea that no one can make a difference just by themselves. You have to bring together a group and you have to work together.

I really learned a lot from the conversation. I think most of our listeners can, whether you are that manager who sees a process that needs to be changed, or you’re a CEO who sees this important strategic initiative that you’d really love to launch, but you don’t quite know how to get people behind you.

ADI IGNATIUS: I mean, there are two types of leadership. One is a company has a vacancy for, say, the CEO, and they bring somebody in and they’re in that role. But then there’s this kind of leadership, which is creating something new, taking on a problem that doesn’t have an organization and a process around it. So how do you do that, where you are driving it, you’re the passion, you create the process? And as I said before, you have to learn how to sustain that energy.

ALISON BEARD: Here is my interview with Shannon Watts, founder of Moms Demand Action, and author of the new book Fired Up: How to Turn Your Spark into a Flame and Come Alive at Any Age.

So it does feel like we’re in an era where people are fired up about a lot of things, whether it’s societal problems or the way their organizations are run or how they’re being treated as consumers, but translating that from complaints into change is very hard. How do you think our listeners can recognize when a problem that’s bothering them merits more of their attention and ultimately action?

SHANNON WATTS: The short answer is anything that’s bothering you merits your attention. That is something that is calling you and you have to pay attention to those cues. For me, I had watched mass shooting tragedy after mass shooting tragedy happen in this country, really starting with Columbine, and had watched our elected leaders and others really do nothing.

Flash forward to 2012, I was folding laundry in my bedroom and I saw breaking news on the television that there was an active shooter inside an elementary school in Connecticut. And like so many other people in this country, I was just devastated when I went to bed. I’d been just sitting in front of the television absorbing this tragedy and was in tears. And really sometime during the middle of the night that sadness crystallized and became abject rage.

When I woke up the next morning, I was agitated and I knew I had to do something. It was that idea of what you were just talking about. Something was bothering me. My soul was insulted and I wasn’t sure what I could do. You know, in 2012, Facebook was a very popular platform, particularly for middle-aged women. And so I went on and I made a Facebook page, and that was really the spark that lit the fire of Moms Demand Action.

ALISON BEARD: And how did you know that you were sort of the right person to lead the charge? How do you determine, as you put it in the book, that your desires or the emotions that you’re feeling also fit your values and your abilities?

SHANNON WATTS: To be clear, I did not know I was the right person. I think most people thought I was not the right person. I had been a stay-at-home mom for five years after a career in communications. I was in the Midwest. I knew little to nothing about organizing or gun violence or the legislative process. I had severe untreated ADHD, which has caused all kinds of issues in my life. And I also had a debilitating fear of public speaking, right? This is not exactly someone who others would point their finger at and say, “That woman, she should take on the most powerful, wealthy special interest that’s ever existed.”

It was my values, where I was in my life, I had little kids, five of them ranging in age from elementary school to high school. So my values were really about protecting my family and my community. My abilities were my communication skills. I had a corporate public relations career for over a decade before 2012. And my desire, I grew up as a teen in the 1980s who saw Mothers Against Drunk Driving, who took on a powerful special interest too and won. And so I wanted to be part of a similar army of women and mothers.

And so all those things really came together for me. Thankfully, other women, total strangers from across the country brought their skill set, but those were mine. Those were my values, abilities, and desires that helped me create Moms Demand Action, which is now the largest women-led nonprofit in the nation.

ALISON BEARD: So what advice do you give people who ask you now about how to figure out when their desires, that thing that’s bothering them or they’re angry about or the change that they want to make in their organizations or the world or the companies that they interact with, when they are aligning with their values and their skill sets in a way that will allow them to be successful like you were?

SHANNON WATTS: So the book uses the metaphor of fire and it’s a call to action for everyone to become a firestarter, someone who prioritizes their desires over their obligations. And that’s difficult in a system that’s set up to give us all of these shoulds, these rules that we have to live by. And this is a way to audit where you are, what are the things that are calling you, and then to pursue them. And so if we break it down individually, if you look at your values, those are really your North Star.

And looking at your abilities, some are innate, some are acquired. I think we often underestimate our abilities. We think only of maybe what we got our college degree in or what our career has been in. But if we list all of the things that we’ve had success with, we can see it as personal, professional, maybe even political. And then the third is desires, right? What are the things that have always been calling you? What are the things that you really want to accomplish during your lifetime?

ALISON BEARD: Doing that analysis of sort of do my values and my skill sets equip me to tackle this challenge, is that part of the process of becoming more brave?

SHANNON WATTS: It is. I mean, just even going down the road of identifying those things, your values, your abilities, and your desires will show you and others around you that this is something that’s important to you. But I don’t think anyone can live on fire by themselves. I really do think it requires coming into community. It might be as simple as having a tough conversation or asking for a promotion or leaving a relationship that no longer suits you.

What I have seen is that when you come together in community, those are the people that see something in you that maybe you haven’t seen or they give you the confidence and the encouragement to keep going. So many times at Moms Demand Action, someone would come into the organization and it’s because of a shooting tragedy in their community or because their kid had to endure a lockdown drill. And suddenly they were supported by all these other people and they realized, “Wow, I have skills, values, desires that have been sort of untapped and I want to pursue those and look into those.”

ALISON BEARD: So it sounds like you’re saying the first step is to find allies.

SHANNON WATTS: I think the first step is identifying your abilities, values, and desires. The second step is understanding there’s going to be blowback. Because even if you’re doing something like I did, which incurred death threats and threats of sexual violence, or you’re doing something even smaller, finally doing things differently in your life, there will be blowback, right? It might be something a colleague says that makes you doubt yourself. So the third important part of it is the community.

ALISON BEARD: So how do you start to build those allies and that coalition around you?

SHANNON WATTS: When I was in Moms Demand Action leading the organization, we really wanted to understand what made volunteers stick around. It is easy to get volunteers to come into an organization after a shooting tragedy. It is much harder to get them to stay because this is their precious time that they’re giving people. And so we decided to poll our volunteers and ask what keeps you around? And what they told us were two things.

The first is that they felt like they were winning. And I think this is actually just advice for life. When you make someone feel like they’re winning, they want to keep showing up. When you take on a special interest, you are going to lose. And so we reframed these losses as losing forward. Maybe you lost this battle, but what did you learn to win the fight? Maybe you grew your chapter, maybe you have new relationships with lawmakers that you didn’t have before. Maybe you have new insight, right? So that insight of people sticking around because they felt like they were winning was very important to us in how we messaged.

The second reason they said that they stayed was that they found their people. And I believe given the pandemic, given social media, that finding your people is more important than ever, and it is more difficult than ever. And so when you find people with like-minded values, it really does awaken something in you to have that support of a community that can become a lifeline for you for the rest of your life, no matter what you’re doing.

ALISON BEARD: So when you’ve decided that you want to tackle a challenge, you’ve started to form a group around you. When you all begin to try to make change, do you set out a vision for yourself or is it more step-by-step? You set a small goal and achieve that. Or as you said, maybe don’t achieve it, but achieve something smaller in the process. Talk about big picture versus incrementalism.

SHANNON WATTS: I think it can be either, but I think it is more realistic when it is incremental. Particularly in activism, people want wholesale overnight change and the system is not set up that way. Almost all activism is a long game, and you have to adjust as you go along because you will lose, you’ll have setbacks, you’ll have surprises. And I think looking at our lives in the same way is important. This idea of incrementalism leads to revolutions.

When I started Moms Demand Action, I didn’t say I’m going to start the largest women-led nonprofit that will pass 500 gun safety laws and take down this special interest that’s been so powerful for so long. I just said I wanted women and mothers in particular to stand up to the gun industry. And how we got there was all incremental and it required constant changes. That to me is the important part of this, is the idea of it is still worth doing even if it isn’t this grand vision, if it is a small step forward to what you ultimately want.

ALISON BEARD: And I know that you were focused mostly on policy change, but did you have successes in working with corporations and changing corporate behavior also?

SHANNON WATTS: We decided early on that we were going to look at this in three different ways: legislative, electoral, and cultural. And the corporate work really fell into that cultural bucket. I can remember I saw that gun extremists were showing up armed inside Starbucks all across the country on February 2nd, 2.2, in honor of the Second Amendment. We were so small that we couldn’t even do a boycott. We did what we called the Momcott. It was this idea of showing people we were going to have our coffee somewhere other than Starbucks on Saturdays. We used the hashtag Skip Starbucks Saturdays. And that was incredibly effective, even though we were small.

And just a few months after we started this campaign, the then CEO of Starbucks came out and said, “We will no longer allow guns inside our stores.” And we knew we were onto something. After that, dozens of companies, from Panera to Kroger to Home Depot, came out and said that open carry, the practice of openly carrying handguns or long guns inside stores, was no longer acceptable. And that really made a difference, to get something that corporate America could latch onto and say, “We can agree on this piece of this issue.”

ALISON BEARD: You mentioned before that people were donating their precious time to this cause, and you obviously devoted your life to it for a time. How do you get over that hurdle if you are a busy executive, for example, but you see something within your organization that needs to change, or you see an opportunity out there in the world that you could do something entrepreneurial about, or you see a company that’s not operating the way you would like it to and you want to effect change there? How do you balance that with doing everything else that you need to do in your life?

SHANNON WATTS: I remember the night that I started the Facebook page and it was like lightning in a bottle, people from all across the country reaching out. But we went to bed that night and my husband said to me, “This is going to be a big deal.” I had been a stay-at-home mom for five years, and suddenly I went from that to being busier than I had ever been in my career and I wasn’t getting paid. And it was an interesting time of adjustment. My ex-husband and my new husband at the time really had to sort of step up and do the stuff that I had been doing for so long, whether that was driving kids to soccer practice or helping with homework or making dinner.

It’s difficult and it ultimately comes down to prioritizing. In the book, following on the fire metaphor, I talk about a controlled burn, where it’s really important for people to look at what is taking up time in their lives. And that can be as small as Netflix and doomscrolling on social media, and it can be as large as a relationship or a job that is holding you back and trying to figure out what you want to do next. But I’ll give you one example. We had a volunteer in Chicago who was also an executive within Target. And Target was allowing open carry inside their stores.

This was in the early days of Moms Demand Action, after we’d gone after Starbucks. And this Target leader, a woman in our organization who was also a volunteer, began to have conversations with the executives inside her organization to say, “This isn’t appropriate. This is not in alignment with our values.” And they listened to her. And yes, there was some outside pressure too from Moms Demand Action volunteers who were showing up with petitions and asking their local Target management to not allow open carry, but ultimately Target came out and said, “Guns are no longer acceptable inside our stores.” And so that was really her doing, because she used her voice on that issue.

ALISON BEARD: And you talked about preparing for blowback, but how did you deal with it when it was actually coming at you? And what advice would you give for people who are trying to make change, again within their workplaces, for example, or out in the wider world, who are facing critics and people who are trying to stop them?

SHANNON WATTS: You will receive blowback no matter how small or how large your desires are that you decide to pursue. I had several inflection points where I could have easily doubled back instead of doubling down. So many threats, so much intimidation. But also I was making cold calls in those early days to get advice and counsel, and a lot of people told me, “This can’t be done. You’re not the right person to do it. You shouldn’t do it. It’s already happening.” All of these reasons why it wasn’t me and I shouldn’t start.

And I decided to trust my intuition, which told me that the time was ripe for women in particular to organize on this issue. I also talk a lot in the book about the messy middle. There is suffering involved when you get in the middle of something that you’ve taken on and you have to keep going to get to the other side.

ALISON BEARD: So how do you get through it?

SHANNON WATTS: It is a lot of understanding that it’s coming and then taking steps to figure out how do I find the right people who will support me during this? How do I have confidantes? How do I change in midstream and how do I move forward? I talked to a woman who ran for office twice in Texas and lost both times and people sort of expected her to disappear. She got a lot of blowback from people who said she should not try to run again for office. She should not be ambitious. And instead what she did was take that experience and start an organization in her state to help prepare other women when they run for office, particularly women of color, and to understand that they have a community of supporters that can help them.

ALISON BEARD: And your career before Moms Demand Action was in communications. So talk a little bit about what you’ve learned in the time running that organization about how to communicate effectively on an issue that people might vehemently disagree on, whether it’s gun control or a process that your company has used for a hundred years that you think needs to go but half the people there don’t agree.

SHANNON WATTS: I think that my career in corporate communications, learning how to build a brand, for example, at General Electric, really prepared me for the activism that is storytelling. And all storytelling includes two important things: data, coming armed with information and facts to be able to make your case, but also anecdotes and stories. And that’s why in gun safety activism, survivors are really the North Star of everything we do, because they have the stories to share with lawmakers and others about what they experienced and why they don’t want anyone else to. It can be very effective and very persuasive. And so if you have those two things, data and then anecdotes, it’s really the recipe for changing hearts and minds.

ALISON BEARD: I noticed a clever thing you did there. I said gun control, and you said gun safety. Which I think is part of the messaging.

SHANNON WATTS: Yes.

ALISON BEARD: So what mistakes did you make along the way that you think our listeners who want to make change can learn from?

SHANNON WATTS: When I started Moms Demand Action, we were really set on mass shootings and school shootings because that was the reason so many of us got off the sidelines. And it was very short-sighted because mass shootings and school shootings are horrifically tragic, but they’re about 1% of the gun violence in this country.

And it was really important, and I think this is true for anything, is to always be widening the aperture, to be looking at an issue holistically, and to be prepared and okay with pivoting. We had to change our policy many times along the way. When you’re working with volunteers in red states and blue states alike, there are different priorities and different messages that resonate with different audiences.

And so that is a really difficult needle to thread, to make sure that you are always, I think, changing the way that you’re acting. If you are stagnant and your policies aren’t changing along with whatever’s happening in the world, you aren’t growing. I definitely learned that. I think the other important lesson I learned personally: I had been in the corporate world and it is much different managing paid employees than it is volunteers. It can be a lot more like herding cats. In successful organizations, and maybe this is true in the corporate world too, there’s a delicate balance between top-down and bottom-up.

If you are too top-down, it’s too controlling. If it is too bottom-up, it is too chaotic. And so you’re always trying to adjust to get that exact right harmony so that you are a delicate balance of both. And I think that is the key to a successful business, organization, relationship, anything.

ALISON BEARD: I mean, I imagine if you’re trying to change something within your company, it’s also the people working with you who are volunteering their time to help you do it, so it’s not their day job. Talk a little bit about how you grew into being a leader, because anyone who’s deciding that they’re fired up about something and wanting to take on a challenge, they start with themselves and then maybe they gather a few allies, but then ultimately if they’re successful, it becomes a broader operation. Maybe it’s a dozen people. So how does someone who started with their own desires, values, and skill sets begin to manage something like that?

SHANNON WATTS: I was really fortunate that, again, a lot of these people were perfect strangers who came to the table with these skill sets and helped me create the organization that taught me and brought skill sets that I didn’t have. And so as we grew, they became even more important. I would say six months into the organization, I realized we would have to partner with another organization in order to survive into perpetuity. And I began interviewing organizations inside and outside the space, some in gun safety, some not. And ultimately it was meeting with then Mayor Mike Bloomberg’s team that I realized we had a big army and they had a lot of generals, and we needed that synergy.

And so we decided to collaborate and create Everytown for Gun Safety, which is the umbrella organization and Moms Demand Action became the grassroots army of that. And that turbocharged everything a year in. And we were able to finally have the financial and human resources to hire more leaders, to grow our base, to invest in lobbyists and creating a chapter leadership structure that would help us continue to grow. And that has worked perfectly for over 11 years now.

ALISON BEARD: So it sounds like at some point reaching out to powerful allies and people with leadership experience is useful.

SHANNON WATTS: It is. A lot of people were worried that we would lose that homegrown feeling of activism by creating this relationship. And I don’t think that their worries were unfounded, but ultimately we figured out a way to make sure that the volunteers had a say in everything we do. But creating that relationship with very powerful allies was the key to unlocking exponential growth ultimately.

ALISON BEARD: And finally, just tell me how when you’re working on a project this massive and this challenging, how do you avoid burnout and persevere?

SHANNON WATTS: There were times, particularly after major national shooting tragedies, that it did feel and become overwhelming. I often talk about how activism is a marathon, not a sprint; it’s also a relay race, and you have to hand the baton over to other people. And there were many times that I had to do that. I think I was worried about giving away my work: I felt guilty that other people would have to do it, or worried that they might do it better than I would. And what I learned every time I came back was that when you give other people the opportunity to step up and bring their energy and their ideas to something, it makes it better.

ALISON BEARD: Well, Shannon, it’s been so lovely speaking with you. And thank you so much for all the work that you and your organization have done.

SHANNON WATTS: Thank you.

ALISON BEARD: That’s Shannon Watts, founder of the nonprofit Moms Demand Action and the author of the book Fired Up: How to Turn Your Spark Into a Flame and Come Alive at Any Age.

Next week, Adi will speak with Columbia University’s Peter T. Coleman about conflict intelligence – an essential skill in turbulent times.

And we now have more than a thousand IdeaCast episodes, plus many more HBR podcasts, to help you manage your team, your organization, and your career. Find them at HBR dot org slash podcasts or search HBR in Apple Podcasts, Spotify, or wherever you listen.

Thanks to our team: senior producer Mary Dooe, associate producer Hannah Bates, audio product manager Ian Fox, and senior production specialist Rob Eckhardt. And thanks to you for listening to the HBR IdeaCast. We’ll be back with a new episode on Tuesday. I’m Alison Beard.



Wavelink signs distribution agreement with Cloudian to support growing demand for artificial intelligence-ready, cloud-native storage solutions

COMPANY NEWS: Wavelink, an Infinigate Group company and leader in technology distribution, services, and business development in Australia and New Zealand, has signed a distribution agreement with Cloudian, a global leader in S3-compatible file and object storage. Under the agreement, Wavelink will distribute Cloudian’s portfolio throughout Australia, New Zealand, and Oceania.

Cloudian’s artificial intelligence (AI) ready data platform, HyperStore, delivers highly scalable, S3-compatible object storage that integrates seamlessly across on-premises, private, and public cloud environments. Its modular architecture and pay-as-you-grow model make it ideally suited for organisations looking to move workloads from hyperscale clouds to local infrastructure, often to reduce latency, improve cost predictability, or regain data control. With exabyte scalability, full S3 application programming interface (API) compatibility, multi-tenancy, and military-grade security, HyperStore is a robust solution for AI workloads that demand secure access to large volumes of data.
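For readers who want a concrete sense of what S3 compatibility means in practice, the sketch below points a standard S3 client at an S3-compatible endpoint. It is illustrative only: the endpoint URL, credentials, and bucket name are placeholders rather than values taken from Cloudian documentation.

```python
# Illustrative sketch only: pointing a standard S3 client at an S3-compatible
# endpoint such as an on-premises object store. The endpoint URL, credentials,
# and bucket name are placeholders, not values from Cloudian documentation.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objectstore.example.internal",  # hypothetical endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Ordinary S3 API calls work unchanged against an S3-compatible store.
s3.create_bucket(Bucket="ai-training-data")
s3.upload_file("sample.parquet", "ai-training-data", "datasets/sample.parquet")

for obj in s3.list_objects_v2(Bucket="ai-training-data").get("Contents", []):
    print(obj["Key"], obj["Size"])
```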

Ilan Rubin, chief executive officer, Wavelink, said, “Cloudian is a great fit for Wavelink’s channel partners, from managed service providers to resellers specialising in cloud, infrastructure, and security. Wavelink is excited to support Cloudian’s growth across the region, and its market leadership, flexible commercial model, and compatibility with a wide range of use cases make Cloudian an ideal addition to Wavelink’s portfolio.”

The partnership further strengthens Wavelink’s ability to support partners across all stages of the cloud journey, from public cloud optimisation and hybrid cloud strategies to on-premises deployment for AI model training and inferencing. Coupling Cloudian’s cost-effective scalability with Wavelink’s channel development services provides a solid foundation for meeting growing regional demand for secure, AI-ready storage platforms.

James Wright, managing director, Asia Pacific and Japan, Cloudian, said, “Cloudian is excited to partner with Wavelink to expand its reach across Australia, New Zealand, and Oceania, and in particular, to bring the HyperStore platform to more organisations. Whether customers are looking to contain public cloud costs, bring data closer to compute, or accelerate their AI initiatives, Cloudian’s modern architecture is built to deliver.”

As part of the agreement, Wavelink will provide partner enablement programs, technical training, and go-to-market initiatives tailored to industries embracing AI and hybrid data strategies.

About Cloudian
Cloudian is the most widely deployed independent provider of object storage. With a native S3 API, we bring the scalability, flexibility, and management efficiency of public cloud storage into your data centre while providing ransomware protection and reducing total cost of ownership by 60 per cent or more compared to traditional storage area network (SAN)/network attached storage (NAS) and public cloud.

About Wavelink
Wavelink, an Infinigate Group company, is a leading technology distributor in Australia and New Zealand (ANZ), specialising in channel services and business development with a strong focus on advanced cybersecurity, mobility, networking, and storage solutions. We empower our channel partners with the support and technical expertise they need to succeed while building strategic channels for our vendor partners.

Wavelink stands out in the ANZ distribution market due to our specialised expertise in vertical and operational technology, providing unparalleled depth to our technologies and services. Our deep understanding of customer needs lets us connect vendor technologies with the right partners and end customers. This is reinforced by our comprehensive services portfolio, designed to drive partner success at every opportunity.

For more information, visit www.wavelink.com.au.



BSA 42 | Artificial intelligence

But is political orientation associated with people’s views towards different AI technologies? As noted earlier, we suspect that the relationship between political orientation and people’s perceptions of AI will vary depending on the specific AI application being considered. For example, we might expect people with right-wing views to be more likely to support the use of AI for calculating eligibility for welfare payments, on the basis that automated rules may be more likely to be enforced. Those with left-wing views, in contrast, may be more concerned about the risk of inequitable decisions being made.

To understand these relationships, we examine whether political orientation, as measured by two Likert scales that are included as standard on the British Social Attitudes (BSA) survey and thus have also been asked of all members of the NatCen Opinion Panel, is related to perceptions of the benefits of AI applications. One of these scales identifies whether people are on the left or on the right, the other whether they are libertarian or authoritarian in outlook. (Further details on the derivation of these scales are available in the Technical Details). For the purpose of these analyses and those appearing later in the report, we divide respondents, first, into the one-third most ‘left-wing’ and the one-third most ‘right-wing’ and, second, into the one-third most libertarian and the one-third most authoritarian, in each case based on their scores on the relevant scale.
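As an illustration of the kind of split described above (not the report's own code), the sketch below divides respondents into equally sized thirds on each scale using pandas; the column names and scores are hypothetical.

```python
# Illustrative sketch (not the report's own code): dividing respondents into
# equally sized thirds on each ideological scale. Column names and scores are
# hypothetical.
import pandas as pd

df = pd.DataFrame({
    "leftright_score": [1.2, 2.5, 3.1, 4.0, 2.2, 3.8, 1.9, 4.5, 2.9],
    "libauth_score":   [2.0, 3.5, 4.1, 1.5, 2.8, 3.9, 4.4, 1.1, 3.0],
})

# pd.qcut splits a scale into equal-sized groups; the outer thirds are then
# compared with one another, as in the analyses that follow.
df["leftright_third"] = pd.qcut(df["leftright_score"], 3,
                                labels=["left", "centre", "right"])
df["libauth_third"] = pd.qcut(df["libauth_score"], 3,
                              labels=["libertarian", "neither", "authoritarian"])

most_left = df[df["leftright_third"] == "left"]
most_right = df[df["leftright_third"] == "right"]
```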

Those with right-wing views are more likely than those with left-wing views to think the benefits of AI outweigh the concerns. Table 1 shows that those with right-wing views have net benefit scores that are consistently higher than those with left-wing views in all cases, except with regard to driverless cars. This difference is particularly pronounced for the use of facial recognition for policing and the use of AI to determine welfare eligibility.

People with right-wing views perceive some uses of AI positively that people with left-wing views perceive negatively overall, namely determining loan repayment risk, robotic care assistants, and determining welfare eligibility. Looking at the benefit and concern scores separately suggests that these differences result from the fact that those with left-wing views report higher levels of concern across most technologies, compared with people with right-wing views, while the two groups’ perceptions of benefit are more similar. For example, while 36% and 35% of people with left-wing and right-wing views respectively report mental health chatbots to be beneficial, 68% of people with left-wing views say they are concerned by this use of the technology, compared with 59% of people with right-wing views.

Table 1. Net benefit scores, by left-right wing views

AI use                            Left    Right   Difference
Cancer risk                        1.3     1.4      +0.1
Facial recognition in policing     0.8     1.5      +0.7
Large language models              0.3     0.4      +0.1
Loan repayment risk               -0.1     0.4      +0.5
Robotic care assistants           -0.1     0.0      +0.1
Welfare eligibility               -0.6     0.2      +0.8
Mental health chatbot             -0.7    -0.4      +0.3
Driverless cars                   -0.7    -0.7       0.0

Note: Positive scores indicate perceptions of benefit outweigh concerns while negative scores indicate concerns outweigh benefits.  Scores can range from -3 to +3. 
Unweighted bases can be found in Appendix Table A.1 of this chapter.
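For readers wanting to interpret these scores, one plausible construction (an assumption on our part, not detailed in this chapter) is that each net benefit score is simply a benefit rating minus a concern rating, each on a 0-3 scale, which would produce the stated range of -3 to +3:

```python
# One plausible construction of a net benefit score, assuming (our assumption,
# not stated in this chapter) that benefit and concern are each rated on a
# 0-3 scale; the difference then runs from -3 to +3, with positive values
# meaning perceived benefits outweigh concerns.
benefit_rating = 2.1   # illustrative average benefit rating for one AI use
concern_rating = 1.4   # illustrative average concern rating for the same use

net_benefit = benefit_rating - concern_rating
print(round(net_benefit, 1))  # 0.7: benefits seen as outweighing concerns
```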

There is less of a consistent difference between the scores of those with libertarian views and those with an authoritarian outlook, with the difference not always operating in the same direction. That said, Table 2 shows that people with authoritarian views feel the benefits of AI outweigh their concerns in the case of five uses: facial recognition for policing, assessing the risk of cancer, LLMs, assessing loan repayment risk and assessing welfare eligibility. Their net benefit score is particularly high for the use of facial recognition in policing, especially when compared with those with libertarian views. These data align with previous research, which finds that the use of AI for facial recognition in policing is particularly likely to appeal to people with authoritarian views (Peng, 2023). Meanwhile, libertarians have more positive net benefit scores than authoritarians for the majority of private sector AI applications, such as robotic care assistants and driverless cars, perhaps reflecting their view of AI as potentially increasing human choice by widening the range of options for undertaking various tasks.

The difference in attitudes between these two groups is also notable in relation to the use of AI to assess welfare eligibility, where those with libertarian views, unlike those with an authoritarian outlook, feel the concerns around this technology outweigh the potential benefits. This view may reflect their concern about the possibility of more heavy-handed state intervention when AI is used in the public sector.

Table 2. Net benefit scores, by libertarian-authoritarian views

AI use                            Libertarian   Authoritarian   Difference
Cancer risk                            1.4            1.4          +0.0
Facial recognition in policing         0.7            1.6          +0.9
Large language models                  0.2            0.4          +0.2
Loan repayment risk                    0.0            0.3          +0.3
Robotic care assistants                0.1           -0.2          -0.3
Welfare eligibility                   -0.5            0.2          +0.7
Mental health chatbot                 -0.6           -0.5          +0.1
Driverless cars                       -0.4           -0.9          -0.5

Note: Positive scores indicate perceptions of benefit outweigh concerns while negative scores indicate concerns outweigh benefits. Scores can range from -3 to +3. 
Unweighted bases can be found in Appendix Table A.2 of this chapter.

To better understand the relationship between political orientation and net benefit scores (whether benefits outweigh concerns, or vice versa), we conducted a multivariate analysis (linear regression) to assess to what extent net benefit scores are associated with political orientation, once a number of demographic characteristics have been controlled for, namely ethnicity, digital skills, income, age and education. Previous analysis of these data highlighted that ethnicity, digital skills and income are associated with overall attitudes to AI (Modhvadia et al., 2025). We also anticipated that age and education may be linked. Studies suggest older people are more likely to reject new technologies, feeling they are not useful in their personal lives (Zhang, 2023), while we expect that those with higher levels of education may have higher levels of digital literacy and openness to new technologies.
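To make the modelling step concrete, the sketch below runs a linear regression of this general form on synthetic data using statsmodels; the variable names and data are illustrative and this is not the report's own analysis code.

```python
# Sketch of a model of this general form: a net benefit score regressed on
# political orientation plus demographic controls. The data are synthetic and
# the variable names hypothetical; this is not the report's analysis code.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "net_benefit": rng.uniform(-3, 3, n),        # e.g. facial recognition score
    "leftright": rng.normal(0, 1, n),            # left-right scale score
    "libauth": rng.normal(0, 1, n),              # libertarian-authoritarian score
    "digital_skills": rng.integers(0, 2, n),     # 1 = has basic digital skills
    "income": rng.normal(3.0, 1.0, n),           # household income (scaled)
    "age": rng.integers(18, 90, n),
    "education": rng.choice(["degree", "no_degree"], n),
    "ethnicity": rng.choice(["white", "black", "asian", "other"], n),
})

model = smf.ols(
    "net_benefit ~ leftright + libauth + C(ethnicity) + digital_skills"
    " + income + age + C(education)",
    data=df,
).fit()
print(model.summary())  # coefficients on leftright/libauth give the adjusted association
```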

The results of our analysis are presented in the appendix (Table A.3). They show that for the majority of uses of AI, political orientation remains significantly associated with perceptions of net benefit, even once the relationships between attitudes to AI and these demographic variables have been controlled for. The net benefit scores of people with more right-wing views are significantly higher for nearly all of our AI applications. The only exception is driverless cars, the application that is most negatively perceived by all of our respondents. The strength of these relationships is, however, relatively low. Similarly, people with authoritarian views have significantly higher net benefit scores for facial recognition in policing, the use of AI in determining welfare benefits, the use of AI in determining loan repayment risk, LLMs and mental health chatbots, even once the relationships with other demographic variables have been controlled for. The only instance where people with authoritarian views have significantly lower net benefit scores, compared with those holding libertarian views, is in relation to driverless cars. However, again, the strength of these relationships is variable. It is strongest for facial recognition in policing and weakest for mental health chatbots. These findings suggest that political orientation is associated with attitudes to AI, even when other demographic differences have been controlled for, but that the magnitude of this association depends on the use to which AI is applied.

In terms of our control variables, ethnicity, digital skills, income and age were found to be associated with how people view each use of AI. Black and Asian people are less likely to perceive facial recognition in policing as beneficial, while they are more likely to see benefits for LLMs and mental health chatbots. Those with higher digital skills are generally more positive about most of the applications of AI, with this association being strongest in the case of robotic care assistants. Having a higher income is related to more positive perceptions of all of the AI uses, while older people (aged 55 years and over) are more positive about the use of AI in health diagnostics (detecting cancer risk) and justice (facial recognition in policing) but are more negative about LLMs and robotic care assistants.

Common benefits and concerns

The net benefit scores discussed so far provide a summary measure of the balance of benefit and concern for eight different applications of AI. To understand the reasons for these assessments, in each case we asked respondents to identify from a list the specific benefits and concerns they associate with each AI technology. For example, for facial recognition in policing, we provided the following list of possible benefits:

Make it faster and easier to identify wanted criminals and missing persons
Be more accurate than the police at identifying wanted criminals and missing persons
Be less likely than the police to discriminate against some groups of people in society when identifying criminal suspects
Save money usually spent on human resources
Make personal information more safe and secure

Our list of possible concerns that people might have about the same AI application was as follows:

Cause delays in identifying wanted criminals and missing persons
Be less accurate than the police at identifying wanted criminals and missing persons
Be more likely than the police to discriminate against some groups of people in society
Lead to innocent people being wrongly accused if it makes a mistake
Make it difficult to determine who is responsible if a mistake is made 
Gather personal information which could be shared with third parties
Make personal information less safe and secure
Lead to job cuts (for example, for trained police officers and staff)
Cause the police to rely too heavily on it rather than their professional judgements

While each list was tailored to the specific technology being asked about, the benefits and concerns included in each list had common themes (such as efficiency and bias). Respondents were able to select as many options from each list as they felt applied, as well as “something else”, “none of the above” and “don’t know”.

Across all of our respondents, the most commonly selected benefit for each use of AI related to economic efficiency and/or speed of operation. Meanwhile, the most commonly selected concerns were about over-reliance and inaccuracy. For example, in the case of facial recognition technology in policing, 89% feel that faster identification of wanted criminals and missing persons is a potential benefit, while 57% think that overreliance on this technology is a concern. (Further details of these results are available in Modhvadia et al (2025)).  
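As a small illustration of how such percentages are derived from multi-select responses (not the survey's actual processing code), each option can be coded as a 0/1 indicator per respondent and averaged:

```python
# Illustration of how percentages like those above can be derived from
# multi-select responses (not the survey's actual processing): each option is
# coded 1 if the respondent ticked it, 0 otherwise, then averaged.
import pandas as pd

responses = pd.DataFrame({
    "benefit_faster_identification": [1, 1, 0, 1, 1],
    "concern_overreliance":          [1, 0, 1, 0, 1],
})

pct_selected = responses.mean() * 100  # share of respondents selecting each option
print(pct_selected.round(1))
```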

But how does political orientation shape these views? We found that people across the political spectrum tend to highlight similar types of benefits and concerns, but that the degree to which they do so varies. The next sections focus on four specific themes: speed (i.e. completing tasks faster than humans), inaccuracy, job displacement, and discrimination. These themes reflect broader concerns about efficiency and fairness, areas where political orientation is especially likely to influence attitudes, as discussed in the Introduction. As before, to analyse these differences, we have divided people into three equally sized groups along the two ideological dimensions and compare the results for the two groups at each end.

Speed and efficiency

We found some support for the theory, set out previously, that those with right-wing views might be more likely to value the economic efficiency that might be delivered by AI. Improving the speed and efficiency of services was more commonly selected as an advantage by those with more right-wing views than those with more left-wing views in the case of determining eligibility for welfare benefits like Universal Credit, and using AI for determining an individual’s risk level for repaying a loan. As shown in Table 3, 55% of those with right-wing views select this benefit for determining welfare eligibility, compared with 49% of those with left-wing views, and 61% select the same benefit for loan repayment risk, compared with 56% of those with left-wing views. However, these differences are small and only apparent in uses of AI that relate to the distribution of financial resources.

Table 3. Perceptions about AI-enabled speed and efficiency, by political orientation (left vs right)
% seeing benefits related to speed and efficiency for…

AI use                            Left   Right   Difference
Cancer risk                         85      85       +0
Facial recognition in policing      87      90       +3
Large language models               57      56       -1
Loan repayment risk                 56      61       +5
Robotic care assistants             50      48       -2
Welfare eligibility                 49      55       +6
Mental health chatbot               52      50       -2
Driverless cars                     35      30       -5
Unweighted base                   1079    1078

Differences between those with authoritarian views and those with a libertarian outlook in their beliefs about the potential for AI to improve speed and efficiency are more prominent. As shown in Table 4, those with libertarian views tend to be more likely to see speed and efficiency as key benefits of most AI applications, perhaps seeing possibilities for the opening up of human choice and market competition from AI innovations. For example, 62% of those with libertarian views select this benefit for large language models, compared with only 50% of those with authoritarian views. The only exception to this pattern is in relation to facial recognition for policing, where 91% of those with authoritarian views feel efficiency to be a key benefit, compared with 86% of those with libertarian views. This may be because, as compared with those with libertarian views, those with an authoritarian outlook are more positive about the use of facial recognition in policing irrespective of how it is undertaken. In contrast, the low figure of 25% for those with authoritarian views seeing efficiency gains from driverless cars (compared with 40% of those with libertarian views) may reflect a sense of the possible legal issues and potential chaos that could result from this (as yet untested in a UK setting) AI innovation on Britain’s roads.

Table 4. Perceptions of AI-enabled speed and efficiency, by political orientation (libertarian vs authoritarian)
% seeing benefits related to speed and efficiency for…

AI use                            Libertarian   Authoritarian   Difference
Cancer risk                            86             82            -4
Facial recognition in policing         86             91            +5
Large language models                  62             50           -12
Loan repayment risk                    60             55            -5
Robotic care assistants                54             42           -12
Welfare eligibility                    55             52            -3
Mental health chatbot                  58             46           -12
Driverless cars                        40             25           -15
Unweighted base                      1082           1081

Inaccuracy and inequalities

As shown in Table 5, those with left-wing views are generally more worried than those with right-wing views about inaccuracy and inequity, although this difference is more pronounced for some uses of AI, compared with others. Most markedly, 63% of those with left-wing views are concerned that facial recognition in policing could lead to false accusations, whereas only 45% of those with right-wing views express this concern. People with left-wing views are also markedly more worried about inaccuracy in terms of welfare eligibility and loan repayment. 

Table 5. Concern about inaccuracy in AI technologies, by political orientation (left v right)
% with concerns related to inaccuracy for…

AI use                            Left   Right   Difference
Cancer risk                         25      23       -2
Facial recognition in policing      63      45      -18
Loan repayment risk                 30      22       -8
Robotic care assistants             44      41       -3
Welfare eligibility                 43      28      -15
Mental health chatbot               51      46       -5
Driverless cars                     46      40       -6
Unweighted base                   1079    1078

Note: Inaccuracy concerns were not in the selection list for LLMs

Similarly, Table 6 shows that 23% of those with left-wing views are worried about discriminatory outcomes in the use of AI to determine welfare eligibility, compared with just 8% of those with right-wing views. Even for the application of AI in cancer risk assessment, a use that is consistently positively viewed across those with different political orientations, 27% of those with left-wing views are concerned about the technology being less effective for some groups of society, leading to discrimination in healthcare. The comparable figure is 17% for those with right-wing views. 

Table 6. Concern about AI-enabled discriminatory outcomes, by political orientation (left v right)
% with concerns related to discriminatory outcomes for…

AI use                            Left   Right   Difference
Cancer risk                         27      17      -10
Facial recognition in policing      24       9      -15
Loan repayment risk                 24      13      -11
Robotic care assistants             27      23       -4
Welfare eligibility                 23       8      -15
Mental health chatbot               16       8       -8
Driverless cars                     28      23       -5
Unweighted base                   1079    1078

Note: Discriminatory concerns were not in the selection list for LLMs

Research suggests that people who hold more authoritarian views are less likely to be concerned about discrimination or fairness (Curtice, 2024), leading us to anticipate that they are less likely to be concerned about the impact that AI technologies might have on minority groups. Our data support this theory. As shown in Table 7, for most applications of AI, those with libertarian views appear to be more concerned than those with an authoritarian outlook about discrimination. For example, 25% of those with libertarian views express concern that facial recognition in policing may discriminate against certain groups, compared with 9% of those holding authoritarian views. A similar pattern can be found in attitudes towards the use of AI for detecting the risk of cancer; 29% of those holding libertarian views worry about it leading to health inequalities, compared with 15% of those with authoritarian views.

Table 7. Concern about AI-enabled discriminatory outcomes, by political orientation (libertarian v authoritarian)
% with concerns related to discriminatory outcomes for…

AI use                            Libertarian   Authoritarian   Difference
Cancer risk                            29             15           -14
Facial recognition in policing         25              9           -16
Loan repayment risk                    20             14            -6
Robotic care assistants                26             26             0
Welfare eligibility                    18             11            -7
Mental health chatbot                  15              9            -6
Driverless cars                        26             26             0
Unweighted base                      1082           1081

Note: Discriminatory concerns were not in the selection list for LLMs

In contrast, as shown in Table 8, worries about inaccuracy appear to depend much more on the specific application of AI being considered than on people’s libertarian-authoritarian orientation. That said, 61% of those holding libertarian views, but only 47% of those with authoritarian views, are worried about false accusations from facial recognition. Meanwhile, 39% of those holding libertarian views are worried that the use of AI for determining welfare eligibility may be less accurate than the use of professionals, compared with 31% of those holding authoritarian views. However, the inverse pattern is found in the case of robotic care assistants.

Table 8. Concern about inaccuracy in AI technologies, by political orientation (libertarian v authoritarian)
% with concerns related to inaccuracy for…

AI use                            Libertarian   Authoritarian   Difference
Cancer risk                            20             27            +7
Facial recognition in policing         61             47           -14
Loan repayment risk                    24             26            +2
Robotic care assistants                39             47            +8
Welfare eligibility                    39             31            -8
Mental health chatbot                  51             46            -5
Driverless cars                        39             45            +6
Unweighted base                      1082           1081

Note: Inaccuracy concerns were not in the selection list for LLMs

Job displacement

For all the AI applications, those with left-wing views are more concerned than those with right-wing views about potential job losses. This is consistent with existing research, which posits that left-wing individuals are more likely to express concerns about job displacement and increasing social inequality (Curtice, 2024). Table 9 shows that this concern is particularly high for both robotic care assistants (where 62% of those on the left are worried about job loss, compared with 44% of those on the right) and driverless cars (where 60% are worried, compared with 47%).

Table 9. Concern about job loss, by political orientation (left vs right)
% with concerns related to job loss for…

AI use                            Left   Right   Difference
Facial recognition in policing      46      37       -9
Large language models               48      37      -11
Loan repayment risk                 46      37       -9
Robotic care assistants             62      44      -18
Welfare eligibility                 50      38      -12
Mental health chatbot               47      32      -15
Driverless cars                     60      47      -13
Unweighted base                   1079    1078

Note: Job loss concern not in selection list for cancer risk detection

Again, as shown in Table 10, the extent to which libertarians differ from authoritarians in their level of concern about job losses depends on the use to which AI is being put. More people with authoritarian views are worried in the case of facial recognition in policing (44%, compared with 38% of those with libertarian views) while more people with libertarian views are worried in relation to general-purpose LLMs (46%, compared with 39% of people with authoritarian views). For other applications of AI, levels of concern about job losses are largely similar, irrespective of whether someone holds authoritarian or libertarian views.

Table 10. Concern about job loss, by political orientation (libertarian vs authoritarian)
% with concerns related to job loss for…

AI use                            Libertarian   Authoritarian   Difference
Facial recognition in policing         38             44            +6
Large language models                  46             39            -7
Loan repayment risk                    41             46            +5
Robotic care assistants                52             54            +2
Welfare eligibility                    44             46            +2
Mental health chatbot                  42             41            -1
Driverless cars                        53             55            +2
Unweighted base                      1082           1081

Note: Job loss concern not in selection list for cancer risk detection

Taken together, these findings show that political orientation is linked to particular beliefs about the key advantages and disadvantages of AI. In general, people who are left-wing are more concerned than those with right-wing views about inaccuracy, discrimination and job loss, perhaps reflecting a broader concern they may have that AI technologies exacerbate inequalities in society. People with libertarian views, more so than people with authoritarian views, appear to be concerned about discrimination for most applications of AI, while at the same time showing more optimism about the potential speed and efficiency benefits that might come with these tools.

However, these findings also indicate that people’s attitudes towards AI, and their relationship with political orientation, depend on the particular use to which the technology is put. For instance, the greater popularity of the use of facial recognition in policing among authoritarians translates into greater enthusiasm for the various potential advantages that it is thought AI could bring to this task. One possible explanation for the different attitudes of people with libertarian and authoritarian views towards the efficiency benefits of driverless cars may be that libertarians’ more positive attitudes towards the technology in general, as an AI innovation which opens up new possibilities for human choice (in this case of transport options), lead them to perceive driverless cars as more efficient, while authoritarians’ more negative views lead them to see fewer efficiency gains. Overall, individual buy-in for specific applications of AI is likely to shape assessments of the potential benefits and risks of that application.

Political orientation and AI regulation

We have clearly established, then, that political orientation shapes attitudes towards AI. These patterns, along with the common concerns and benefits that people have about AI, offer important clues about how different groups might want these AI technologies to be governed. Previous research has found that people who are left-wing are generally more likely to support greater state intervention in the economy, and are more likely to support stricter regulation of AI technologies (König et al, 2023). In contrast, right-wing individuals may oppose regulatory overreach, prioritising market freedom and economic growth achieved through AI-driven innovation. In this final section, we assess how political views influence attitudes towards AI regulation. We measured preferences for regulation by asking respondents what would make them more comfortable with AI technologies being used, providing them with the following options:

Clear explanations of how AI systems work and make decisions in general
Specific, clear information on how AI systems made a decision about you
More human involvement and control in AI decisions 
Clear procedures in place for appealing to a human specialist against a decision made by AI 
Assurance that the AI has been deemed acceptable by a government regulator 
Laws and regulations that prohibit certain uses of technologies, and guide the use of all AI technologies 
People’s personal information is kept safe and secure 
The AI technology is regularly evaluated to ensure it does not discriminate against particular groups of people

Respondents were able to select as many options as they liked from the list of measures that could increase their comfort with AI technologies. Overall, a substantial majority of the public (72%) think that laws and regulations would make them feel more comfortable with AI technologies, up from 62% in 2023 (Modhvadia et al., 2025). This increased demand for regulation is worthy of note, especially given that the UK is yet to introduce a comprehensive legal framework for AI. For this reason, in Table 11, we focus on how political orientation relates to people selecting either “laws and regulation” or “assurance that the AI has been deemed acceptable by a government regulator” as measures that would increase their comfort with AI being used.

Support for regulation is consistently high across both the left-right and authoritarian-libertarian dimensions. Table 11 shows that over half of both those holding right-wing and left-wing views feel assurance by a government regulator would make them more comfortable with AI. Even higher proportions of people feel laws and regulations that prohibit certain uses would make them more comfortable with AI: this is the case for 70% of those with right-wing views and 76% of those with left-wing views. Meanwhile, Table 12 shows that tighter regulation is also popular among both libertarians and authoritarians.

Table 11. Preference for government regulation, by left-wing or right-wing views
  Left Right
What would make you more comfortable with AI technologies being used? % %
Assurance that the AI has been deemed acceptable by a government regulator 58 55
Laws and regulations that prohibit certain uses of technologies, and guide the use of all AI technologies 76 70
Unweighted base 1079 1078

Respondents who did not answer our questions about political orientation, or answered with “don’t know”, are not included in this table

Table 12. Preference for government regulation, by libertarian or authoritarian views
  Libertarian Authoritarian
What would make you more comfortable with AI technologies being used? % %
Assurance that the AI has been deemed acceptable by a government regulator 58 54
Laws and regulations that prohibit certain uses of technologies, and guide the use of all AI technologies 77 67
Unweighted base 1079 1078

Still, people on the right and authoritarians are a little less likely than those on the left and libertarians to say that government assurance and regulation would make them feel more comfortable about AI. To examine whether these small differences are statistically significant once their associations with other characteristics are taken into account, we conducted a multivariate analysis (logistic regression) with political orientation and key demographic characteristics (ethnicity, digital skills, income, age and education) included as predictors of attitudes to AI regulation. These characteristics were chosen either because we have previously identified them as related to attitudes to AI (ethnicity, income and digital skills were associated with attitudes to AI in a previous study; Modhvadia et al., 2025), or because we anticipate they may relate to engagement with, and preferences around, new technologies (in the case of age and education). The results of this model are presented in the appendix (Table A.4).
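
As a rough illustration of the kind of model reported in Table A.4, the sketch below fits a logistic regression of a binary regulation-preference indicator on the two political scales and the demographic controls using statsmodels. The synthetic data and all variable names are assumptions for illustration; they are not the survey's actual dataset or variable labels.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data: variable names mirror the predictors described in
# the text but are illustrative assumptions, not the survey's actual labels.
rng = np.random.default_rng(42)
n = 2000
df = pd.DataFrame({
    "left_right_scale": rng.normal(0, 1, n),
    "lib_auth_scale": rng.normal(0, 1, n),
    "ethnicity": rng.choice(["Neither Black nor Asian", "Asian", "Black"], n),
    "has_digital_skills": rng.choice([0, 1], n),
    "income_band": rng.choice(["Less than £1,500", "More than £1,500"], n),
    "age_group": rng.choice(["18-34", "35-54", "55+"], n),
    "has_degree": rng.choice([0, 1], n),
})
# Outcome: 0/1 indicator of saying laws and regulations would increase comfort.
df["comfort_laws_regulation"] = rng.binomial(1, 0.72, n)

model = smf.logit(
    "comfort_laws_regulation ~ left_right_scale + lib_auth_scale"
    " + C(ethnicity) + C(has_digital_skills) + C(income_band)"
    " + C(age_group) + C(has_degree)",
    data=df,
).fit()

print(model.summary())       # log-odds coefficients, analogous to Table A.4
print(np.exp(model.params))  # odds ratios for easier interpretation
```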

In three out of four instances, this analysis indicates that the differences, though small, are statistically significant. Those on the right are less likely than those on the left to say that either government assurance or regulation would make them feel more comfortable about AI, while authoritarians are less likely than libertarians to say the same of regulation. Other characteristics, in particular having digital skills and a higher household income, appear to relate more strongly to preferences for regulation than does political orientation.

Conclusion 

In this report, we have investigated the relationship between political orientation and public perceptions of AI technologies and their regulation. As we expected, the findings reveal a significant correlation between political orientation and the perceived benefits of, and concerns about, a wide range of AI applications. Those with right-wing views are more positive than those with left-wing views about nearly all the uses of AI about which respondents were asked, a pattern which held true even when people's demographic characteristics were controlled for. The difference in attitudes between people with left-wing and right-wing views is most pronounced in the case of facial recognition for policing and the use of AI for assessing eligibility for welfare. The greater concern among those with more left-wing views may be occasioned by worries about how these technologies might undermine equity and fairness: we found that those with left-wing views are more likely to report worries about inaccuracy, discrimination and job losses.

Where people stand on the authoritarian-libertarian dimension is also associated with their attitudes to the uses of AI. Those holding authoritarian views are more positive than those with libertarian views about several applications. In particular, those with authoritarian views are more likely to perceive facial recognition technologies in policing as beneficial, suggesting they may also view AI surveillance technologies more broadly as beneficial. This is likely to reflect their preference for security and social order, with AI viewed as an instrument for advancing these objectives. Conversely, people with libertarian views express heightened concerns regarding the potential for discriminatory outcomes from facial recognition technology, an outlook that aligns with their emphasis on individual autonomy and rights. They are also more likely than people with authoritarian views to have concerns about possible discrimination by other AI applications, such as their use to predict cancer risk, provide mental health chatbots, and assess both welfare eligibility and the likelihood that someone will repay a loan.

Three of these last four applications (the exception is loan repayment) constitute the examples of the use of AI by the public sector covered by our survey. Our findings suggest that attitudes towards public sector applications, which impact people’s lives and liberty, may be more divisive between people of different political orientations than are applications of AI provided by private sector companies for consumers. Certainly, facial recognition in policing and the use of AI to determine welfare eligibility appear to be two particularly politically salient applications of AI, where there is much debate over fairness, accuracy and equity. In contrast, private sector consumer applications of AI, such as driverless cars (albeit universally regarded negatively) and LLMs (viewed positively), seem to be viewed in a similar fashion irrespective of people’s political orientation.

However, contrary to our expectations, we did not find a strong relationship between political orientation and preference for the regulation of AI. Irrespective of political orientation, we found that seven in 10 people feel laws and regulations would make them more comfortable with AI. And although support for regulation is somewhat lower among those who hold right-wing or authoritarian views, the difference is marginal. Instead, socio-economic factors such as income and digital skills appear to serve as more robust predictors of attitudes to AI regulation.  

These findings are important for three key reasons. First, as the UK government seeks to increase the use of AI, describing it as "a golden opportunity…an opportunity we are determined to seize" (UK Government, 2025), it will need to understand people's hopes and fears. Our findings offer an understanding of how different groups perceive the technology, and of their likelihood of adopting AI applications in the future. They give policymakers insight into how to encourage public acceptance of AI, and into which benefits to highlight if their message is to resonate with different constituencies. Our results show that people carry with them values and expectations, such as worries about discrimination, which differ across political ideologies.

Second, these findings reiterate the value of studying attitudes towards specific uses of AI technologies. Our data suggest that some applications of AI may be politically divisive – such as facial recognition in policing and the use of AI to determine welfare eligibility – while other uses of AI, such as cancer risk assessment, are met with similar levels of optimism or concern by those with different political orientations. Future research would benefit from working with the public to understand how attitudes towards specific uses of AI affect the considerations that need to be taken into account when deploying AI technologies.

Third, as the government considers options for regulating AI, it will be important to understand where people’s concerns lie, and how opposition to regulation might arise. Our findings show that the public want regulation around AI, and this desire appears to be largely independent of political orientation. As a minimum, it appears that there is public support for the government to deliver on its commitment in the AI Opportunities Action Plan (2025) to “funding regulators to scale up their AI capabilities”.

There are signs that, in the future, considerations like these will become more important in the UK political landscape. In both the US and Europe, AI has become politically salient. In the US, moves towards AI safety or AI regulation have become controversial and divide explicitly along political fault lines. In the European Union (EU), AI regulation has been implemented more comprehensively than anywhere else in the world, setting policymakers in direct confrontation with US firms and, potentially, the US administration. The UK has tried to follow a delicate path between these two extremes, but it seems likely that issues such as digital services taxes, the Online Safety Act and technology regulation more generally will become politically salient in the future. Meanwhile, the public is increasingly using commercial LLMs, which show considerable potential to reshape – and bring US influences to bear upon – specific policy areas. An understanding of the political make-up of the public with respect to the use of AI, AI adoption and AI regulation will become increasingly helpful to politicians as they attempt to navigate this important and politically contested field.

 

Acknowledgements 

The research reported here was undertaken as part of Public Voices in AI, a satellite project funded by Responsible AI UK and EPSRC (Grant number: EP/Y009800/1). Public Voices in AI was a collaboration between: the ESRC Digital Good Network @ the University of Sheffield, Elgon Social Research Limited, Ada Lovelace Institute, The Alan Turing Institute and University College London.

The authors would like to acknowledge Octavia Field Reid, Associate Director, Ada Lovelace Institute, for her work reviewing a draft of this report. 

 

References

Ada Lovelace Institute. (October 2023). What do the public think about AI? https://www.adalovelaceinstitute.org/evidence-review/what-do-the-public-think-about-ai/ 

Araujo, T., Brosius, A., Goldberg, A. C., Möller, J., & Vreese, C. de. (2023). Humans vs. AI: The Role of Trust, Political Attitudes, and Individual Characteristics on Perceptions About Automated Decision Making Across Europe. International Journal of Communication, 17(0) 6222-6249.

Buolamwini, J., & Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Proceedings of Machine Learning Research 18(01), 1-15.

Claudy, M. C., Parkinson, M., & Aquino, K. (2024). Why should innovators care about morality? Political ideology, moral foundations, and the acceptance of technological innovations. Technological Forecasting and Social Change, 203, 1-17. https://doi.org/10.1016/j.techfore.2024.123384 

Council of the European Union. (2023). ChatGPT in the Public Sector – Overhyped or Overlooked?

Curtice, J. (2024). One-dimensional or two-dimensional? The changing dividing lines of Britain’s electoral politics. British Social Attitudes: the 41st report. London: The National Centre for Social Research. https://natcen.ac.uk/publications/bsa-41-one-dimensional-or-two-dimensional

Fast, E., & Horvitz, E. (2017). Long-Term Trends in the Public Perception of Artificial Intelligence. Proceedings of the AAAI Conference on Artificial Intelligence, 31(1). https://doi.org/10.1609/aaai.v31i1.10635 

Gur, T., Hameiri, B., & Maaravi, Y. (2024). Political ideology shapes support for the use of AI in policy-making. Frontiers in Artificial Intelligence, 7, 1-9. https://doi.org/10.3389/frai.2024.1447171 

Hemesath, S., & Tepe, M. (2024). Multidimensional preference for technology risk regulation: The role of political beliefs, technology attitudes, and national innovation cultures. Regulation and Governance, 18, 1264-1283. https://doi.org/10.1111/rego.12578

König, P., Wurster, S., & Siewert, M. (2023). Sustainability challenges of artificial intelligence and citizens’ regulatory preferences. Government Information Quarterly, 40, 1-11. https://doi.org/10.1016/j.giq.2023.101863

Leslie, D. (2020). Understanding bias in facial recognition technologies: an explainer. The Alan Turing Institute. https://doi.org/10.5281/zenodo.4050457

Mack, E. A., Miller, S. R., Chang, C. H., Van Fossen, J. A., Cotten, S. R., Savolainen, P. T., & Mann, J. (2021). The politics of new driving technologies: Political ideology and autonomous vehicle adoption. Telematics and Informatics, 61, 101604 https://doi.org/10.1016/j.tele.2021.101604

Modhvadia, R., Sippy, T., Field Reid, O., & Margetts, H. (2025). How do people feel about AI? Ada Lovelace Institute and The Alan Turing Institute. https://attitudestoai.uk/

Neff, G. (2024). Can Democracy Survive AI? Sociologica, 18(3), 137-146. https://doi.org/10.6092/issn.1971-8853/21108 

O’Shaughnessy, M. R., Schiff, D. S., Varshney, L. R., Rozell, C. J., & Davenport, M. A. (2023). What governs attitudes toward artificial intelligence adoption and governance? Science and Public Policy, 50(2), 161–176. https://doi.org/10.1093/scipol/scac056

Prabhakaran, V., Mitchell, M., Gebru, T., & Gabriel, I. (2022). A Human Rights-Based Approach to Responsible AI (No. arXiv:2210.02667). arXiv. https://doi.org/10.48550/arXiv.2210.02667 

UK Government. (January 2025). AI Opportunities Action Plan. GOV.UK. https://www.gov.uk/government/publications/ai-opportunities-action-plan/ai-opportunities-action-plan 

UK Government. (March 2025). PM remarks on the fundamental reform of the British State. GOV.UK. https://www.gov.uk/government/speeches/pm-remarks-on-the-fundamental-reform-of-the-british-state-13-march-2025 

Wang, S. (2023). Factors related to user perceptions of artificial intelligence (AI)-based content moderation on social media. Computers in Human Behavior, 149, 107971. https://doi.org/10.1016/j.chb.2023.107971 

Wen, C.-H. R., & Chen, Y.-N. K. (2024). Understanding public perceptions of revolutionary technology: The role of political ideology, knowledge, and news consumption. Journal of Science Communication, 23(5), 1-18. https://doi.org/10.22323/2.23050207 

Yang, S., Krause, N. M., Bao, L., Calice, M. N., Newman, T. P., Scheufele, D. A., Xenos, M. A., & Brossard, D. (2023). In AI We Trust: The Interplay of Media Use, Political Ideology, and Trust in Shaping Emerging AI Attitudes. Journalism & Mass Communication Quarterly. https://doi.org/10.1177/10776990231190868

Yi, A., Goenka, S., & Pandelaere, M. (2024). Partisan Media Sentiment Toward Artificial Intelligence. Social Psychological and Personality Science, 15(6), 682–690. https://doi.org/10.1177/19485506231196817 
    
Zhang, M. (2023). Older people’s attitudes towards emerging technologies: A systematic literature review. Public Understanding of Science, 32(8), 948-968. https://doi.org/10.1177/09636625231171677 

 

Appendix

Table A.1. Net benefit scores across left-right spectrum scale: unweighted bases
  Left Right
AI use (N) (N)
Cancer risk 987 980
Facial recognition in policing 1,013 1,029
Large language models 846 814
Loan repayment risk 911 932
Robotic care assistants 908 894
Welfare eligibility 896 875
Mental health chatbot 851 807
Driverless cars 991 970

Table A.2. Net benefit scores across libertarian-authoritarian scale: unweighted bases
  Libertarian Authoritarian
AI use (N) (N)
Cancer risk 2,006 981
Facial recognition in policing 1,029 1,034
Large language models 909 779
Loan repayment risk 926 915
Robotic care assistants 918 896
Welfare eligibility 897 884
Mental health chatbot 873 823
Driverless cars 987 973

Table A.3. Linear regression of respondents’ net benefit scores
  Facial recognition for policing Welfare assessments Cancer diagnosis Loan assessments 
Left-right scale  0.18*** 0.32*** 0.08* 0.23***
  (0.03) (0.04) (0.03) (0.03)
Libertarian-authoritarian scale 0.52** 0.40*** -0.05 0.21***
  (0.03) (0.04) (0.03) (0.04)
Ethnicity (Neither Black nor Asian)         
Asian or Asian British  -0.39** 0.16 -0.22* 0.05
  (0.09) (0.12) (0.10) (0.11)
Black or Black British -0.36* -0.20 -0.16 -0.03
  (0.16) (0.21) (0.17) (0.19)
Whether the respondent has basic digital skills (no digital skills)        
Respondent has basic digital skills 0.31*** 0.06 0.03*** 0.28***
  (0.06) (0.08) (0.07) (0.07)
Monthly equivalised household income (Less than £1,500)        
Monthly equivalised household income is more than £1,500 0.24*** 0.35*** 0.27*** 0.16**
  (0.05) (0.07) (0.05) (0.06)
Age (aged 18-34)        
Aged 34-54 0.02 -0.19* -0.07 0.09
  (0.06) (0.08) (0.07) (0.07)
Aged 55+ 0.16** -0.13 0.18** 0.12
  (0.06) (0.08) (0.07) (0.07)
Education (does not have a degree)        
Has a degree -0.11* 0.12 0.07 0.05
  (0.05) (0.07) (0.05) (0.06)
Adjusted R squared 0.16 0.09 0.04 0.05
Unweighted base: 2,839 2,452 2,716 2,554
  Large language models Mental health chatbots Robotic care assistants Driverless cars
Left-right scale  0.11** 0.09* 0.09* 0.06
  (0.04) (0.04) (0.04) (0.04)
Libertarian-authoritarian scale 0.15*** 0.10* -0.02 -0.17***
  (0.04) (0.04) (0.04) (0.04)
Ethnicity (Neither Black nor Asian)         
Asian or Asian British  0.28* 0.38** 0.51*** 0.25
  (0.11) (0.14) (0.13) (0.13)
Black or Black British 0.69*** 0.47* 0.18 0.09
  (0.19) (0.23) (0.21) (0.22)
Whether the respondent has basic digital skills (no digital skills)        
Respondent has basic digital skills 0.30*** 0.02 0.45*** 0.20*
  (0.08) (0.09) (0.09) (0.09)
Monthly equivalised household income (Less than £1,500)        
Monthly equivalised household income is more than £1,500 0.17** 0.15* 0.23** 0.26***
  (0.06) (0.08) (0.07) (0.07)
Age (aged 18-34)        
Aged 34-54 0.02 -0.21* -0.05 0.16
  (0.07) (0.08) (0.08) (0.09)
Aged 55+ -0.24** -0.16 -0.17* -0.16
  (0.07) (0.09) (0.08) (0.08)
Education (does not have a degree)        
Has a degree -0.02 -0.10 0.27*** 0.24***
  (0.06) (0.07) (0.07) (0.07)
Adjusted R squared 0.04 0.01 0.05 0.04
Unweighted base: 2,310 2,315 2,505 2,717

*=significant at 95% level 
**=significant at 99% level 
***=significant at 99.9% level
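
For readers who wish to reproduce this kind of analysis on their own data, the net benefit models in Table A.3 are ordinary least squares regressions with the same set of predictors used in the logistic models. A minimal sketch for one outcome (facial recognition for policing) is shown below; the synthetic data and variable names are illustrative assumptions, not the survey's actual labels.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data; names are illustrative assumptions only. The outcome
# is a continuous net benefit score rather than a binary indicator, so an
# ordinary least squares model is used instead of a logit.
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "left_right_scale": rng.normal(0, 1, n),
    "lib_auth_scale": rng.normal(0, 1, n),
    "has_digital_skills": rng.choice([0, 1], n),
    "income_band": rng.choice(["Less than £1,500", "More than £1,500"], n),
    "age_group": rng.choice(["18-34", "35-54", "55+"], n),
    "has_degree": rng.choice([0, 1], n),
})
df["net_benefit_facial_recognition"] = (
    0.2 * df["left_right_scale"] + 0.5 * df["lib_auth_scale"] + rng.normal(0, 1, n)
)

ols_model = smf.ols(
    "net_benefit_facial_recognition ~ left_right_scale + lib_auth_scale"
    " + C(has_digital_skills) + C(income_band) + C(age_group) + C(has_degree)",
    data=df,
).fit()
print(ols_model.summary())  # coefficients, standard errors and adjusted R-squared
```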

Table A.4. Logistic regression of respondents’ preferences for regulation
  Assurance that the AI has been deemed acceptable by a government regulator Laws and regulation that prohibit certain uses of technologies, and guide the use of all AI technologies
Left-right scale  -0.10* -0.12*
  (0.05) (0.05)
Libertarian-authoritarian scale 0.01 -0.21***
  (0.05) (0.06)
Ethnicity (Neither Black nor Asian)     
Asian or Asian British  0.29 -0.19
  (0.15) (0.16)
Black or Black British -0.23 -0.02
  (0.26) (0.29)
Whether the respondent has basic digital skills (no digital skills)    
Respondent has basic digital skills 0.31** 0.54***
  (0.10) (0.10)
Monthly equivalised household income (Less than £1,500)    
Monthly equivalised household income is more than £1,500 0.50*** 0.52***
  (0.08) (0.09)
Age (aged 18-34)    
Aged 34-54 0.08 0.24*
  (0.10) (0.11)
Aged 55+ 0.30** 0.43***
  (0.10) (0.11)
Education (does not have a degree)    
Has a degree 0.29*** 0.22*
  (0.08) (0.09)
Unweighted base: 2,979 2,979

*=significant at 95% level 
**=significant at 99% level 
***=significant at 99.9% level

 

Publication details

Clery, E., Curtice, J. and Jessop, C. (eds.) (2025)
British Social Attitudes: The 42nd Report.   
London: National Centre for Social Research  

© National Centre for Social Research 2025

First published 2025

You may print out, download and save this publication for your non-commercial use. Otherwise, and apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act, 1988, this publication may be reproduced, stored or transmitted in any form, or by any means, only with the prior permission in writing of the publishers, or in the case of reprographic reproduction, in accordance with the terms of licences issued by the Copyright Licensing Agency. Enquiries concerning reproduction outside those terms should be sent to the National Centre for Social Research.

National Centre for Social Research 
35 Northampton Square 
London  
EC1V 0AX  
info@natcen.ac.uk 
natcen.ac.uk


