
Ensuring Boston Ballet Stays Relevant


BRIAN KENNY: Welcome to Cold Call, the podcast where we discuss real-world business challenges through the lens of Harvard Business School case studies. In this episode, we'll peel back the curtain on the Boston Ballet, one of America's most renowned cultural institutions. Ballet is an art form dating back hundreds of years. It's hard to overstate the significance of ballet in European culture and history, yet with such a legacy comes the challenge of innovation.

Our focus today is on the journey and impact of Ming Min Hui, who broke new ground as a young Asian-American woman appointed executive director of the Ballet during its 60th season. It's a case that explores the intersection of for-profit and nonprofit management, the evolution of ballet as an institution, and the difficult choices leaders must make to keep art both beautiful and relevant. Today on Cold Call, we welcome Professor Edward Chang and the case protagonist, Ming Min Hui, to discuss the case, "Ming Min Hui at the Boston Ballet." I'm your host Brian Kenny, and you're listening to Cold Call on the HBR Podcast Network.

Edward Chang teaches in the Negotiations Unit at Harvard Business School. He is the case author. And Ming Min Hui, as I just mentioned, is the protagonist in today’s case. Welcome both of you to Cold Call.

EDWARD CHANG: Thank you so much for having us.

MING MIN HUI: Thank you. I’m glad to be here.

BRIAN KENNY: It’s great to have you here. This case was really interesting. I am a lifelong Bostonian. I’ve been to the Ballet many times. In fact, small world story, I actually sang in the boys’ choir at the Nutcracker when I was a young boy. Arthur Fiedler conducted that, so I’ve seen backstage at the Boston Ballet.

MING MIN HUI: Yes, that’s great history.

BRIAN KENNY: And it’s not all pretty. I want people to know that backstage at a ballet, there’s a lot of stuff going on, it’s very busy, but the case is really interesting, so let’s just dive right in. Edward, I’m going to start with you. I want to know sort of what initially drew you to the Boston Ballet as the focus of this particular case study, and what made Ming’s leadership story so compelling to you?

EDWARD CHANG: Yeah. There are a lot of different aspects of the case that I think are compelling that drew me to want to write a story both about Ming, as well as the ballet as it was coming out of both the COVID-19 pandemic as well as the racial reckoning of the summer of 2020. And I think those dual crises were really one of the focal points to highlight. I think one of the things that’s particularly interesting about the Boston Ballet is, as you mentioned in the introduction, there’s this very long history of ballet that it’s also kind of situated into. And so in some ways, these crises are even sharpened relative to maybe what other organizations are facing or other businesses are facing, where yes, they’re still thinking about, how can they commit to racial equity in this time of societal turbulence, but maybe they don’t have the same kind of history or tradition that the ballet is also balancing.

And then if we think about Ming as a case protagonist, I mean, just a really incredible backstory and life story, as well as kind of being this trailblazer of being a woman of color in this lead position in the arts world, which is quite unusual. And so, I think it’s also an interesting kind of parallel thinking about how the Boston Ballet as an organization has to evolve and develop, as well as Ming’s own personal leadership journey to get to the point of where she is today.

BRIAN KENNY: Yeah. Do you have a cold call that you like to start the class with?

EDWARD CHANG: I mean, like any good cold call, you always think about, what are the primary decisions that the case protagonist has to face? And the case ends with Ming kind of contemplating, how is she going to lead the ballet in this post-pandemic, post-racial-reckoning moment? And so, the cold call that I like to start with is just, what do you think are the top priorities for Ming to think about as she's navigating the situation?

BRIAN KENNY: Yeah, that’s a good one. So Ming, let me turn to you for a second. You’re a graduate. I didn’t mention that when I introduced you, but great to have you back on campus. And the case notes about the fact that you really didn’t fit the traditional profile. Edward just referenced that as well. How did your background factor into shaping your leadership style as you thought about this opportunity?

MING MIN HUI: Yeah, that’s true. I think typically, arts leaders, particularly in the ballet world, but I think this is true to generalize broadly, often there is a requirement that you have been an artist and have a very deep understanding of the art form that the institution, the organization, is serving. And so in that sense, I am a little bit of a different kind of breed of what an arts manager, arts leader looks like. I did ballet extremely seriously as a kid. It was by far my most time-consuming extracurricular in high school, and my mother had to negotiate with me so I could be in “The Nutcracker” if my grades didn’t slip, so that was sort of the condition of my childhood. But I never became professional. I never had aspirations to be professional. I didn’t grow up in kind of the ranks of a ballet company, which is often kind of a prerequisite for what it means to lead the company.

The structure of the ballet world, though, often has an executive and an artistic director, so the artistic director then ends up really carrying these artistic expertise requirements, and then the executive director is more responsible for the business side. So, at least that window of opportunity existed for someone with a more business-oriented background to come into the role of the executive director spot. But it’s just not as common for someone to be in that role and not have that background at all, and to instead have been a banker, investment banker, an MBA. That was my training ground.

But I think that actually that combined experience of having loved ballet as a child and then subsequently really spending the early part of my career focused on building up my business toolkit means that I can be a very unique type of partner to the artistic director. It’s almost more clear in a sense that we have our own lanes and our own areas, kind of domain areas, of expertise. And it means that while I have deep appreciation for the art form, I’m constantly learning about it too, and developing even kind of newfound later in life appreciation. The thing that I know that I bring to bear is a much more clear-eyed and critical point of view when it comes to executive communication and data and data-driven decision-making, and the things that Harvard management skillsets bring to bear in any career, in any kind of organization, in any sector. So, I think that blend has been really important and critical for the kind of arts leader and manager I’ve become.

BRIAN KENNY: Yeah, we actually see this phenomenon in other industries too. If you think about health care, where people rise through the medical ranks as a surgeon or as a physician of some sort, and they get moved into an administrative role, and the challenge that they face from not really having the business acumen. I’ve seen that play out in different ways.

Edward, the case really paints a picture of the changing landscape post-pandemic. It’s hard to believe the pandemic was only a few years ago. It feels like a long time ago, but at the same time, every organization came out of that facing a certain set of challenges. What were the challenges that you identified for the Boston Ballet coming out of the pandemic?

EDWARD CHANG: Yeah, and Ming, please feel free to interrupt. You were living this.

BRIAN KENNY: Yeah, you lived it.

EDWARD CHANG: One of the big challenges is that when you think about what is the main product of the Boston Ballet, it’s putting on performances. It’s creating art and putting on performances, and with the pandemic, that was no longer possible. And so, you have an organization where over 50% of the revenue, roughly, is coming from things like ticket sales, or the school that the Boston Ballet runs, and none of those sources of revenue exist any more, kind of overnight. And so one question is, what do you do in that situation? How do you just navigate the fact that, okay, you’re cutting off a giant revenue source, but you still have all these fixed costs, you have all these dancers you’ve hired, the orchestra, the rent that you’re paying on buildings and things like that. How do you just move forward?

But then even as we were coming out of the pandemic and performances were able to resume, things were changing, society was changing. There’s questions about how are people spending their money, both in terms of the audience members, how are audiences consuming the arts? And I think that there’s potentially interesting changes where during the pandemic, in this major time of global uncertainty, people maybe wanted more nostalgia. Maybe they wanted things that made them feel good. And so, a shift in consumer preferences or audience preferences towards things that are maybe more classic, like The Nutcracker or Sleeping Beauty, as opposed to more experimental.

But then when you also think about other kinds of audiences of the Boston Ballet, you think about donors, people who contribute to the arts. And coming out of both the pandemic as well as the racial reckoning of the summer of 2020, there is perhaps a desire for people who traditionally have given lots of money to different organizations to have different sorts of impact with their money, and to think about, how does the Boston Ballet shape itself as an organization that will help these philanthropists have the sort of impact that they want to have? There are kind of these giant underlying monumental forces pushing audiences in different directions, and the Ballet and Ming really need to figure out, how do you address these changes?

BRIAN KENNY: Yeah, that’s a big set of challenges right there, and I definitely want to come back to the one about program choice. How do you show that you are advancing with the times, but you’re still respecting the sort of comfort food of the art form, the classics? But before we do that, Ming, if you can just take us back to probably March of 2020. You’re about ready to launch a new season. You’ve got exciting things that lie ahead, and the pandemic hits. What was the scene like? How did you make those decisions, and what things did you factor in?

MING MIN HUI: Yeah, the date is March 18, 2020, which is how you know it was significant. I was CFO at the time, so I worked for our last executive director, Max Hodges, also an HBS alum. She's 2010, I'm 2015. And Max was actually coming back from maternity leave, and we were about to open a ballet called Carmen. It had done quite well at the box office. I think Carmen is one of those programs that has a lot more name recognition, so everyone was very excited about this. It was a more contemporary version of Carmen, and we had been watching the public health indicators for a few weeks with increasing worry that this program maybe wasn't going to see the light of day.

And so, I remember that Max came back from maternity leave on a Wednesday, and the show was going to open the next day, on a Thursday, and she said, "From a public safety standpoint, we probably can't do this." And then I said, "Unfortunately, as the CFO, I've also run the numbers on what our balance sheet and deferred revenue commitments look like, and we're in a tough liquidity position if we have to suddenly refund not just this program, but the rest of the spring season." And so now begins the trickle-down effect: these organizations, nonprofits, arts organizations, are so mission-driven that we often run on incredibly thin margins, incredibly thin working capital lines. And so, the balancing of the needs of different constituents, and then our own kind of existential threat within all of this, was very apparent in that moment.

And obviously what ended up happening afterwards is we made that call. We canceled opening night, so the piece got rehearsed, but then everyone went home. And what we thought was going to be two weeks obviously quickly became a lot more than two weeks of being at home, figuring out all kinds of permutations to produce art remotely and in hybrid forms and in pod forms and so forth. And we also worked incredibly closely with audiences, subscribers, and donors to offer credits and avoid refunds, to donate back ticket values, and then to raise incremental philanthropic funds to get us through this period.

BRIAN KENNY: Yeah, and you know, out of every bad situation can come some good things, so you found maybe new ways to connect with your audience using technology in ways that you hadn't before. I know that at HBS, we've actually done episodes on Cold Call about the switch that we had to make literally within a week to be able to go from teaching in the classroom to teaching online, and every other organization did some variation of that. Edward, let me come back to you. The case does a great job of talking about how the Boston Ballet has to sort of balance the themes of personal identity and institutional identity. This is, as I said, a classic institution in Boston, but it also has to demonstrate change. How do you think that came out and manifested itself in the case?

EDWARD CHANG: When you think about an art form like ballet that has this extremely rich tradition, there is a desire to preserve a lot of that tradition, preserve a lot of the history, preserve the canon. But at the same time, when we have a backwards-looking perspective, we can see that there were parts of this history that maybe were exclusionary, or that there are pieces that in their original choreographies perhaps perpetuated racist stereotypes or tropes. And there’s a question of, how do you stay relevant in the current world? How do you stay relevant in a world where norms in society change, expectations change?

And so, I think that one of the challenges for an organization like the Boston Ballet and for Ming as executive director is to think about, how do you strike the right balance between maintaining tradition, maintaining an art form, while also remaining relevant, while also continuing to innovate? Especially when you consider that the audience for ballet looks very different now than it did 50 years ago, and it's going to look very different 50 years from now. And if the Boston Ballet wants to be an organization that's going to stay relevant, that's going to last another 60 years, what are the steps that need to be taken today in order to ensure that relevance moving forward?

BRIAN KENNY: Yeah, so how did you think about that, Ming, as you came in? You’ve held multiple roles within the organization, maybe coming in a little more in your comfort zone and as chief of staff or CFO-type roles, and then moving into the executive director role, where you had to take a lot more care and concern about the creative side of the house. How did you think about that and how did those roles maybe prepare you for the challenges of stepping into the executive director role, particularly at a moment of crisis?

MING MIN HUI: Yeah. Well, I’m marveling at what a good student of this organization Edward has been, because that is a very good descriptor of the ongoing tensions that I think exist not just for Boston Ballet, but for the industry writ large. And I’ll preface, too, just to say that the work that I think the organization has done is certainly not driven solely by me or carried solely by me. Now, I just sort of carry a different role within the work, given the position and given a certain dimension of the personal significance, I think, that I carry now as someone who is from an underrepresented racial ethnic background, gender background, for the role that I’m in. And so, that carries kind of with it a different visibility around the work.

But I think Boston Ballet has in many ways been a leader in how you navigate these tensions, because we've been careful from the get-go not to be too reductionist with any of the approaches to these hard questions, where everything requires a real sense of open-mindedness, of nuance, and an appreciation that the conversation is ongoing and very contextualized. And so, when it comes to things like preserving the canon, how do you navigate the challenges of works that were made during a time when people didn't necessarily know what they were depicting, and how that reads years later, when it becomes a very inaccurate representation or depiction of a certain group of people?

What I see in front of us is that there are often several different tough ways to approach these things. One is to choose to move ahead and just ensure that the work is surrounded with education, with context, so it's part of a conversation. But that's not a great solution for all cases, because at the end of the day you might be perpetuating a stereotype on stage or among an audience that causes more harm than anything you can put around it. So there's also an approach around what it means to preserve some great dancing, some of the great classical techniques that these ballets represent, but perhaps strip away some of the narrative components that might be challenging or problematic.

And you know, "La Bayadère," for example, is an interesting example of this. There's a certain act within "La Bayadère" that is very famous. It's called "The Kingdom of the Shades," and it's really devoid of any of the reductionist Asian stereotyping that pervades a lot of the rest of the ballet. And so, for example, we've excerpted this act and performed it in isolation from the rest of the main ballet.

And then there’s a third possibility, and it’s often very investment-intensive, that you can just remake it entirely. Put a new narrative, put new costuming, put new reframing on an existing ballet, and then reimagine it in a sense and kind of make it a living art form through that mechanism. But that often requires resources. It requires risk, too, in terms of how it’s going to land with new audiences, so that’s also a factor that doesn’t have … It’s not without its challenges.

BRIAN KENNY: Yeah, and you’re never going to make everybody happy. I think we’ve all learned that in different ways over the last few years, is that some of these things are pretty polarizing. You probably have a board that you have to deal with. You have an audience that’s been with the ballet for a long time and considers themselves almost like part owners of the product, and so that creates a whole different set of challenges for you.

Edward, I’m wondering, as you dug into looking at the Boston Ballet, did you note any significant differences between the sort of nonprofit management approach and the for-profit management approach? And what do you think, if we just pull the lens back even more broadly, what do you think business leaders can sort of take away from that?

EDWARD CHANG: One of the things that I really admired from talking to Ming and doing the interviews for this case is, I think the organizations that have been most successful in navigating these moments where you’re not going to be able to make everyone happy are those where they really deep down have thought deeply about, what are the guiding values or principles that are guiding these decisions? And I think for any of these decisions, if you can really boil down to, what are the underlying values or principles that you’re using to make the decisions? I think, even when people disagree with you, even if people disagree with the decision that you end up making or the outcome that comes to be, if they understand what are the underlying values and principles, it’s much more palatable, that they can at least respect that you came at it from a principled place, from a principled decision.

I think where organizations often stray, especially on these potentially controversial societal topics, is that they don't really have a clear sense of what their values are, what their priorities are, or what the guiding principles are for making these decisions. And then when they make decisions, they can come across as inauthentic or inconsistent, and that is where I think a lot of the backlash comes from: you're really not making anyone happy in those situations. And so, I think one thing that Ming and the Boston Ballet have done well is that when they're thinking about things like, how do they balance preserving tradition, preserving art, preserving the canon, with how do we ensure that we're staying relevant, how do we ensure that we're not perpetuating harm in society, they've really thought about, what are the key components? What are those guiding values or principles guiding these artistic and business decisions?

And when you think about that, on the question of what the for-profit world can learn from nonprofits, I think in a lot of the research I've done, organizations are probably more similar than we think. When you think about how you guide an organization, help it navigate, or lead it in these times of crisis, a lot of it comes back to focusing on the core values and the core principles. That is relevant not just for deciding which ballet to stage, but for things like, when you're going through layoffs or when you're downsizing, how do you communicate that? What are the underlying central ideas or tenets you're going to use to communicate to your employees, to communicate to your shareholders, in ways that are going to be authentic? Those, I think, are lessons that any organization, that any leader, can take.

But of course, there are differences between the nonprofit and the for-profit world in that in the nonprofit world, there is this much more explicit fact that there is a mission besides just profit maximization, right? That as a nonprofit organization, the Boston Ballet has this social mission to create art. And I think that that is one of the interesting things about it being a leader in many ways, and where for a for-profit organization, an overly simplistic perspective is that you're just about trying to maximize shareholder value. I should caveat that, because the rest of the LCA teaching group will admonish me if I say that; there is no legal requirement to do that in the US.

BRIAN KENNY: LCA, by the way, being Leadership and Corporate Accountability. It’s a course that we teach here to all of our first-year students.

EDWARD CHANG: Yeah, but an overly simplistic perspective on capitalism is that as a leader for a for-profit company, you’re just trying to maximize shareholder value. And I think what’s really interesting in the nonprofit space is that you do have to still think about the finances, you still have to think about the management perspective, but you also have to think about mission fulfillment.

BRIAN KENNY: Of course.

EDWARD CHANG: And so in a way, in the question of leading a nonprofit organization, there is a much more explicit tension, because it's part of your mission. The organization has to exist as an ongoing concern, but you also have this social mission to fulfill, whereas in the for-profit space, maybe a lot of organizations nowadays are thinking more actively about that social mission, but there's at least not necessarily that explicit charter in the US context.

BRIAN KENNY: Yeah, yeah. What is the mission of the Boston Ballet? I didn’t ask you that before, but I’m curious now.

MING MIN HUI: Yeah. I mean, there’s a very long mission statement somewhere on our website, but the way I like to capture it is that it really is to make dance for everybody. And so to Edward’s excellent point, I think that’s been a lot of the driving force behind how we think about what we choose to do and why is the access mission is a central focus, then, of everything we do. And if the dances that we’re putting on stage, the ballets we’re putting on stage, or are expressing through our educational commitments, if there are populations that are not feeling a sense of belonging or are not feeling like they can see themselves as part of this, we have, on some level, failed some amount of our mission, then.

And so, exactly to Edward's point, there is maybe a symbiosis here, where financial sustainability is actually in lockstep with the access mission, because of this question of existential relevance and audience development. Of course, that is something we should be treating as practical, doing right and doing good, and it is not at all irrelevant to the question of what it means to also just be doing well financially and sustainably. But you can see how core it is for us to be grappling with these questions, just because it is in the DNA of what the programs are supposed to be doing.

BRIAN KENNY: Yeah, and you gave some examples earlier about the way that you’ve adjusted some of the programming. I’m wondering, this is a two-part question, what are some of the other changes maybe you made to the business process side or to the structure of the organization to support this view of making dance accessible to everyone? And is this a sector-wide trend? Is this happening across not just dance organizations, but other types of arts organizations?

MING MIN HUI: Yeah. I mean, I think this question of accessibility for particularly classical art forms that have kind of a more Eurocentric history, it’s actually been really a conversation point long before the George Floyd kind of racial reckoning of 2020. I think that series of events just accelerated what was already an underlying series of conversations in that sector, acknowledging that if we are not grappling with these questions, the irrelevance problem becomes increasingly real.

BRIAN KENNY: Sure.

MING MIN HUI: And so it’s still, I think, true to this day that ballet and opera and symphony, a lot of these classical Eurocentric art forms, do suffer from a broad perception of being elitist or of being very white. And how much of that is based in truth versus generalized mindset is, I think, something that’s just playing in progress right now as it stands.

So at Boston Ballet, you're right to indicate that we think a lot about what we're producing programmatically and how these themes show up in what's deeply core to the art-making, but it certainly also shows up in organizational practice writ large, in ways that I think apply beyond ballet, beyond the dance sector and the art world, into organizations and companies regardless of tax status. And so we've investigated how to approach hiring and recruitment processes differently, in an effort to make our staff and board make-up more diverse. And those are best practices drawn not at all from a nonprofit-specific context; they're best practices drawn from a much broader organizational context and from thinking about what creates bias in these processes. That's another example of the ways this shows up well beyond the stage or the studios.

BRIAN KENNY: Another tension that we alluded to earlier, and I’ll come back to you on this, Edward, is this tension of heritage or legacy versus forward-thinking and innovative, and sometimes those two things can chafe against each other. We grappled with this a little bit. Harvard Business School’s been around for a long time, but we are certainly very innovative. We know it. We know that we innovate here, but the brand maybe doesn’t show it in the way that we need it to all the time. How should organizations think about grappling with that tension?

EDWARD CHANG: I mean, I think it goes back to one of the things I was saying earlier about what I think the Boston Ballet does really well is to focus on the underlying values or principles. And that I think that tradition for the sake of tradition probably doesn’t serve anyone, and innovation for the sake of changing things also doesn’t serve anyone.

BRIAN KENNY: Yeah, good point.

EDWARD CHANG: And so, it’s really about thinking, “What is important about a tradition? What is important about a history that we’re trying to preserve, and why does that matter for our mission?” And the same thing for when we’re thinking about innovation, what is the purpose of the innovation? How is this going to serve our broader mission, serve our broader values for an organization? And I think when you frame it not so much in maybe a juxtaposition between tradition and innovation, but really thinking about, what are the actions that an organization can take that’s going to help drive the mission forward, that’s going to align with those values and principles, that hopefully a lot of those things that at least on the surface maybe appear to be intentioned, hopefully, that some of those tensions go away by focusing deeper down on the core values.

BRIAN KENNY: Yeah. Does that ring true to you, Ming?

MING MIN HUI: It does. I mean, the example that comes to mind for me, I think that we are really proud of a lot of the innovation that we do in the art-making and through new commissions of works by voices that are alive and well today, up-and-coming artists, choreographers. And so, certainly a lot of that is kind of pure innovation in the sense of, what is the movement vocabulary? What is the dance and artistic product that people are seeing on stage? Does it challenge a lot of conceptions of what you think ballet is? So, that’s sort of the more obvious programmatic way that we deliver on the innovation logic. But then one of the examples that came to mind in accordance with what Edward’s talking about, things like The Nutcracker are sort of this preserving of tradition, preserving of nostalgia.

BRIAN KENNY: Yeah. You can’t mess with The Nutcracker.

MING MIN HUI: Really can’t mess with that. The Tchaikovsky score is preserved in a very intentional way because it’s just such a perfectly architected piece in so many ways. But for example, this past Christmas season, holiday season, we introduced a new Nutcracker head for one of our dancers who’s black and was in the role of The Nutcracker Prince Cavalier. This head was modified so that the, I think, original Nutcracker head is sort of this very pale skin with ruddy cheeks and blue eyes, and so that we updated and had an alternate head that he could wear where the skin tone is darker. You have olive eyes. It’s just sort of more representative of his underlying true racial expression. And so this dancer, Danny Durrett, he is one of the few Black men, I think, who’ve gotten to play this role, to dance this role for a major ballet company. And by doing this, in deep conversation with him, because you want to be very respectful to whoever it is who actually has to inhabit the role in the space.

BRIAN KENNY: Sure. Yeah.

MING MIN HUI: It was incredibly meaningful for him to feel like the character, the dance role, kind of belonged to him in a way that historically, it maybe hasn’t. It opens a really complex can of worms around The Nutcracker doll and the questions of representation. And also, there’s now a really complex way to think about all the different possible racial expressions that you need to accommodate within this kind of modification. But at least we’ve started that conversation, and that’s, in some ways, a method of innovating, right?

BRIAN KENNY: Yeah, I love that.

MING MIN HUI: It’s within the classics. Yeah.

BRIAN KENNY: And back to what we were saying earlier, you can’t make everybody happy, but you’re going to have to make some people unhappy sometimes in order to make progress. This has been a great conversation. I was really looking forward to this one. I’ve got one question left for each of you, so I’ll start with you, Ming. By the way, our mission at Harvard Business School is to educate leaders who make a difference in the world. That’s a very simple statement that has a lot that backs it up, and you’re a great example of that. I’m wondering what advice you would give to other young people from underrepresented backgrounds like yourself who are aspiring to be in leadership roles in legacy or traditional institutions that maybe haven’t been as open to that in the past.

MING MIN HUI: I’ve come back to campus quite a lot, because I am quite passionate about making sure that people see that there are alternate examples of what this leadership can even just look like. And for anyone who then looks more like me, they might feel a little more inspired to believe that this path is possible. And in those conversations, I’m open to admitting that feeling a certain degree of imposter syndrome, I think that that’s maybe an overplayed concept, but it’s not that unusual. And on some level, I think that the feelings like that sometimes tell you something about what challenges you’ve accepted, and what work you’re doing that maybe goes even beyond yourself, and that it’s okay to lean into some of the discomfort that that might yield, and to appreciate that that work is really important. The advice that was given to me that I found helpful is, are you really smart enough that you’ve managed to dupe everybody else into thinking that you are not deserving? And isn’t that quite presumptuous of you to think that you have outsmarted everybody else? I just find it such a fun kind of inversion of the self-doubt that might come from not seeing yourselves in kind of the hero role or the leader role. And so, that’s something that I share in case it’s helpful to anybody else.

BRIAN KENNY: Yeah, and another piece of advice that I got along the same lines a long time ago was that if you’re not a little nervous and you don’t have a little bit of that feeling, you probably haven’t challenged yourself enough.

MING MIN HUI: Yeah, exactly.

BRIAN KENNY: So, that’s not a bad feeling to have. Edward, last question goes to you. We always ask our case authors, if there’s one thing you’d like people to remember about the “MING MIN HUI” case, what would it be?

EDWARD CHANG: I hope that one of the things that's emerged from this conversation, hearing Ming talk about how the Ballet is thinking about staying relevant and also addressing issues around equity, is an understanding that these issues are much more interrelated than they might seem at first blush. And for all sorts of organizations, not just the Boston Ballet, I think the ones that are going to remain relevant for another 50, 100 years are those where questions around how you address things like diversity, equity, and inclusion are not an on-the-side thing that's nice to do when you have a little bit of extra spare time.

But when you think about something like, for the Boston Ballet, they need to think about how they stay relevant for an audience that's changing. It's changing both in terms of its tastes, and it's also changing demographically. And if an organization like the Boston Ballet thinks, "Oh, the way we're going to do this is by hiring the exact same sort of people who all look alike, who all have the same background, and that's going to help us produce products that are going to be innovative or relevant," I think that's probably not a winning strategy. And in many ways, when you think about how you stay relevant to audiences, how you stay relevant in terms of creating new products, actually having, in this case, dancers, choreographers, and staff who better reflect the future audience might be a better strategy for a business in terms of staying relevant.

Or even if you think about, how do you actually recruit an employee base that's going to help accomplish that? If you are an organization that is not invested in diversity, you are essentially, perhaps, cutting off a large portion of the talent pool, because it's much harder to recruit people from underrepresented backgrounds if your organization is very homogenous in the first place. And having failed to invest in that early on is something that's going to make it much harder to diversify later on in an organization's life.

By investing in talent, by creating an environment where people feel like they belong, where people feel like they can bring their whole selves, where people from all sorts of backgrounds feel like they can be accepted, you can hopefully build a diverse organization that's going to help you generate the ideas, make the right decisions, and create the products that are going to help the organization stay relevant.

BRIAN KENNY: Ming, Edward, thank you so much for joining me on Cold Call.

EDWARD CHANG: Great.

MING MIN HUI: Thank you so much.

BRIAN KENNY: If you enjoy Cold Call, you might like our other podcasts, Climate Rising, Coaching Real Leaders, IdeaCast, Managing the Future of Work, Skydeck, Think Big, Buy Small, and Women at Work. Find them wherever you get your podcasts. If you have any suggestions or just want to say hello, we want to hear from you, email us at coldcall@hbs.edu. Thanks again for joining us, I’m your host Brian Kenny, and you’ve been listening to Cold Call, an official podcast of Harvard Business School and part of the HBR Podcast Network.




Apple Reportedly Loses Key AI Mind


Apple has kept a low profile in the artificial intelligence arms race. But now, a major talent loss is raising fresh questions about whether the iPhone maker is falling behind.

According to Bloomberg, Meta has hired Ruoming Pang, a high-level engineer who led Apple’s foundation models team. Pang, a former Google veteran and key architect behind the large language models (LLMs) powering Apple Intelligence, will now join Meta’s elite AI unit focused on building superintelligent systems.

His exit is a significant blow for Apple, especially at a time when the company is trying to convince the public and developers that it’s serious about generative AI. He was in charge of the team of roughly 100 engineers building the foundational technology behind Apple Intelligence, the suite of AI features recently announced at the company’s WWDC event.

On his LinkedIn page, Pang described his role as leading the team that develops the foundation models that power Apple Intelligence. Think of foundation models as the base engine for AI. These massive, complex models, also known as Large Language Models (LLMs), are trained on vast amounts of data and can be adapted to perform a wide range of tasks, from summarizing your emails to generating images. Pang’s team was responsible for every aspect of these models, from the training framework (AXLearn) and inference optimization (making the AI run efficiently on your device) to its multi-modal capabilities (the ability to understand both text and images).

Just last month, Pang celebrated his team’s work in a LinkedIn post following Apple’s developer conference. “At WWDC we introduce a new generation of LLMs developed to enhance the Apple Intelligence features,” he wrote. “I’m very excited about the progress we have made since last year and would like to take this opportunity to thank our team and collaborators. It has been a true privilege to work with you all!”

Pang joined Apple in 2021 after a 15-year career at Google. His departure now raises serious questions about Apple's ability to retain top talent as it tries to play catch-up in the AI arms race.

Meanwhile, Mark Zuckerberg and Meta are not just participating in the talent war; they are its most aggressive combatants. In a relentless push to build what he calls Artificial General Intelligence (AGI), or “superintelligence”—AI systems that can reason and think at or above human levels—Zuckerberg has been personally courting top researchers from across the industry. Meta is reportedly offering multi-million-dollar compensation packages to lure talent from rivals, particularly OpenAI.

This “hiring spree” has seen Meta assemble a dream team of AI pioneers, including former GitHub CEO Nat Friedman and Scale AI CEO Alexandr Wang. By convincing Pang to leave Apple, Zuckerberg has shown that no company is safe from his talent raid.

The move comes at a vulnerable moment for Apple. The company has faced internal debate over whether to rely on its in-house models or strike deeper partnerships with third parties like OpenAI for future versions of Siri. This uncertainty has reportedly impacted morale within the AI division, and Pang’s exit could trigger a wider exodus of talent.

For Apple, losing the mind behind its core AI models is a critical setback. For Meta, it’s another high-profile victory in its audacious, high-stakes quest to dominate the next era of computing.

Apple did not immediately respond to a request for comment.




Wavelink signs distribution agreement with Cloudian to support growing demand for artificial intelligence-ready, cloud-native storage solutions


COMPANY NEWS: Wavelink, an Infinigate Group company and leader in technology distribution, services, and business development in Australia and New Zealand, has signed a distribution agreement with Cloudian, a global leader in S3-compatible file and object storage. Under the agreement, Wavelink will distribute Cloudian’s portfolio throughout Australia, New Zealand, and Oceania.

Cloudian’s artificial intelligence (AI) ready data platform, HyperStore, delivers highly scalable, S3-compatible object storage that integrates seamlessly across on-premises, private, and public cloud environments. Its modular architecture and pay-as-you-grow model make it ideally suited for organisations looking to move workloads from hyperscale clouds to local infrastructure, often to reduce latency, improve cost predictability, or regain data control. With exabyte scalability, full S3 application programming interface (API) compatibility, multi-tenancy, and military-grade security, HyperStore is a robust solution for AI workloads that demand secure access to large volumes of data.
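
To illustrate what S3 API compatibility means in practice, here is a minimal sketch that points the standard AWS SDK for Python at an S3-compatible object store such as a HyperStore cluster. The endpoint URL, bucket name, and credentials are placeholders for illustration only, not values supplied by Cloudian or Wavelink.

```python
# Minimal sketch: using the standard AWS SDK for Python against an S3-compatible
# object store. Endpoint, credentials, and bucket names are hypothetical placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objectstore.example.internal",  # hypothetical on-premises endpoint
    aws_access_key_id="EXAMPLE_ACCESS_KEY",
    aws_secret_access_key="EXAMPLE_SECRET_KEY",
)

# Create a bucket and upload a small object, exactly as one would against AWS S3.
s3.create_bucket(Bucket="ai-training-data")
s3.put_object(
    Bucket="ai-training-data",
    Key="samples/batch-0001.jsonl",
    Body=b'{"text": "example record"}',
)

# List what was stored.
for obj in s3.list_objects_v2(Bucket="ai-training-data").get("Contents", []):
    print(obj["Key"], obj["Size"])
```

Because only the endpoint differs, existing S3-based AI data pipelines can, in principle, be repointed at local infrastructure without code changes beyond configuration.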

Ilan Rubin, chief executive officer, Wavelink, said, “Cloudian is a great fit for Wavelink’s channel partners, from managed service providers to resellers specialising in cloud, infrastructure, and security. Wavelink is excited to support Cloudian’s growth across the region, and its market leadership, flexible commercial model, and compatibility with a wide range of use cases make Cloudian an ideal addition to Wavelink’s portfolio.”

The partnership further strengthens Wavelink’s ability to support partners across all stages of the cloud journey, from public cloud optimisation and hybrid cloud strategies to on-premises deployment for AI model training and inferencing. Coupling Cloudian’s cost-effective scalability with Wavelink’s channel development services provides a solid foundation for meeting growing regional demand for secure, AI-ready storage platforms.

James Wright, managing director, Asia Pacific and Japan, Cloudian, said, “Cloudian is excited to partner with Wavelink to expand its reach across Australia, New Zealand, and Oceania, and in particular, to bring the HyperStore platform to more organisations. Whether customers are looking to contain public cloud costs, bring data closer to compute, or accelerate their AI initiatives, Cloudian’s modern architecture is built to deliver.”

As part of the agreement, Wavelink will provide partner enablement programs, technical training, and go-to-market initiatives tailored to industries embracing AI and hybrid data strategies.

About Cloudian
Cloudian is the most widely deployed independent provider of object storage. With a native S3 API, we bring the scalability, flexibility, and management efficiency of public cloud storage into your data centre while providing ransomware protection and reducing total cost of ownership by 60 per cent or more compared to traditional storage area network (SAN)/network attached storage (NAS) and public cloud.

About Wavelink
Wavelink, an Infinigate Group company, is a leading technology distributor in Australia and New Zealand (ANZ), specialising in channel services and business development with a strong focus on advanced cybersecurity, mobility, networking, and storage solutions. We empower our channel partners with the support and technical expertise they need to succeed while building strategic channels for our vendor partners.

Wavelink stands out in the ANZ distribution market due to our specialised expertise in vertical and operational technology, providing unparalleled depth to our technologies and services. Our deep understanding of customer needs lets us connect vendor technologies with the right partners and end customers. This is reinforced by our comprehensive services portfolio, designed to drive partner success at every opportunity.

For more information, visit www.wavelink.com.au.




BSA 42 | Artificial intelligence


But is political orientation associated with people’s views towards different AI technologies? As noted earlier, we suspect that the relationship between political orientation and people’s perceptions of AI will vary depending on the specific AI application being considered. For example, we might expect people with right-wing views to be more likely to support the use of AI for calculating eligibility for welfare payments, on the basis that automated rules may be more likely to be enforced. Those with left-wing views, in contrast, may be more concerned about the risk of inequitable decisions being made.

To understand these relationships, we examine whether political orientation, as measured by two Likert scales that are included as standard on the British Social Attitudes (BSA) survey and have therefore also been asked of all members of the NatCen Opinion Panel, is related to perceptions of the benefits of AI applications. One of these scales identifies whether people are on the left or on the right; the other, whether they are libertarian or authoritarian in outlook. (Further details on the derivation of these scales are available in the Technical Details.) For the purpose of these analyses and those appearing later in the report, we divide respondents, first, into the one-third most ‘left-wing’ and the one-third most ‘right-wing’ and, second, into the one-third most libertarian and the one-third most authoritarian, in each case based on their scores on the relevant scale.
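
As an illustrative sketch of that grouping step (not the actual BSA processing code), the snippet below splits synthetic scale scores into thirds with pandas; the column names and values are invented rather than drawn from the NatCen Opinion Panel data.

```python
# Sketch: dividing respondents into equally sized thirds on each attitude scale.
# The DataFrame and column names are illustrative, not the actual BSA data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "left_right_score": rng.normal(loc=2.5, scale=0.8, size=900),  # hypothetical scale scores
    "lib_auth_score": rng.normal(loc=3.0, scale=0.7, size=900),
})

# qcut assigns each respondent to one of three equally sized groups per scale.
df["left_right_tercile"] = pd.qcut(df["left_right_score"], 3,
                                   labels=["left", "centre", "right"])
df["lib_auth_tercile"] = pd.qcut(df["lib_auth_score"], 3,
                                 labels=["libertarian", "centre", "authoritarian"])

# The analysis then compares only the two outer groups on each dimension.
print(df["left_right_tercile"].value_counts())
```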

Those with right-wing views are more likely than those with left-wing views to think the benefits of AI outweigh the concerns. Table 1 shows that those with right-wing views have net benefit scores that are consistently higher than those with left-wing views in all cases, except with regard to driverless cars. This difference is particularly pronounced for the use of facial recognition for policing and the use of AI to determine welfare eligibility.

People with right-wing views perceive positively some uses of AI that people with left-wing views perceive negatively overall, namely determining loan repayment risk, robotic care assistants, and determining welfare eligibility. Looking at the benefit and concern scores separately suggests that these differences result from the fact that those with left-wing views report higher levels of concern across most technologies compared with people with right-wing views, while the two groups’ perceptions of benefit are more similar. For example, while 36% and 35% of people with left-wing and right-wing views respectively report mental health chatbots to be beneficial, 68% of people with left-wing views say they are concerned by this use of the technology, compared with 59% of people with right-wing views.

Table 1. Net benefit scores, by left-right views

AI use                            Left    Right    Difference
Cancer risk                        1.3      1.4          +0.1
Facial recognition in policing     0.8      1.5          +0.7
Large language models              0.3      0.4          +0.1
Loan repayment risk               -0.1      0.4          +0.5
Robotic care assistants           -0.1      0.0          +0.1
Welfare eligibility               -0.6      0.2          +0.8
Mental health chatbot             -0.7     -0.4          +0.3
Driverless cars                   -0.7     -0.7           0.0

Note: Positive scores indicate perceptions of benefit outweigh concerns while negative scores indicate concerns outweigh benefits.  Scores can range from -3 to +3. 
Unweighted bases can be found in Appendix Table A.1 of this chapter.
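
One reading consistent with the stated -3 to +3 range is that the net benefit score is a benefit rating minus a concern rating, each on a 0 to 3 scale; the exact coding used in the analysis may differ. The sketch below works through that assumed calculation on invented responses.

```python
# Hypothetical sketch of a net benefit score: benefit rating minus concern rating,
# each on a 0-3 scale, so the difference ranges from -3 to +3. The actual BSA coding
# may differ; the numbers here are illustrative only.
import pandas as pd

responses = pd.DataFrame({
    "benefit_rating": [3, 2, 1, 0, 2],   # 0 = no benefit at all, 3 = very beneficial
    "concern_rating": [1, 2, 3, 3, 0],   # 0 = not at all concerned, 3 = very concerned
})

responses["net_benefit"] = responses["benefit_rating"] - responses["concern_rating"]

# A group-level score like those in Table 1 would simply be the mean of this column.
print(responses["net_benefit"].mean())
```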

There is less of a consistent difference between the scores of those with libertarian views and those with an authoritarian outlook, with the difference not always operating in the same direction. That said, Table 2 shows that people with authoritarian views feel the benefits of AI outweigh their concerns in the case of five uses: facial recognition for policing, assessing risk of cancer, LLMs, assessing loan repayment risk, and assessing welfare eligibility. Their net benefit score is particularly high for the use of facial recognition in policing, especially when compared with those with libertarian views. These data align with previous research, which finds that the use of AI for facial recognition in policing is particularly likely to appeal to people with authoritarian views (Peng, 2023). Meanwhile, libertarians have more positive net benefit scores than authoritarians for the majority of private sector AI applications, such as robotic care assistants and driverless cars, perhaps reflecting their view of AI as potentially increasing human choice by widening the range of options for undertaking various tasks.

The difference in attitudes between these two groups is also notable in relation to the use of AI to assess welfare eligibility, where those with libertarian views, unlike those with an authoritarian outlook, feel the concerns around this technology outweigh the potential benefits. This view may feed on their concern about the possibility of more heavy-handed state intervention when AI is used in the public sector.

Table 2. Net benefit scores, by libertarian-authoritarian views

AI use                            Libertarian    Authoritarian    Difference
Cancer risk                               1.4              1.4          +0.0
Facial recognition in policing            0.7              1.6          +0.9
Large language models                     0.2              0.4          +0.2
Loan repayment risk                       0.0              0.3          +0.3
Robotic care assistants                   0.1             -0.2          -0.3
Welfare eligibility                      -0.5              0.2          +0.7
Mental health chatbot                    -0.6             -0.5          +0.1
Driverless cars                          -0.4             -0.9          -0.5

Note: Positive scores indicate perceptions of benefit outweigh concerns while negative scores indicate concerns outweigh benefits. Scores can range from -3 to +3. 
Unweighted bases can be found in Appendix Table A.2 of this chapter.

To better understand the relationship between political orientation and net benefit scores (whether benefits outweigh concerns, or vice versa), we conducted a multivariate analysis (linear regression) to assess the extent to which net benefit scores are associated with political orientation once a number of demographic characteristics have been controlled for, namely ethnicity, digital skills, income, age and education. Previous analysis of these data highlighted that ethnicity, digital skills and income are associated with overall attitudes to AI (Modhvadia et al., 2025). We also anticipated that age and education may be linked. Studies suggest older people reject new technologies, feeling they are not useful in their personal lives (Zhang, 2023), while we expect that those with higher levels of education may have higher levels of digital literacy and openness to new technologies.
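
As a sketch of the kind of model described here, the snippet below fits an ordinary least squares regression of a net benefit score on political orientation plus the listed demographic controls, using synthetic data; the variable coding is assumed and may not match the specification reported in Appendix Table A.3.

```python
# Sketch of the regression described above: a net benefit score for one AI use
# regressed on political orientation plus demographic controls. All data below are
# synthetic; the published model may specify the variables differently.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 600
df = pd.DataFrame({
    "net_benefit": rng.integers(-3, 4, size=n),       # -3..+3 score for one AI use
    "right_wing": rng.integers(0, 2, size=n),          # 1 = in the most right-wing third
    "authoritarian": rng.integers(0, 2, size=n),       # 1 = in the most authoritarian third
    "ethnicity": rng.choice(["white", "black", "asian", "other"], n),
    "digital_skills": rng.integers(0, 6, size=n),
    "income_band": rng.choice(["low", "mid", "high"], n),
    "age": rng.integers(18, 90, size=n),
    "education": rng.choice(["degree", "no_degree"], n),
})

# Political orientation stays in the model alongside the demographic controls,
# mirroring the test of whether its association survives those controls.
model = smf.ols(
    "net_benefit ~ right_wing + authoritarian + C(ethnicity)"
    " + digital_skills + C(income_band) + age + C(education)",
    data=df,
).fit()
print(model.summary())  # coefficients show each variable's association with the score
```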

The results of our analysis are presented in the appendix (Table A.3). They show that for the majority of uses of AI, political orientation remains significantly associated with perceptions of net benefit, even once the relationships between attitudes to AI and these demographic variables have been controlled for. The net benefit scores of people with more right-wing views are significantly higher for nearly all of our AI applications. The only exception is driverless cars, the application that is most negatively perceived by all of our respondents. The strength of these relationships is, however, relatively low. Similarly, people with authoritarian views have significantly higher net benefit scores for facial recognition in policing, the use of AI in determining welfare benefits, the use of AI in determining loan repayment risk, LLMs and mental health chatbots, even once the relationships with other demographic variables have been controlled for. The only instance where people with authoritarian views have significantly lower net benefit scores, compared with those holding libertarian views, is in relation to driverless cars. However, again, the strength of these relationships is variable. It is strongest for facial recognition in policing and weakest for mental health chatbots. These findings suggest that political orientation is associated with attitudes to AI, even when other demographic differences have been controlled for, but that the magnitude of this association depends on the use to which AI is applied.

In terms of our control variables, ethnicity, digital skills, income and age were found to be associated with how people view each use of AI. Black and Asian people are less likely to perceive facial recognition in policing as beneficial, while they are more likely to see benefits for LLMs and mental health chatbots. Those with higher digital skills are generally more positive about most of the applications of AI, with this association being strongest in the case of robotic care assistants. Having a higher income is related to more positive perceptions of all of the AI uses, while older people (aged 55 years and over) are more positive about the use of AI in health diagnostics (detecting cancer risk) and justice (facial recognition in policing) but are more negative about LLMs and robotic care assistants.

Common benefits and concerns

The net benefit scores discussed so far provide a summary measure of the balance of benefit and concern for eight different applications of AI. To understand the reasons for these assessments, in each case we asked respondents to identify from a list the specific benefits and concerns they associate with each AI technology. For example, for facial recognition in policing, we provided the following list of possible benefits:

Make it faster and easier to identify wanted criminals and missing persons
Be more accurate than the police at identifying wanted criminals and missing persons
Be less likely than the police to discriminate against some groups of people in society when identifying criminal suspects
Save money usually spent on human resources
Make personal information more safe and secure

Our list of possible concerns that people might have about the same AI application were as follows:

Cause delays in identifying wanted criminals and missing persons
Be less accurate than the police at identifying wanted criminals and missing persons
Be more likely than the police to discriminate against some groups of people in society
Lead to innocent people being wrongly accused if it makes a mistake
Make it difficult to determine who is responsible if a mistake is made 
Gather personal information which could be shared with third parties
Make personal information less safe and secure
Lead to job cuts (for example, for trained police officers and staff)
Cause the police to rely too heavily on it rather than their professional judgements

While each list was tailored to the specific technology being asked about, the benefits and concerns included in each list had common themes (such as efficiency and bias). Respondents were able to select as many options from each list as they felt applied, as well as “something else”, “none of the above” and “don’t know”.

Across all of our respondents, the most commonly selected benefit for each use of AI related to economic efficiency and/or speed of operation. Meanwhile, the most commonly selected concerns were about over-reliance and inaccuracy. For example, in the case of facial recognition technology in policing, 89% feel that faster identification of wanted criminals and missing persons is a potential benefit, while 57% think that over-reliance on this technology is a concern. (Further details of these results are available in Modhvadia et al. (2025).)
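
Percentages like the 89% and 57% quoted here come from tallying multi-select answers against the number of respondents rather than the number of selections. A minimal sketch of that calculation, on a few invented answer lists, is shown below.

```python
# Sketch: turning multi-select survey answers into the percentages quoted above.
# Each respondent's selections are stored as a list of option labels; the data are invented.
import pandas as pd

selections = pd.Series([
    ["faster_identification", "save_money"],
    ["faster_identification"],
    ["faster_identification", "more_accurate"],
    ["none_of_the_above"],
])

# Explode to one row per selected option, then compute the share of respondents
# (not the share of selections) who ticked each one.
n_respondents = len(selections)
counts = selections.explode().value_counts()
percentages = (counts / n_respondents * 100).round(1)
print(percentages)
```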

But how does political orientation shape these views? We found that people across the political spectrum tend to highlight similar types of benefits and concerns, but that the degree to which they do so varies. The next sections focus on four specific themes: speed (i.e. completing tasks faster than humans), inaccuracy, job displacement, and discrimination. These themes reflect broader concerns about efficiency and fairness, areas where political orientation is especially likely to influence attitudes, as discussed in the Introduction. As before, to analyse these differences, we divided people into three equally sized groups along the two ideological dimensions and compared the results for the two groups at each end.
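As an illustration of this grouping step, the sketch below splits respondents into tertiles on one ideological scale and compares the proportion selecting a given benefit in the two end groups, which is how the "Difference" columns in the tables that follow can be read. It is a minimal sketch with hypothetical column names and synthetic data, not the survey's own processing pipeline (which, for example, applies survey weights).

```python
# Minimal sketch: tertile split on an ideological scale, then a comparison of the
# percentage selecting a benefit in the two end groups. Hypothetical data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 3000
df = pd.DataFrame({
    "left_right_score": rng.normal(0, 1, n),          # hypothetical scale score
    "selected_speed_benefit": rng.choice([0, 1], n),  # 1 = selected this benefit
})

# Three equally sized groups along the scale; keep only the two ends
df["lr_group"] = pd.qcut(df["left_right_score"], q=3,
                         labels=["Left", "Centre", "Right"])
ends = df[df["lr_group"].isin(["Left", "Right"])]

pct = (ends.groupby("lr_group", observed=True)["selected_speed_benefit"]
           .mean() * 100).round(0)
print(pct)                                    # % selecting the benefit, by group
print("Difference:", pct["Right"] - pct["Left"])
```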

Speed and efficiency

We found some support for the theory, set out previously, that those with right-wing views are more likely to value the economic efficiency that AI might deliver. Improving the speed and efficiency of services was more commonly selected as an advantage by those with right-wing views than by those with left-wing views for two uses of AI: determining eligibility for welfare benefits such as Universal Credit, and assessing an individual's loan repayment risk. As shown in Table 3, 55% of those with right-wing views select this benefit for determining welfare eligibility, compared with 49% of those with left-wing views, and 61% select the same benefit for loan repayment risk, compared with 56% of those with left-wing views. However, these differences are small and only apparent in uses of AI that relate to the distribution of financial resources.

Table 3. Perceptions about AI-enabled speed and efficiency, by political orientation (left vs right)
  Left Right Difference
AI use      
% seeing benefits related to speed and efficiency for….      
Cancer risk 85 85 +0
Facial recognition in policing 87 90 +3
Large language models 57 56 -1
Loan repayment risk 56 61 +5
Robotic care assistants 50 48 -2
Welfare eligibility 49 55 +6
Mental health chatbot 52 50 -2
Driverless cars 35 30 -5
Unweighted base 1079 1078  

Differences between those with authoritarian views and those with a libertarian outlook in their beliefs about the potential for AI to improve speed and efficiency are more prominent. As shown in Table 4, those with libertarian views tend to be more likely to see speed and efficiency as key benefits of most AI applications, perhaps because they see AI innovations as opening up human choice and market competition. For example, 62% of those with libertarian views select this benefit for large language models, compared with only 50% of those with authoritarian views. The only exception to this pattern is facial recognition in policing, where 91% of those with authoritarian views see efficiency as a key benefit, compared with 86% of those with libertarian views. This may be because, compared with those holding libertarian views, those with an authoritarian outlook are more positive about the use of facial recognition in policing irrespective of how it is undertaken. In contrast, the low figure of 25% of those with authoritarian views seeing efficiency gains from driverless cars (compared with 40% of those with libertarian views) may reflect a sense of the legal issues and potential chaos that this AI innovation, as yet untested in a UK setting, could bring to Britain's roads.

Table 4. Perceptions of AI-enabled speed and efficiency, by political orientation (libertarian vs authoritarian) 
  Libertarian Authoritarian Difference
AI use      
% seeing benefits related to speed and efficiency for….      
Cancer risk 86 82 -4
Facial recognition in policing 86 91 +5
Large language models 62 50 -12
Loan repayment risk 60 55 -5
Robotic care assistants 54 42 -12
Welfare eligibility 55 52 -3
Mental health chatbot 58 46 -12
Driverless cars 40 25 -15
Unweighted base 1082 1081  

Inaccuracy and inequalities

As shown in Table 5, those with left-wing views are generally more worried than those with right-wing views about inaccuracy and inequity, although this difference is more pronounced for some uses of AI, compared with others. Most markedly, 63% of those with left-wing views are concerned that facial recognition in policing could lead to false accusations, whereas only 45% of those with right-wing views express this concern. People with left-wing views are also markedly more worried about inaccuracy in terms of welfare eligibility and loan repayment. 

Table 5. Concern about inaccuracy in AI technologies, by political orientation (left v right) 
  Left Right Difference
AI use      
% with concerns related to inaccuracy for….      
Cancer risk 25 23 -2
Facial recognition in policing 63 45 -18
Loan repayment risk 30 22 -8
Robotic care assistants 44 41 -3
Welfare eligibility 43 28 -15
Mental health chatbot 51 46 -5
Driverless cars 46 40 -6
Unweighted base 1079 1078  

Note: Inaccuracy concerns were not in the selection list for LLMs

Similarly, Table 6 shows that 23% of those with left-wing views are worried about discriminatory outcomes in the use of AI to determine welfare eligibility, compared with just 8% of those with right-wing views. Even for the application of AI in cancer risk assessment, a use that is consistently positively viewed across those with different political orientations, 27% of those with left-wing views are concerned about the technology being less effective for some groups of society, leading to discrimination in healthcare. The comparable figure is 17% for those with right-wing views. 

Table 6. Concern about AI-enabled discriminatory outcomes, by political orientation (left v right)
  Left Right Difference
AI use      
% with concerns related to discriminatory outcomes for….      
Cancer risk 27 17 -10
Facial recognition in policing 24 9 -15
Loan repayment risk 24 13 -11
Robotic care assistants 27 23 -4
Welfare eligibility 23 8 -15
Mental health chatbot 16 8 -8
Driverless cars 28 23 -5
Unweighted base 1079 1078  

Note: Discriminatory concerns were not in the selection list for LLMs

Research suggests that people who hold more authoritarian views are less likely to be concerned about discrimination or fairness (Curtice, 2024), leading us to anticipate that they are less likely to be concerned about the impact that AI technologies might have on minority groups. Our data support this theory. As shown in Table 7, for most applications of AI, those with libertarian views appear to be more concerned than those with an authoritarian outlook about discrimination. For example, 25% of those with libertarian views express concern that facial recognition in policing may discriminate against certain groups, compared with 9% of those holding authoritarian views. A similar pattern can be found in attitudes towards the use of AI for detecting cancer risk: 29% of those holding libertarian views worry about it leading to health inequalities, compared with 15% of those with authoritarian views.

Table 7. Concern about AI enabled discriminatory outcomes, by political orientation (libertarian v authoritarian)
  Libertarian Authoritarian Difference
AI use      
% with concerns related to discriminatory outcomes for….      
Cancer risk 29 15 -14
Facial recognition in policing 25 9 -16
Loan repayment risk 20 14 -6
Robotic care assistants 26 26 0
Welfare eligibility 18 11 -7
Mental health chatbot 15 9 -6
Driverless cars 26 26 0
Unweighted base 1082 1081  

Note: Discriminatory concerns were not in the selection list for LLMs

In contrast, as shown in Table 8, worries about inaccuracy appear to depend much more on the specific application of AI being considered than on people's libertarian-authoritarian orientation. That said, 61% of those holding libertarian views are worried about false accusations arising from facial recognition, compared with only 47% of those with authoritarian views. Meanwhile, 39% of those holding libertarian views are worried that the use of AI for determining welfare eligibility may be less accurate than the use of professionals, compared with 31% of those holding authoritarian views. However, the inverse pattern is found in the case of robotic care assistants.

Table 8. Concern about inaccuracy in AI technologies, by political orientation (libertarian v authoritarian)
  Libertarian Authoritarian Difference
AI use      
% with concerns related to inaccuracy for….      
Cancer risk 20 27 +7
Facial recognition in policing 61 47 -14
Loan repayment risk 24 26 +2
Robotic care assistants 39 47 +8
Welfare eligibility 39 31 -8
Mental health chatbot 51 46 -5
Driverless cars 39 45 +6
Unweighted base 1082 1081  

Note: Inaccuracy concerns were not in the selection list for LLMs

Job displacement

For all the AI applications, those with left-wing views are more concerned than those with right-wing views about potential job losses. This is consistent with existing research, which posits that left-wing individuals are more likely to express concerns about job displacement and increasing social inequality (Curtice, 2024). Table 9 shows that this concern is particularly high for both robotic care assistants (where 62% of those on the left are worried about job loss, compared with 44% of those on the right) and driverless cars (where 60% are worried about job loss, compared with 47%).

Table 9. Concern about job loss, by political orientation (left vs right)
  Left Right Difference
AI use      
% with concerns related to job loss for….      
Facial recognition in policing 46 37 -9
Large language models 48 37 -11
Loan repayment risk 46 37 -9
Robotic care assistants 62 44 -18
Welfare eligibility 50 38 -12
Mental health chatbot 47 32 -15
Driverless cars 60 47 -13
Unweighted base 1079 1078  

Note: Job loss concern not in selection list for cancer risk detection

Again, as shown in Table 10, the extent to which libertarians differ from authoritarians in their level of concern about job losses depends on the use to which AI is being put. More people with authoritarian views are worried in the case of facial recognition in policing (44%, compared with 38% of those with libertarian views) while more people with libertarian views are worried in relation to general-purpose LLMs (46%, compared with 39% of people with authoritarian views). For other applications of AI, levels of concern about job losses are largely similar, irrespective of whether someone holds authoritarian or libertarian views.

Table 10. Concern about job loss, by political orientation (libertarian vs authoritarian)
  Libertarian Authoritarian Difference
AI use      
% with concerns related to job loss for….      
Facial recognition in policing 38 44 +6
Large language models 46 39 -7
Loan repayment risk 41 46 +5
Robotic care assistants 52 54 +2
Welfare eligibility 44 46 +2
Mental health chatbot 42 41 -1
Driverless cars 53 55 +2
Unweighted base 1082 1081  

Note: Job loss concern not in selection list for cancer risk detection

Taken together, these findings show that political orientation is linked to particular beliefs about the key advantages and disadvantages of AI. In general, people who are left-wing are more concerned than those with right-wing views about inaccuracy, discrimination and job loss, perhaps reflecting a broader concern that AI technologies will exacerbate inequalities in society. People with libertarian views, more so than people with authoritarian views, appear to be concerned about discrimination for most applications of AI, while at the same time showing more optimism about the potential speed and efficiency benefits that might come with these tools.

However, these findings also indicate that the relationship between people's attitudes towards AI and their political orientation depends on the particular use to which the technology is put. For instance, the greater popularity of the use of facial recognition in policing among authoritarians translates into greater enthusiasm for the various potential advantages that AI is thought to bring to this task. One possible explanation for the differing attitudes of people with libertarian and authoritarian views towards the efficiency benefits of driverless cars is that libertarians' more positive attitudes towards the technology in general, as an AI innovation that opens up new possibilities for human choice (in this case, of transport options), lead them to perceive driverless cars as more efficient, while authoritarians' more negative views lead them to see driverless cars as less likely to bring efficiency gains. Overall, individual buy-in for specific applications of AI is likely to shape assessments of the potential benefits and risks of that application.

Political orientation and AI regulation

We have established, then, that political orientation shapes attitudes towards AI. These patterns, along with the common concerns and benefits that people have about AI, offer important clues about how different groups might want these AI technologies to be governed. Previous research has found that people who are left-wing are generally more likely to support greater state intervention in the economy and are more likely to support stricter regulation of AI technologies (König et al., 2023). In contrast, right-wing individuals may oppose what they see as regulatory overreach, prioritising market freedom and economic growth achieved through AI-driven innovation. In this final section, we assess how political views influence attitudes towards AI regulation. We measured preferences for regulation by asking respondents what would make them more comfortable with AI technologies being used, providing them with the following options:

Clear explanations of how AI systems work and make decisions in general
Specific, clear information on how AI systems made a decision about you
More human involvement and control in AI decisions 
Clear procedures in place for appealing to a human specialist against a decision made by AI 
Assurance that the AI has been deemed acceptable by a government regulator 
Laws and regulations that prohibit certain uses of technologies, and guide the use of all AI technologies 
People’s personal information is kept safe and secure 
The AI technology is regularly evaluated to ensure it does not discriminate against particular groups of people

Respondents were able to select as many options as they liked from the list of measures that could increase their comfort with AI technologies. Overall, a substantial majority of the public (72%) think that laws and regulations would make them feel more comfortable with AI technologies, up from 62% in 2023 (Modhvadia et al., 2025). This increased demand for regulation is worthy of note, especially given that the UK is yet to introduce a comprehensive legal framework for AI. For this reason, in Table 11, we focus on how political orientation relates to people selecting either "laws and regulations" or "assurance that the AI has been deemed acceptable by a government regulator" as measures that would increase their comfort with AI being used.

Support for regulation is consistently high across both the left-right and authoritarian-libertarian dimensions. Table 11 shows that over half of both those holding right-wing and left-wing views feel assurance by a government regulator would make them more comfortable with AI. Even higher proportions of people feel laws and regulations that prohibit certain uses would make them more comfortable with AI: this is the case for 70% of those with right-wing views and 76% of those with left-wing views. Meanwhile, Table 12 shows that tighter regulation is also popular among both libertarians and authoritarians.

Table 11. Preference for government regulation, by left-wing or right-wing views
  Left Right
What would make you more comfortable with AI technologies being used? % %
Assurance that the AI has been deemed acceptable by a government regulator 58 55
Laws and regulations that prohibit certain uses of technologies, and guide the use of all AI technologies 76 70
Unweighted base 1079 1078

Note: Respondents who did not answer our questions about political orientation, or who answered "don't know", are not included in this table

Table 12. Preference for government regulation, by libertarian or authoritarian views
  Libertarian Authoritarian
What would make you more comfortable with AI technologies being used? % %
Assurance that the AI has been deemed acceptable by a government regulator 58 54
Laws and regulations that prohibit certain uses of technologies, and guide the use of all AI technologies 77 67
Unweighted base 1079 1078

Still, people on the right and authoritarians are a little less likely than those on the left and libertarians to say that government assurance and regulation would make them feel more comfortable about AI. To examine whether these small differences remain significant once their associations with other characteristics are controlled for, we conducted a multivariate analysis (logistic regression) with political orientation and key demographic characteristics (ethnicity, digital skills, income, age and education) included as predictors of attitudes to AI regulation. These characteristics were chosen either because we have previously identified them as related to attitudes to AI (ethnicity, income and digital skills were associated with attitudes to AI in a previous study, Modhvadia et al., 2025), or because we anticipate they may relate to engagement and preferences around new technologies (in the case of age and education). The results of this model are presented in the appendix (Table A.4).
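A minimal sketch of this kind of logistic regression is given below. It is not the authors' code: the outcome is a hypothetical 0/1 indicator for selecting "laws and regulations", the predictor names are invented, and the data are synthetic, but the model specification mirrors the description above.

```python
# Illustrative sketch of the logistic regression summarised in Table A.4:
# a binary "would regulation increase my comfort with AI?" outcome modelled on
# political orientation plus demographic controls. Hypothetical columns and data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "left_right": rng.normal(0, 1, n),
    "lib_auth": rng.normal(0, 1, n),
    "digital_skills": rng.choice([0, 1], n, p=[0.2, 0.8]),
    "high_income": rng.choice([0, 1], n),
    "age_55_plus": rng.choice([0, 1], n),
    "degree": rng.choice([0, 1], n),
})
# Hypothetical binary outcome: selected "laws and regulations" as a comfort measure
logit_p = (-0.1 * df["left_right"] - 0.2 * df["lib_auth"]
           + 0.5 * df["digital_skills"] + 0.5)
df["wants_regulation"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

model = smf.logit(
    "wants_regulation ~ left_right + lib_auth + digital_skills"
    " + high_income + age_55_plus + degree",
    data=df,
).fit(disp=False)
# In this synthetic setup the left_right and lib_auth coefficients come out
# negative, the direction of association reported in Table A.4.
print(model.summary())
```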

In three out of four instances, this analysis indicates that the differences, though small, are statistically significant. Those on the right are less likely than those on the left to say that either government assurance or regulation would make them feel more comfortable about AI, while authoritarians are less likely than libertarians to say the same of regulation. Other characteristics, and in particular having digital skills and a higher household income, appear to more strongly relate to preferences for regulation than political orientation.

Conclusion 

In this report, we have investigated the relationship between political orientation and public perceptions of AI technologies and their regulation. As we expected, the findings reveal a significant correlation between political orientation and the perceived benefits of, and concerns about, a wide range of AI applications. Those with right-wing views are more positive than those with left-wing views about nearly all the uses of AI about which respondents were asked, a pattern which held true even when people's demographic characteristics were controlled for. The difference in attitudes between people with left-wing and right-wing views is most pronounced in the case of facial recognition for policing and the use of AI for assessing eligibility for welfare. The greater concern among those with more left-wing views may be occasioned by worries about how these technologies might have a negative impact on equity and fairness, as we found that those with left-wing views are more likely to report worries about inaccuracy, discrimination and job losses.

Where people stand on the authoritarian-libertarian dimension is also associated with their attitudes to the uses of AI. Those holding authoritarian views are more positive than those with libertarian views about several applications of AI. Specifically, those with authoritarian views are more likely to perceive facial recognition technologies in policing as beneficial, suggesting they may be more likely to perceive AI surveillance technologies more broadly as beneficial too. This is likely to reflect their preference for security and social order, where AI is viewed as an instrument to enhance these objectives. Conversely, people with libertarian views express heightened concerns regarding the potential for discriminatory outcomes from facial recognition technology, an outlook that aligns with their emphasis on individual autonomy and rights. They are also more likely than people with authoritarian views to have concerns about possible discrimination by other AI applications, such as in their use to predict cancer risk, provide mental health chatbots, and assess both welfare eligibility and the likelihood that someone would repay a loan.

Three of these last four applications (the exception is loan repayment) constitute the examples of the use of AI by the public sector covered by our survey. Our findings suggest that attitudes towards public sector applications, which impact people’s lives and liberty, may be more divisive between people of different political orientations than are applications of AI provided by private sector companies for consumers. Certainly, facial recognition in policing and the use of AI to determine welfare eligibility appear to be two particularly politically salient applications of AI, where there is much debate over fairness, accuracy and equity. In contrast, private sector consumer applications of AI, such as driverless cars (albeit universally regarded negatively) and LLMs (viewed positively), seem to be viewed in a similar fashion irrespective of people’s political orientation.

However, contrary to our expectations, we did not find a strong relationship between political orientation and preference for the regulation of AI. Irrespective of political orientation, we found that seven in 10 people feel laws and regulations would make them more comfortable with AI. And although support for regulation is somewhat lower among those who hold right-wing or authoritarian views, the difference is marginal. Instead, socio-economic factors such as income and digital skills appear to serve as more robust predictors of attitudes to AI regulation.  

These findings are important for three key reasons. First, as the UK government seeks to increase the use of AI, describing AI as "a golden opportunity…an opportunity we are determined to seize" (UK Government, 2025), it will need to understand people's hopes and fears. Our findings offer an understanding of the perceptions of the technology held by different groups, as well as their likelihood of adopting AI applications in the future. They provide policymakers with insight into how they can encourage public acceptance of AI, and into which benefits they should highlight for their messages to resonate with different constituencies. Our results show that people carry with them values and expectations, such as worries about discrimination, which differ across political ideologies.

Second, these findings reiterate the value of studying attitudes towards specific uses of AI technologies. Our data suggest that some applications of AI may be politically divisive – such as facial recognition in policing and the use of AI to determine welfare eligibility – while other uses of AI, such as cancer risk assessment, are met with similar levels of optimism or concern by those with different political orientations. Future research would benefit from working with the public to understand how attitudes towards specific uses of AI affect the considerations that need to be taken into account when deploying AI technologies.

Third, as the government considers options for regulating AI, it will be important to understand where people’s concerns lie, and how opposition to regulation might arise. Our findings show that the public want regulation around AI, and this desire appears to be largely independent of political orientation. As a minimum, it appears that there is public support for the government to deliver on its commitment in the AI Opportunities Action Plan (2025) to “funding regulators to scale up their AI capabilities”.

There are signs that, in the future, considerations like these will become more important in the UK political landscape. In both the US and Europe, AI has become politically salient. In the US, any moves towards AI safety or AI regulation have become controversial and divide explicitly along political fault lines. In the European Union (EU), AI regulation has been implemented more comprehensively than anywhere else in the world, setting policymakers in direct confrontation with US firms and, potentially, the US administration. The UK has tried to follow a delicate path between these two extremes, but it seems likely that issues such as digital services taxes, the Online Safety Act and technology regulation more generally will become politically salient in the future. Meanwhile, the public is increasingly using commercial LLMs, which show considerable potential to reshape specific policy areas and to bring US influences to bear upon them. An understanding of the political make-up of the public with respect to the use of AI, AI adoption and AI regulation will therefore become increasingly helpful to politicians as they attempt to navigate this important and politically contested field.

 

Acknowledgements 

The research reported here was undertaken as part of Public Voices in AI, a satellite project funded by Responsible AI UK and EPSRC (Grant number: EP/Y009800/1). Public Voices in AI was a collaboration between: the ESRC Digital Good Network @ the University of Sheffield, Elgon Social Research Limited, Ada Lovelace Institute, The Alan Turing Institute and University College London.

The authors would like to acknowledge Octavia Field Reid, Associate Director, Ada Lovelace Institute, for her work reviewing a draft of this report. 

 

References

Ada Lovelace Institute. (October 2023). What do the public think about AI? https://www.adalovelaceinstitute.org/evidence-review/what-do-the-public-think-about-ai/ 

Araujo, T., Brosius, A., Goldberg, A. C., Möller, J., & Vreese, C. de. (2023). Humans vs. AI: The Role of Trust, Political Attitudes, and Individual Characteristics on Perceptions About Automated Decision Making Across Europe. International Journal of Communication, 17(0) 6222-6249.

Buolamwini, J., & Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Proceedings of Machine Learning Research 18(01), 1-15.

Claudy, M. C., Parkinson, M., & Aquino, K. (2024). Why should innovators care about morality? Political ideology, moral foundations, and the acceptance of technological innovations. Technological Forecasting and Social Change, 203, 1-17. https://doi.org/10.1016/j.techfore.2024.123384 

Council of the European Union. (2023). ChatGPT in the Public Sector: Overhyped or Overlooked?

Curtice, J. (2024). One-dimensional or two-dimensional? The changing dividing lines of Britain's electoral politics. British Social Attitudes: the 41st report. London: The National Centre for Social Research. https://natcen.ac.uk/publications/bsa-41-one-dimensional-or-two-dimensional

Fast, E., & Horvitz, E. (2017). Long-Term Trends in the Public Perception of Artificial Intelligence. Proceedings of the AAAI Conference on Artificial Intelligence, 31(1). https://doi.org/10.1609/aaai.v31i1.10635 

Gur, T., Hameiri, B., & Maaravi, Y. (2024). Political ideology shapes support for the use of AI in policy-making. Frontiers in Artificial Intelligence, 7, 1-9. https://doi.org/10.3389/frai.2024.1447171 

Hemesath, S., & Tepe, M. (2024). Multidimensional preference for technology risk regulation: The role of political beliefs, technology attitudes, and national innovation cultures. Regulation and Governance, 18, 1264-1283. https://doi.org/10.1111/rego.12578

König, P., Wurster, S., & Siewert, M. (2023). Sustainability challenges of artificial intelligence and citizens' regulatory preferences. Government Information Quarterly, 40, 1-11. https://doi.org/10.1016/j.giq.2023.101863

Leslie, D. (2020). Understanding bias in facial recognition technologies: an explainer. The Alan Turing Institute. https://doi.org/10.5281/zenodo.4050457

Mack, E. A., Miller, S. R., Chang, C. H., Van Fossen, J. A., Cotten, S. R., Savolainen, P. T., & Mann, J. (2021). The politics of new driving technologies: Political ideology and autonomous vehicle adoption. Telematics and Informatics, 61, 101604 https://doi.org/10.1016/j.tele.2021.101604

Modhvadia, R., Sippy, T., Field Reid, O., and Margetts, H. (2025). How do people feel about AI? Ada Lovelace Institute and The Alan Turing Institute.  https://attitudestoai.uk/

Neff, G. (2024). Can Democracy Survive AI? Sociologica, 18(3), 137-146. https://doi.org/10.6092/issn.1971-8853/21108 

O’Shaughnessy, M. R., Schiff, D. S., Varshney, L. R., Rozell, C. J., & Davenport, M. A. (2023). What governs attitudes toward artificial intelligence adoption and governance? Science and Public Policy, 50(2), 161–176. https://doi.org/10.1093/scipol/scac056

Prabhakaran, V., Mitchell, M., Gebru, T., & Gabriel, I. (2022). A Human Rights-Based Approach to Responsible AI (No. arXiv:2210.02667). arXiv. https://doi.org/10.48550/arXiv.2210.02667 

UK Government. (January 2025). AI Opportunities Action Plan. GOV.UK. https://www.gov.uk/government/publications/ai-opportunities-action-plan/ai-opportunities-action-plan 

UK Government. (March 2025). PM remarks on the fundamental reform of the British State. GOV.UK. https://www.gov.uk/government/speeches/pm-remarks-on-the-fundamental-reform-of-the-british-state-13-march-2025 

Wang, S. (2023). Factors related to user perceptions of artificial intelligence (AI)-based content moderation on social media. Computers in Human Behavior, 149, 107971. https://doi.org/10.1016/j.chb.2023.107971 

Wen, C.-H. R., & Chen, Y.-N. K. (2024). Understanding public perceptions of revolutionary technology: The role of political ideology, knowledge, and news consumption. Journal of Science Communication, 23(5), 1-18. https://doi.org/10.22323/2.23050207 

Yang, S., Krause, N. M., Bao, L., Calice, M. N., Newman, T. P., Scheufele, D. A., Xenos, M. A., & Brossard, D. (2023). In AI We Trust: The Interplay of Media Use, Political Ideology, and Trust in Shaping Emerging AI Attitudes. Journalism & Mass Communication Quarterly https://doi.org/10.1177/10776990231190868 

Yi, A., Goenka, S., & Pandelaere, M. (2024). Partisan Media Sentiment Toward Artificial Intelligence. Social Psychological and Personality Science, 15(6), 682–690. https://doi.org/10.1177/19485506231196817 
    
Zhang, M. (2023). Older people’s attitudes towards emerging technologies: A systematic literature review. Public Understanding of Science, 32(8), 948-968. https://doi.org/10.1177/09636625231171677 

 

Appendix

Table A.1. Net benefit scores across left-right spectrum scale: unweighted bases
  Left Right
AI use (N) (N)
Cancer risk 987 980
Facial recognition in policing 1,013 1,029
Large language models 846 814
Loan repayment risk 911 932
Robotic care assistants 908 894
Welfare eligibility 896 875
Mental health chatbot 851 807
Driverless cars 991 970
Table A.2. Net benefit scores across libertarian-authoritarian scale: unweighted bases
  Libertarian Authoritarian
AI use (N) (N)
Cancer risk 2,006 981
Facial recognition in policing 1,029 1,034
Large language models 909 779
Loan repayment risk 926 915
Robotic care assistants 918 896
Welfare eligibility 897 884
Mental health chatbot 873 823
Driverless cars 987 973
Table A.3 Linear regression of respondents’ net benefit scores
  Facial recognition for policing Welfare assessments Cancer diagnosis Loan assessments 
Left-right scale  0.18*** 0.32*** 0.08* 0.23***
  (0.03) (0.04) (0.03) (0.03)
Libertarian-authoritarian scale 0.52** 0.40*** -0.05 0.21***
  (0.03) (0.04) (0.03) (0.04)
Ethnicity (Neither Black nor Asian)         
Asian or Asian British  -0.39** 0.16 -0.22* 0.05
  (0.09) (0.12) (0.10) (0.11)
Black or Black British -0.36* -0.20 -0.16 -0.03
  (0.16) (0.21) (0.17) (0.19)
Whether the respondent has basic digital skills (no digital skills)        
Respondent has basic digital skills 0.31*** 0.06 0.03*** 0.28***
  (0.06) (0.08) (0.07) (0.07)
Monthly equivalised household income (Less than £1,500)        
Monthly equivalised household income is more than £1,500 0.24*** 0.35*** 0.27*** 0.16**
  (0.05) (0.07) (0.05) (0.06)
Age (aged 18-34)        
Aged 35-54 0.02 -0.19* -0.07 0.09
  (0.06) (0.08) (0.07) (0.07)
Aged 55+ 0.16** -0.13 0.18** 0.12
  (0.06) (0.08) (0.07) (0.07)
Education (does not have a degree)        
Has a degree -0.11* 0.12 0.07 0.05
  (0.05) (0.07) (0.05) (0.06)
Adjusted R squared 0.16 0.09 0.04 0.05
Unweighted base: 2,839 2,452 2,716 2,554
  Large language models Mental health chatbots Robotic care assistants Driverless cars
Left-right scale  0.11** 0.09* 0.09* 0.06
  (0.04) (0.04) (0.04) (0.04)
Libertarian-authoritarian scale 0.15*** 0.10* -0.02 -0.17***
  (0.04) (0.04) (0.04) (0.04)
Ethnicity (Neither Black nor Asian)         
Asian or Asian British  0.28* 0.38** 0.51*** 0.25
  (0.11) (0.14) (0.13) (0.13)
Black or Black British 0.69*** 0.47* 0.18 0.09
  (0.19) (0.23) (0.21) (0.22)
Whether the respondent has basic digital skills (no digital skills)        
Respondent has basic digital skills 0.30*** 0.02 0.45*** 0.20*
  (0.08) (0.09) (0.09) (0.09)
Monthly equivalised household income (Less than £1,500)        
Monthly equivalised household income is more than £1,500 0.17** 0.15* 0.23** 0.26***
  (0.06) (0.08) (0.07) (0.07)
Age (aged 18-34)        
Aged 35-54 0.02 -0.21* -0.05 0.16
  (0.07) (0.08) (0.08) (0.09)
Aged 55+ -0.24** -0.16 -0.17* -0.16
  (0.07) (0.09) (0.08) (0.08)
Education (does not have a degree)        
Has a degree -0.02 -0.10 0.27*** 0.24***
  (0.06) (0.07) (0.07) (0.07)
Adjusted R squared 0.04 0.01 0.05 0.04
Unweighted base: 2,310 2,315 2,505 2,717

*=significant at 95% level 
**=significant at 99% level 
***=significant at 99.9% level

Table A.4. Logistic regression of respondents' preferences for regulation
  Assurance that the AI has been deemed acceptable by a government regulator Laws and regulation that prohibit certain uses of technologies, and guide the use of all AI technologies
Left-right scale  -0.10* -0.12*
  (0.05) (0.05)
Libertarian-authoritarian scale 0.01 -0.21***
  (0.05) (0.06)
Ethnicity (Neither Black nor Asian)     
Asian or Asian British  0.29 -0.19
  (0.15) (0.16)
Black or Black British -0.23 -0.02
  (0.26) (0.29)
Whether the respondent has basic digital skills (no digital skills)    
Respondent has basic digital skills 0.31** 0.54***
  (0.10) (0.10)
Monthly equivalised household income (Less than £1,500)    
Monthly equivalised household income is more than £1,500 0.50*** 0.52***
  (0.08) (0.09)
Age (aged 18-34)    
Aged 35-54 0.08 0.24*
  (0.10) (0.11)
Aged 55+ 0.30** 0.43***
  (0.10) (0.11)
Education (does not have a degree)    
Has a degree 0.29*** 0.22*
  (0.08) (0.09)
Unweighted base: 2,979 2,979

*=significant at 95% level 
**=significant at 99% level 
***=significant at 99.9% level

 

Publication details

Clery, E., Curtice, J. and Jessop, C. (eds.) (2025)
British Social Attitudes: The 42nd Report.   
London: National Centre for Social Research  

© National Centre for Social Research 2025

First published 2025

You may print out, download and save this publication for your non-commercial use. Otherwise, and apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act, 1988, this publication may be reproduced, stored or transmitted in any form, or by any means, only with the prior permission in writing of the publishers, or in the case of reprographic reproduction, in accordance with the terms of licences issued by the Copyright Licensing Agency. Enquiries concerning reproduction outside those terms should be sent to the National Centre for Social Research.

National Centre for Social Research 
35 Northampton Square 
London  
EC1V 0AX  
info@natcen.ac.uk 
natcen.ac.uk


