
OpenAI has hired Mike Liberatore, the former chief financial officer at Elon Musk’s AI company xAI, CNBC reported on September 16.
Liberatore’s LinkedIn profile lists his current role as business finance officer at OpenAI. His tenure at xAI lasted just four months; before that, he was vice president of finance and corporate development at Airbnb.
The report added that Liberatore will report to OpenAI’s current CFO, Sarah Friar, and will work with co-founder Greg Brockman’s team, which manages the contracts and capital behind the company’s compute strategy.
According to The Wall Street Journal’s report, Liberatore was involved with xAI’s funding efforts, including a $5 billion debt sale in June. He also oversaw xAI’s data centre expansion in Memphis, Tennessee, in the United States. The reasons for his departure remain unknown.
Liberatore adds to a string of recent high-profile departures from xAI. Last month, Robert Keele, the company’s general counsel, announced his departure, citing differences between his worldview and Musk’s.
The WSJ report also added that Raghu Rao, a senior lawyer overseeing the commercial and legal affairs for the company, left around the same time.
Furthermore, Igor Babuschkin, the co-founder of the company, also announced last month that he was leaving xAI to start his own venture capital firm.
Liberatore’s appointment at OpenAI comes at a time when the company has announced significant structural changes.
OpenAI recently announced that its nonprofit division will now be ‘paired’ with a stake in its public benefit corporation (PBC), valued at over $100 billion. The company also announced it has signed a memorandum of understanding with Microsoft to transform its for-profit arm into a PBC. This structural change was initially announced by OpenAI in May.
September 16, 2025
Disney, NBCUniversal and Warner Bros. Discovery Sept. 16 filed a lawsuit against Chinese AI company MiniMax claiming the company is stealing their intellectual property without permission.
Hollywood continues to ramp up its legal offensive against artificial intelligence companies as the technology evolves, enabling third parties to artificially create content on the backs of existing work.
MiniMax markets consumer software called Hailuo that gives users access to images and videos of studio characters such as Spider-Man, Superman, Darth Vader, Shrek, Buzz Lightyear and Bugs Bunny, among others.
“MiniMax’s bootlegging business model and defiance of U.S. copyright law are not only an attack on Plaintiffs and the hard-working creative community that brings the magic of movies to life, but are also a broader threat to the American motion picture industry, which has created millions of jobs and contributed more than $260 billion to the nation’s economy,” read the complaint filed in U.S. District Court, Central District of California in Los Angeles.
The studios say the litigation comes after their calls for MiniMax to stop illegally using their IP were ignored.
In June, Disney and NBCU sued San Francisco-based AI company Midjourney claiming the company was marketing software featuring their IP without permission.
Last week, I argued that the MIT “GenAI Divide” report compelled us to rethink how we measure AI’s impact in business. Beyond failure rates lies a more nuanced story of measurement blind spots. Now Gallup’s latest surveys reveal another critical metric that demands our urgent attention: trust. Racing to join the AI gold rush is tempting, but without public trust, the gains will be fleeting.
According to the 2025 Bentley University-Gallup report, about a third of Americans (31%) now trust businesses “a lot” or “some” to use AI responsibly, a marked improvement from 21% in 2023. Meanwhile, 57% say AI does as much harm as good, up from 50%. Forty-one percent trust businesses “not much,” and more than a quarter (28%) say “not at all.” Almost three-quarters expect AI to shrink U.S. jobs in the next decade, a belief that has held steady over three years of polling.
That is not a groundswell of resistance. But it is not durable trust, either.
Are Businesses Measuring the Wrong Things—Again?
Much as my earlier MIT analysis argued for measuring the true impact of AI—capturing shadow adoption, micro-productivity gains, the bottom-up transformation that official “failure rates” miss—the new challenge for business is similar. Businesses track pilots, press releases, and P&L statements, but rarely include public sentiment, trust, or transparency as a KPI. Yet Gallup’s latest and last year’s polling show those are exactly what the public demands.
Transparency is the runaway winner when Americans are asked how companies could alleviate AI concerns. It is a stronger lure for trust than education, more persuasive than regulation, and more urgent than vague promises. Nearly six in ten say businesses should be transparent about how they use AI—how and where decisions are made, who’s impacted, what happens to jobs, and where human oversight begins.
The Trust Dividend: Not Just a PR Asset
Why should business leaders care? Because this is not just about keeping up appearances. It is about unlocking the “trust dividend,” the tangible business benefits that flow when customers and employees believe that AI is improving their experience, not just the bottom line. Trust smooths adoption curves, drives customer engagement, helps attract top talent, and increasingly, keeps businesses on the right side of regulation.
But trust, like productivity, is not an abstract virtue. It needs to be tracked, audited, and managed. Businesses that treat trust-building as a first-class business outcome, for example by tracking trust scores, auditing transparency efforts, and linking senior pay to public and workforce trust metrics, are the ones most likely to reap AI’s sustained rewards.
Neutrality: A Window, Not a Resting Place
MIT and Gallup have uncovered parallel truths. The measurable gains from AI—revenue, cost savings, efficiency—tell only half the story. The deeper transformation is happening in the subtle shifts of daily work life. The rise in neutrality from 50% to nearly 60% in just a year is hardly cause for corporate complacency. It represents a window. Public judgment about AI’s net value remains up for grabs. Businesses that act now to make their use of AI transparent, participatory, and demonstrably fair will capture the swing vote.
What Should Business Leaders Do Now?
Headlines about AI failure often obscure a richer reality of bottom-up innovation and quiet productivity lifts. Now, with Gallup’s pulse on the public, it is clear the next business challenge is not just to do AI right, but to be seen as doing it right. Businesses that win the trust game openly, consistently, and with tangible proof will be in a league of their own.