Tools & Platforms
Sam Altman’s AI Empire Relies on Brutal Labor Exploitation

Artificial intelligence (AI) is quite possibly the most hyped technology in history. For well over half a century, the potential for AI to replace most or all human skills has oscillated in the public imagination between sci-fi fantasy and scientific mission.
From the predictive AI of the 2000s that brought us search engines and apps, to the generative AI of the 2020s that is bringing us chatbots and deepfakes, every iteration of AI is apparently one more leap toward the summit of human-comparable AI, or what is now widely termed Artificial General Intelligence (AGI).
The strength of Karen Hao’s detailed analysis of America’s AI industry, Empire of AI, is that her relentlessly grounded approach refuses to play the game of the AI hype merchants. Hao makes a convincing case that it is wrong to focus on hypotheticals about the future of AI when its present incarnation is fraught with so many problems. She also stresses that exaggerated “doomer” and “boomer” perspectives on what is coming down the line both end up helping the titans of the industry to build a present and future for AI that best serves their interests.
Moreover, AI is a process, not a destination. The AI we have today is itself the product of path dependencies based on the ideologies, infrastructure, and intellectual property that dominate in Silicon Valley. As such, AI is being routed down a highly oligopolistic developmental path, one that is designed deliberately to minimize market competition and concentrate power in the hands of a very small number of American corporate executives.
However, the future of AI remains contested territory. In what has come as a shock to the Silicon Valley bubble, China has emerged as a serious rival to US AI dominance. As such, AI has now moved to the front and center of great-power politics in a way comparable to the nuclear and space races of the past. To understand where AI is and where it is going, we must situate analysis of the technology within the wider economic and geopolitical context in which the United States finds itself.
Hao’s story revolves around OpenAI, the San Francisco company most famous for ChatGPT, the AI chatbot that brought generative AI to the world’s attention. Through the trials and tribulations of its CEO Sam Altman, we are brought into a world of Machiavellian deceit and manipulation, where highfalutin moral ambition collides constantly with the brutal realities of corporate power. Altman survives the various storms that come his way, but only by junking everything he once claimed to believe in.
OpenAI began with the mission of “building AGI that benefits humanity” as a nonprofit that would collaborate with others through openly sharing its research, without developing any commercial products. This objective stemmed from the convictions of Altman and OpenAI’s first major patron, Elon Musk, who believed that AI posed major risks to the world if it was developed in the wrong way. AI therefore required cautious development and tight government regulation to keep it under control.
OpenAI was thus a product of AI’s “doomer” faction. The idea was to be the first to develop AGI in order to be best positioned to rein it in. The fact that Altman would end up flipping OpenAI 180 degrees — creating a for-profit company that produces proprietary software, based on extreme levels of corporate secrecy and shark-like determination to outcompete its rivals in the speed of AI commercialization, regardless of the risks — testifies to his capacity to mutate into whatever he needs to be in the pursuit of wealth and power.
The motivation for the first shift toward what OpenAI would eventually become came from strategic considerations in relation to its doctrine of AI development, called “scaling.” The idea behind scaling was that AI could advance by leaps and bounds simply through the brute force of massive data power. This reflected a devout belief in “connectionism,” a school of AI development that was much easier to commercialize than its rival (“symbolism”).
The connectionists believed that the key to AI was to create “neural networks,” digital approximations of real neurons in the human brain. OpenAI’s big thinkers, most importantly its first chief scientist Ilya Sutskever, believed that if the firm had more data-processing nodes (“neurons”) available to it than anyone else, it would position itself at the cutting edge of AI development. The problem was that scaling, an intrinsically data-intensive strategy, required a huge amount of capital — much more than a nonprofit was capable of attracting.
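To make the scaling idea concrete, here is a minimal illustrative sketch in Python (not OpenAI's code; the layer sizes and names are hypothetical) of a tiny feedforward neural network. It shows how simply widening the layers multiplies the number of trainable parameters, and with them the data and computing power needed for training:

```python
import numpy as np

def init_network(layer_sizes, seed=0):
    """Build weight matrices for a tiny feedforward "neural network".

    Each layer is just a grid of adjustable numbers ("weights"), a crude
    stand-in for the connection strengths between neurons.
    """
    rng = np.random.default_rng(seed)
    return [rng.normal(size=(n_in, n_out))
            for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(weights, x):
    """Push an input vector through every layer with a simple nonlinearity."""
    for w in weights:
        x = np.tanh(x @ w)
    return x

def parameter_count(weights):
    return sum(w.size for w in weights)

# The same architecture at two scales: the only change is wider layers.
small = init_network([128, 256, 256, 10])
large = init_network([128, 4096, 4096, 10])

print(parameter_count(small))  # ~100,000 weights
print(parameter_count(large))  # ~17,300,000 weights: far more compute and data to train
print(forward(small, np.ones(128)).shape)  # (10,)
```

Frontier models push the same arithmetic into the hundreds of billions of parameters, which is why scaling is, above all, a question of capital.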
Driven by the need to scale, OpenAI created a for-profit arm in 2019 to raise capital and build commercial products. As soon as it did so, there was a scramble between Altman and Musk to take over as CEO. Altman won out and Musk, having been sidelined, turned from ally to enemy overnight, accusing Altman of raising funds as a nonprofit under false pretenses. This was a criticism that would later develop into litigation against OpenAI.
But Musk’s ideological justification for the split was an afterthought. If he had won the power struggle, the world’s richest man planned to lash OpenAI to his electric car company, Tesla. Whoever became CEO, OpenAI was on an irreversible path toward becoming just like any other Big Tech giant.
Yet because of the company’s origins, it was left with a strange governance structure that gave board-level control to an almost irrelevant nonprofit arm, based on the ludicrous pretense that, despite its newly embraced profit motive, OpenAI’s mission was still to build AGI for humanity. The Effective Altruism (EA) movement gave a veneer of coherence to the Orwellian ideological precepts of OpenAI. EA promotes the idea that the best way of doing good is to become as rich as possible and then give your money to philanthropic causes.
This junk philosophy found massive support in Silicon Valley, where the idea of pursuing maximum wealth accumulation and justifying it on moral terms was highly convenient. Altman, who in 2025 glad-handed Saudi Crown Prince Mohammed bin Salman alongside Trump just after the despotic ruler announced his own AI venture, epitomizes the endgame of EA posturing: power inevitably becomes its own purpose.
Just four months after the for-profit launched, Altman secured a $1 billion investment from Microsoft. With Musk out of the picture, OpenAI found an alternative Big Tech benefactor to fund its scaling. More willing to trample on data protection rules than big competitors like Google, OpenAI began to extract data from anywhere and everywhere, with little care for its quality or content — a classic tech start-up “disruptor” mentality, akin to Uber or Airbnb.
This data bounty was the raw material that fueled OpenAI’s scaling. Driven by a desire to impress Microsoft founder and former CEO Bill Gates, who wanted to see OpenAI create a chatbot that would be useful for research, the company developed ChatGPT, expecting it to be moderately successful. To everyone’s surprise, within two months ChatGPT became the fastest-growing consumer app in history. The generative AI era was born.
From that point onward, OpenAI became relentlessly focused on commercialization. But the shockwaves of ChatGPT were felt well beyond the company. Scaling became the standard-bearer for AI development: observers deemed whichever company could marshal the greatest amount of “compute” (data power) to be the likely winner of the AI tech race. Alphabet and Meta started to spend sums on AI development that dwarfed those marshaled by the US government and the European Commission.
As Big Tech raced to get ahead on generative AI, the funding rush swept up almost all of the talent in the field. This transformed the nature of AI research, including in universities, with leading professors increasingly tied to one of the Big Tech players. As the stakes grew higher, research from within companies became increasingly secretive and dissent frowned upon. Corporate proprietary walls were dividing up the field of AI development.
We have to place this heavily commodified form of AI development within its overall conjuncture. Had generative AI been developed in the United States in the 1950s, it would have spent years or even decades largely backed by US military R&D budgets. Even after it had undergone commercialization, the state would have remained the main purchaser of the technology for decades. This was the developmental path of semiconductors.
However, in the 2020s, at the tail end of the neoliberal era, it is the corporate–state nexus that drives and frames technological development in the United States, reducing incentives for long-term thinking, and stunting any open, pedagogical process of scientific inquiry. That will have long-term consequences for how AI is developed that are unlikely to be positive, whether for society in general or for American global leadership in particular.
One of the myths of AI is that it is a technology that does not rely on workers. There are essentially three parts to the generative AI production process: extracting the data, crunching the data, and testing or fixing the data. The extraction part relies on dead, rather than living, labor. For example, OpenAI scraped data from Library Genesis, an online repository of books and scholarly articles, making use of centuries of intellectual labor for free.
The data-crunching part of generative AI is all about computing power, which relies on labor only to the extent that the infrastructure required for “compute,” most importantly data centers, sits at the end of a long digital value chain that includes Taiwanese chip manufacturers and Chilean copper miners. The testing and fixing of the data is the part of generative AI production that is most often forgotten about, yet it is also the part that is most directly dependent on workers.
There are two types of digital workers required for testing and fixing the enormous data requirements of generative AI. The first are click workers, also known as data annotators (or data labelers). These are gig workers who earn piece rates for completing short digital tasks, such as categorizing what is contained in an image.
Click workers are vital because without them, AI systems like ChatGPT would be riddled with errors, especially when it comes to “edge cases”: rare or unusual situations that sit at the boundaries of AI’s categorization parameters. Click workers turn the data of generative AI systems from low grade to high quality. This is especially important for OpenAI, since so much of the company’s data has been extracted from the gutters of the internet.
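As an illustration of what this work looks like in data terms (the task format, labels, and piece rate below are hypothetical, not drawn from any particular platform), a single click-work assignment and the label it produces might be structured roughly like this:

```python
from dataclasses import dataclass

@dataclass
class AnnotationTask:
    """One micro-task as a click worker might receive it (illustrative only)."""
    item_id: str
    payload_url: str          # e.g. a link to an image or a snippet of text
    question: str             # what the worker is asked to decide
    allowed_labels: tuple     # the categorization parameters set by the platform
    piece_rate_usd: float     # paid per completed task, not per hour

# An "edge case": an item that sits at the boundary of the label set.
task = AnnotationTask(
    item_id="img_000417",
    payload_url="https://example.com/blurry_street_scene.jpg",
    question="Is the object in the crosswalk a pedestrian, a cyclist, or neither?",
    allowed_labels=("pedestrian", "cyclist", "neither"),
    piece_rate_usd=0.01,
)

# The worker's judgment becomes one labeled example in the training data.
completed = {"item_id": task.item_id, "label": "cyclist", "worker_id": "anon_883"}
print(completed, f"earned ${task.piece_rate_usd:.2f}")
```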
The barriers to entry for click work are extremely low, because anyone who can access the internet can perform the most basic tasks. Click workers are operating in a global labor market with little connection to their fellow workers, meaning they have very limited leverage over their digital bosses. As such, the pay rates are rock-bottom and the conditions as precarious as it gets.
Hao finds that Venezuela became the global hotbed of click work for a period of time, due to its high education levels, good internet access, and massive economic crisis. The tough US sanctions on Venezuela didn’t stop American AI companies from exploiting the South American country’s desperate and impoverished workforce. Once click-work outsourcing firms like Remotasks feel that they have maximized the labor exploitation of one crisis-hit country, or start to face resistance over working conditions, they simply “robo-fire” workers from that location and bring workers on board from somewhere else.
The second type of worker in the AI industry is the content moderator. Because OpenAI and other AI companies are scraping the detritus of the internet for data, a substantial portion of that data is saturated with racism, sexism, child pornography, fascist views, and every other ugly thing one can think of. A version of AI that doesn’t have the horrors of the internet filtered out will develop these characteristics in its responses; indeed, earlier versions of what would become ChatGPT did produce neo-Nazi propaganda, alarming OpenAI’s compliance team.
The solution has been to turn to human content moderators to filter the filth out of the AI’s system, in the same way content moderators have been tasked for years now with policing social media content. Unlike click workers, the content moderator workforce tends to be subject to a regime of digital Taylorism rather than one of piece work. This takes the form of a call center–style setup where workers are motivated by bonuses in target-driven environments, all the time under the watchful eyes of human supervisors and digital surveillance.
Like the click workers, they are completing small digital tasks by annotating data, but the data they are annotating consists of the vilest content humans can produce. Because they are training the AI, it’s necessary for content moderators to look closely at all the gory details that flash up on their screen in order to label each part correctly. Being exposed to this repeatedly and exhaustively is a mental health nightmare.
Hao follows the story of Mophat Okinyi, a Kenyan content moderator working for outsourcing firm Sama, moderating Meta and OpenAI content. The longer Okinyi worked for Sama, the more erratic his behavior became and the more his personality changed, destroying his relationship and leading to spiraling costs for mental health support.
Having reported on content moderation myself, I know that Okinyi’s case is by no means exceptional. It is the norm for content moderators to have their minds systematically broken down by the relentless brutality they must witness repeatedly just to do their job.
While most click work and content moderation is done in the Global South, there are signs that as AI becomes more complex, it will increasingly need data workers in the Global North as well. The main reason for this is the increasing importance of Reinforcement Learning from Human Feedback (RLHF) to AI development.
RLHF is a more complex form of data annotation, because click workers need to compare two responses from an AI and be able to explain why one is better than the other. As AI tools are developed for specific industries, the need for specialist expertise as well as an understanding of culturally specific cues means that RLHF increasingly requires high-skill workers to enter the AI industry.
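A rough sketch of what an RLHF comparison amounts to in practice (the field names are invented for illustration, and the reward model is reduced to a stub): the annotator ranks two candidate responses and explains the ranking, and those preference pairs are then typically used to train a reward model with a pairwise loss of roughly this form.

```python
import math

# One RLHF comparison: a prompt, two model responses, and the annotator's verdict.
comparison = {
    "prompt": "Summarize this contract clause for a client.",
    "response_a": "A faithful, plain-language summary of the clause.",
    "response_b": "A summary that misstates the notice period for termination.",
    "preferred": "a",
    "rationale": "B misreads the notice period; A is accurate and clearer.",
}

def reward(response: str) -> float:
    """Stub for a learned reward model that scores a response (higher is better)."""
    return float(len(response))  # placeholder scoring, for illustration only

def pairwise_preference_loss(chosen: str, rejected: str) -> float:
    """Bradley-Terry-style objective commonly used on preference pairs:
    training pushes the reward model to score the preferred response higher."""
    margin = reward(chosen) - reward(rejected)
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

chosen, rejected = (
    (comparison["response_a"], comparison["response_b"])
    if comparison["preferred"] == "a"
    else (comparison["response_b"], comparison["response_a"])
)
print(pairwise_preference_loss(chosen, rejected))
```

The written rationale is where the specialist expertise comes in: judging which response is better requires exactly the kind of domain knowledge and cultural context described above.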
In keeping with the style of the book, Hao does not speculate on where RLHF might lead, but it is worth briefly considering its potential impact on the future of work. If generative AI tools can produce content which is as good as or better than material from a human, then it is not inconceivable that such tools could replace the worker in any content-producing industry.
However, that would not mean that the skills of those workers would disappear entirely: there would still be a need, for example, for paralegals, but their job would be to test and fix the paralegal AI. At that point, these professional service-sector jobs would be exposed to the Uberized model of work that click workers in the Global South have now experienced for years. It’s not for nothing that Altman has said “there will be some change required to the social contract.”
Of course, there remain significant question marks about generative AI’s true capacities in a wide range of content production. But wherever you sit on the scale between skeptic and true believer, there’s little doubt that AI will increasingly be relevant not only to the jobs of the most impoverished sections of the working class, but also to workers who are used to having some level of financial security due to their position higher up the labor-market ladder. The drawing of a much larger pool of workers into the precariat could have explosive social consequences.
AI’s effect on the environment is likely to be just as dramatic as its impact on labor, if not more so. Generative AI’s enormous data usage requires gigantic data centers packed with energy-hungry GPU chips to service it. These data centers need vast amounts of land to build on and huge quantities of water to cool them down. As generative AI products become increasingly widely used, the ecological footprint of the industry expands relentlessly.
Hao highlights some stunning statistics and predictions. Every AI-generated image consumes roughly as much energy as charging a smartphone to 25 percent of its capacity. AI’s water usage could match half of all the water used in the UK by 2027. By 2030, AI could be using more energy than all of India, the world’s third-largest consumer of electricity.
The environmental consequences have already been significant. In Iowa, two years into a drought, Microsoft guzzled 11.5 million tonnes of the state’s potable water. Uruguay, a country that has experienced repeated droughts, saw mass protests after the courts forced its government to reveal the extent of the drinking water that Google’s data centers in the country were using. “This is not drought, it’s pillage,” reads graffiti in Montevideo.
What makes the arrival of data centers en masse especially hard to stomach for local populations is the fact that they provide hardly any upsides. Data centers generate very few jobs in the places they are located, while draining local areas of their land and water, thus actively damaging more labor-intensive industries.
In spite of this, following the logic of the scaling doctrine, we should expect data centers to grow ever bigger as “compute” expands to keep AI moving forward. Altman has invested heavily in a nuclear fusion start-up as the golden ticket to abundant and free energy, but just like AGI, it is a bet on a miracle cure tomorrow that distracts from the real problems AI scaling is causing today.
However, in a rare bit of good news for the world’s ecology, the scaling doctrine received a hammer blow from the Far East in January. DeepSeek, a Chinese generative AI chatbot, launched and quickly surpassed ChatGPT as the most downloaded app in the United States.
The remarkable thing about DeepSeek is not what it did, but how it did it. The chatbot cost just $6 million to train, roughly one-fiftieth of the cost of ChatGPT, while scoring higher on some benchmarks. DeepSeek was trained on older GPU chips, designed by Nvidia with deliberately reduced performance to comply with US chip export restrictions on China. Because of its efficiency, DeepSeek’s energy consumption is 90 percent lower than that of ChatGPT. The technical workings behind this feat of engineering were made open-source, so anyone could see how it was done.
DeepSeek was a technological marvel and a geopolitical earthquake rolled into one. Not only did it mark China’s arrival as a tech superpower, but it also demonstrated that the scaling doctrine embraced by the whole of Silicon Valley as the go-to methodology for generative AI had proven to be behind the curve, at best. The shock that a Chinese company could embarrass Silicon Valley was so great that it triggered panic in Wall Street about whether the trillions already invested in American AI constituted a bet gone badly wrong.
In one day, the fall in the market capitalization of tech stocks was equivalent to the entire financial value of Mexico. Even Donald Trump weighed in to say that DeepSeek’s emergence was “a wake-up call” for US Big Tech. On X, Altman struck a positive tone in response, but OpenAI quickly started to brief the press that DeepSeek might have “distilled” OpenAI’s models in creating its chatbot, though little has been heard about this claim since. In any case, distillation can’t explain the enormous efficiency gains of DeepSeek compared to OpenAI.
It’s unfortunate that DeepSeek doesn’t appear in Empire of AI. Hao writes that she finished the book in January 2025, the month of DeepSeek’s launch. It would have been wise for the publisher to have given Hao a six-month extension to write a chapter on DeepSeek and the fallout in the US, especially considering how much of the book is a critique of the dogma that scaling is the only way to seriously develop AI.
However, she has commented elsewhere on DeepSeek’s dramatic arrival and the flaws it reveals about the US AI industry: “DeepSeek has demonstrated that scaling up AI models relentlessly, a paradigm OpenAI introduced and champions, is not the only, and far from the best, way to develop AI.”
DeepSeek also raised more profoundly ideological questions about AI development. If a Chinese company could develop cutting-edge tech on an open-source basis, giving everyone else the opportunity to test the underlying assumptions of their innovation and build on them, why were American companies busy constructing giant proprietary software cages around their tech — a form of enclosure that was bound to inhibit the speed of scientific progress? Some have started asking whether Chinese communism offers a better ecosystem for AI development than American capitalism.
In fact, the question of open-source versus proprietary approaches just scratches the surface of the debates that society should be having about artificial intelligence. Ultimately, both DeepSeek and ChatGPT operate on capitalist business models, just with different principles of technical development. While Google’s open-source Android operating system differentiates it from Apple, no one today invests any hopes in Google as a model for socially just tech development. The bigger question we should be asking is this: if we can’t trust oligopolistic capitalist enterprises with a technology as powerful as this, how should AI be governed?
Hao only really gets her teeth into this point in the book’s epilogue, “How the Empire Falls.” She takes inspiration from Te Hiku, a Māori AI speech recognition project. Te Hiku seeks to revitalize the te reo language through putting archived audio tapes of te reo speakers into an AI speech recognition model, teaching new generations of Māori who have few human teachers left.
The tech has been developed on the basis of consent and active participation from the Māori community, and it is only licensed to organizations that respect Māori values. Hao believes Te Hiku shows there is “another way” of doing AI:
Models can be small and task specific, their training data contained and knowable, ridding the incentives for widespread exploitative and psychologically harmful labor practices and the all-consuming extractivism of producing and running massive supercomputers. The creation of AI can be community driven, consensual, respectful of local context and history; its application can uplift and strengthen marginalised communities; its governance can be inclusive and democratic.
More broadly, Hao says we should be aiming for “redistributing power” in AI along three axes: knowledge, resources, and influence. There should be greater funding for organizations pursuing new directions of AI research, holding Big Tech to account, or developing community-based AI tools like Te Hiku. There should also be transparency over AI training data, its environmental impact, supply chains, and land leases.
Labor unions should be supported to develop power among data workers and workers whose jobs are under threat from automation. Finally, “broad-based education” is required to bust the myths surrounding AI, so that the public can come to a more grounded understanding of how AI tools are built, what their constraints are, and whose interests they serve.
Although these are important ideas, in and of themselves they wouldn’t threaten the power of companies like OpenAI. The state is notably absent from Hao’s vision for bringing down the AI tech giants. The questions of how AI should be regulated and what ownership structure it should have go unexplored in Empire of AI.
Perhaps in the age of Trump there is a feeling of skepticism among progressives that the state can be anything other than a tool for entrenching the power of corporate elites. It is certainly hard not to be cynical when confronted with projects like Stargate, an OpenAI-backed private sector collaboration to invest $500 billion in AI infrastructure. Stargate is underpinned by a commitment from the Trump administration that it will bend and break regulations as necessary to ensure the project gets the energy supply it needs — a clear case of the state–corporate nexus working seamlessly, with little care about the consequences for society at large.
Yet the left can’t sidestep the question of state power and AI. While projects like Te Hiku are no doubt valuable, by definition they cannot be scaled-up alternatives to the collective power of American AI capital, which commands resources far greater than many of the world’s states. If it becomes normal for AI tools like ChatGPT to be governed by and for Silicon Valley, we risk seeing the primary means of content production concentrated in the hands of a tiny number of tech barons.
We therefore need to put big solutions on the table. Firstly, regulation: there must be a set of rules that place strict limits on where AI companies get their data from, how their models are trained, and how their algorithms are managed. In addition, all AI systems should be forced to operate within tightly regulated environmental limits: energy usage for generative AI cannot be a free-for-all on a planet under immense ecological stress. AI-powered automated weapons systems should be prohibited. All of this should be subject to stringent, independent audits to ensure compliance.
Secondly, although the concentration of market power in the AI industry took a blow from DeepSeek’s arrival, there remain strong tendencies within AI — and indeed in digital tech as a whole — towards monopolization. Breaking up the tech oligarchy would mean eliminating gatekeepers that concentrate power and control data flows.
Finally, the question of ownership should be a serious part of the debate. Te Hiku shows that when AI tools are built by organizations with entirely different incentive structures in place, they can produce wildly different results. As long as artificial intelligence is designed for the purposes of the competitive accumulation of capital, firms will continue to find ways to exploit labor, degrade the environment, take short cuts in data extraction, and compromise on safety, because if they don’t, one of their competitors will.
It is possible to imagine a world where a socialized AI serves as a genuine aid to humanity. It would be one where instead of displacing jobs, AI would be designed to help workers reduce the amount of time they spend on technical and bureaucratic tasks, focusing human energies on problem-solving instead, and reducing the length of the working week. Rather than gobbling up water supplies, AI would function as a resource planning tool to help identify waste and duplication within and across energy systems.
These possibilities are far removed from the fantasies of AGI, whereby Artificial Intelligence will supposedly become so powerful that it will resolve problems deeply embedded in the social relations of capitalism. Instead, this is a vision for AI that presupposes structural change.
On May 29, the US Department of Energy tweeted the following message: “AI is the next Manhattan Project, and THE UNITED STATES WILL WIN.” US government agencies are not the only ones to have approvingly compared AI to the Manhattan Project.
From the days when OpenAI was just an idea in Altman’s head, he was proposing a “Manhattan Project for AI” to Musk. When he watched Oppenheimer, the Oscar-winning biopic of the man who led the Manhattan Project, Altman concluded that the mushroom cloud over Japan was a bad look for the atomic age — it’s important to get the PR right when it comes to emerging technologies. The obvious moral lesson of the story — the idea that scientists with good intentions can cause monstrous damage by naively assuming they are (and will always be) on the side of the good guys — never seemed to cross his mind.
The Manhattan Project is an imperfect analogy for the AI tech race. While geopolitical tension is undoubtedly growing between the United States and China, with technology at the heart of it, we are thankfully not yet in the midst of a world war.
The point at which the comparison holds best is this: in both cases, the scientists at the technological vanguard were and are the ones most loudly warning about the risks. As in the case of the Manhattan Project, the interest of US politicians in seeing their scientists develop the technology faster than anyone else is drowning out the warnings about the risks for humanity as a whole.
“A nation which sets the precedent of using these newly liberated forces of nature for purposes of destruction,” Leo Szilard, who first developed the concept of the nuclear chain reaction, wrote, “may have to bear the responsibility of opening the door to an era of devastation on an unimaginable scale.” Today, it is the likes of Geoffrey Hinton, known as the “godfather of AI,” who are playing the role of Szilard. Hinton resigned from Google over the “existential risk” posed by the way the technology is being developed.
Moreover, we don’t need to speculate about future risks — we can see them in the here and now. The Israeli military has used an AI system called “Lavender” to identify “targets” in its genocide in Gaza. Lavender’s kill list was made up of thirty-seven thousand Palestinians, with no requirement to check why those individuals were on the list. Human input was limited to a rubber-stamp process, even though its overseers knew that Lavender makes errors in at least 10 percent of cases. As has long been clear, Palestine serves as a laboratory for the use of emerging military technologies, which are refined and then exported to the world.
The Left should actively oppose a Manhattan Project for AI. A frenzied geopolitical competition to develop highly militarized and propagandistic use cases for AI is not in the interests of the United States, China, or the rest of the world. Whether it is Gaza today or Hiroshima and Nagasaki in the 1940s, we should recognize what the United States “winning” the tech race in this sense looks like. Loosening the grip of the US corporate–state nexus over AI should be a key priority for all those interested in a world of peace and social justice.
Tools & Platforms
DRUID AI Raises $31 Million Series C

DRUID AI today announced it has secured $31 million in Series C financing to advance the global expansion of its enterprise-ready agentic AI platform under the leadership of its new CEO Joseph Kim. The strategic investment – which will advance DRUID AI’s mission to empower companies to create, manage, and orchestrate conversational AI agents – was led by Cipio Partners, with participation from TQ Ventures, Karma Ventures, Smedvig, and Hoxton Ventures.
“This investment is both a testament to DRUID AI’s success and a catalyst to elevate businesses globally through the power of agentic AI,” said Kim. “Customer success is what it’s all about, and delivering real business outcomes requires understanding companies’ pain points and introducing innovations that help those customers address their complex challenges. That’s the DRUID AI way, and now we’re bringing it to the world through this new phase of global growth.”
Roland Dennert, managing partner at Cipio Partners, a premier global growth equity fund, explained: “At Cipio Partners, we focus on supporting growth-stage technology companies that have achieved product-market fit and are ready to scale. DRUID AI aligns perfectly with our investment strategy – offering a differentiated, AI-based product in a vast and rapidly growing market. Our investment will help accelerate DRUID AI’s expansion into the U.S. and elsewhere, fuel further technological advancements, and strengthen its position as a global leader in enterprise AI solutions. We are excited to partner with DRUID AI on its journey and look forward to supporting the company in shaping the future of enterprise AI-driven interactions.”
Kim’s proven track record in leading high-performance teams and scaling AI-driven technology businesses ideally positions him to spearhead that effort. He has more than two decades of operating executive experience in the application, infrastructure, and security industries. Most recently, he was CEO of Sumo Logic. He serves on the boards of directors of SmartBear and Andela. In addition, he was a senior operating partner at private equity firm Francisco Partners; CPTO at Citrix, SolarWinds, and Hewlett Packard Enterprise; and chief architect at GE.
DRUID AI cofounder and Chief Operating Officer Andreea Plesea, who had been interim CEO, commented: “I am delighted Joseph is taking the reins as CEO to drive our next level of growth. His commitment to customer success and developing the exact solutions customers need is in total sync with the approach that has fueled our progress and positioned us to raise new funds. Joseph and the Series C set up DRUID AI and our clients for expanded innovation and impact.”
The appointment of Kim as CEO and the new funding come on the heels of DRUID AI earning a Challenger spot in the Gartner Magic Quadrant for Conversational AI Platforms for 2025. This is just the latest development validating the maturity of DRUID AI’s platform and its readiness to deliver business results in a market that is experiencing rapid advancement and adoption.
In 2024, DRUID AI grew ARR 2.7x year-over-year. Its award-winning platform has powered more than 1 billion conversations across thousands of agents. In addition, the DRUID AI global partner ecosystem has attracted industry giants Microsoft, Genpact, Cognizant, and Accenture.
DRUID AI is trusted by more than 300 global clients across banking, financial services, government, healthcare, higher education, manufacturing, retail, and telecommunications. Leading organizations such as AXA Insurance, Carrefour, the Food and Drug Administration (FDA), Georgia Southern University, Kmart Australia, Liberty Global Group, MatrixCare, National Health Service, and Orange Auchan have adopted DRUID AI to redefine the way they operate.
Companies have embraced DRUID AI to help teams accelerate digital operations, reduce the complexity of day-to-day work, enhance user experience, and maximize technology ROI. Powered by advanced agentic AI and driven by the DRUID Conductor, its core orchestration engine, the DRUID platform enables businesses to effortlessly deploy AI agents and intelligent apps that streamline processes, integrate seamlessly with existing systems, and fulfill complex requests efficiently. DRUID AI’s end-to-end platform delivers 98% first response accuracy.
“At Georgia Southern, we recognized that to truly meet the needs of today’s digital native students, we needed to offer dynamic and accurate real-time support that would solve their issues on the spot,” said Ashlea Anderson, CIO at Georgia Southern University. “By leveraging DRUID AI’s platform, we’ve created personalized and intuitive experiences to support students throughout their academic journeys, increasing enrollment and student retention. The result is a more efficient, connected campus where students feel supported, engaged, and better positioned to succeed.”
To learn more, visit www.druidai.com.
About DRUID AI
DRUID AI (druidai.com) is an end-to-end enterprise-grade AI platform that enables lightning-fast development and deployment of AI agents, knowledge bases, and intelligent apps for teams looking to automate business processes and improve technology ROI. DRUID AI Agents enable personalized, omnichannel, and secure interactions while seamlessly integrating with existing business systems. Since 2018, DRUID AI has been actively pursuing its vision of providing each employee with an intelligent virtual assistant, establishing an extensive partner network of over 200 partners, and servicing more than 300 clients worldwide.
Tools & Platforms
VA leader eyes ‘aggressive deployment’ of AI as watchdog warns of challenges to get there

A key technology leader at the Department of Veterans Affairs told lawmakers Monday that the agency intends to “capitalize” on artificial intelligence to help overcome its persistent difficulties in providing timely care and maintaining cost-effective operations.
At the same time, a federal watchdog warned the same lawmakers that the VA could face challenges before the agency can effectively do so.
Lawmakers on the House VA subcommittee on technology modernization pressed Charles Worthington, the VA’s chief data officer and chief technology officer, over the agency’s plans to deploy AI across its dozens of facilities as the federal government increasingly turns to automation technology.
“I’m pleased to report that all VA employees now have access to a secure, generative AI tool to assist them with their work,” Worthington told the subcommittee. “In surveys, users of this tool are reporting that it’s saving them over two hours per week.”
Worthington outlined how the agency is utilizing machine learning in agency workflows, as well as in clinical care, including earlier disease detection and ambient listening tools that are expected to be rolled out at some facilities later this year. The technology can also be used to identify veterans who may be at high risk of overdose and suicide, Worthington added.
“Despite our progress, adopting AI tools does present challenges,” Worthington acknowledged in his opening remarks. “Integrating new AI solutions with a complex system architecture and balancing innovation with stringent security compliance is crucial.”
Carol Harris, the Government Accountability Office’s director of information technology and cybersecurity, later revealed during the hearing that VA officials told the watchdog that “existing federal AI policy could present obstacles to the adoption of generative AI, including in the areas of cybersecurity, data privacy and IT acquisitions.”
Harris noted that generative AI can require infrastructure with significant computational and technical resources, which the VA has reported issues accessing and receiving funding for. In its full report, the GAO outlined an “AI accountability framework” intended to address some of these issues.
Questions were also raised over the VA’s preparedness to deploy the technology to the agency’s more than 170 facilities.
“We have such an issue with the VA because it’s a big machine, and we’re trying to compound or we’re trying to bring in artificial intelligence to streamline the process, and you have 172 different VA facilities, plus satellite campuses, and that’s 172 different silos, and they don’t work together,” said Rep. Morgan Luttrell, R-Texas. “They don’t communicate very well with each other.”
Worthington said he believes AI is being used at facilities nationwide. Luttrell pushed back, stating he’s heard from multiple sites that don’t have AI functions because “their sites aren’t ready.”
“Or they don’t have the infrastructure in place to do that because we keep compounding software on top of software, and some sites can’t function at all with [the] new software they’re trying to implement,” Luttrell added.
“I would agree that having standardized systems is a challenge at the VA, and so there is a bit of a difference in different facilities,” Worthington responded. “Although I do think many of them are starting to use AI-assisted medical devices, for example, and a number of those are covered in this inventory,” he added, referring to the VA’s AI use case inventory.
Luttrell then asked if the communication between sites needs to happen before AI can be implemented.
“We can’t wait because AI is here whether we’re ready or not,” said Worthington, who suggested creating a standard template that sites can use, pointing to the VA GPT tool as an example. VA GPT is available to every VA employee, he added.
Worthington told lawmakers that recruiting and retaining AI talent remains difficult, while scaling commercial AI tools brings new costs.
Aside from facility deployment, lawmakers repeatedly raised concerns about data privacy, given the VA’s extensive collection of medical data. Amid these questions, Worthington maintained that all AI systems must meet “rigorous security and privacy standards” before receiving an authority to operate within the agency.
“Before we bring a system into production, we have to review that system for its compliance with those requirements and ensure that the partners that are working with us on those systems attest to and agree with those requirements,” he said.
Members from both sides of the aisle raised concerns about data security after the AI model had been implemented in the agency. Subcommittee chair Tom Barrett, R-Mich., said he does not want providers to “leech” off the VA’s extensive repository of medical data “solely for the benefit” of AI, and not the agency.
Tools & Platforms
Tech giants to pour billions into UK AI. Here’s what we know so far

Microsoft CEO Satya Nadella speaks at Microsoft Build AI Day in Jakarta, Indonesia, on April 30, 2024.
LONDON — Microsoft said on Tuesday that it plans to invest $30 billion in artificial intelligence infrastructure in the U.K. by 2028.
The investment includes $15 billion in capital expenditures and $15 billion in its U.K. operations, Microsoft said. The company said the investment would enable it to build the U.K.’s “largest supercomputer,” with more than 23,000 advanced graphics processing units, in partnership with Nscale, a British cloud computing firm.
The spending commitment comes as President Donald Trump embarks on a state visit to Britain. Trump arrived in the U.K. Tuesday evening and is set to be greeted at Windsor Castle on Wednesday by King Charles and Queen Camilla.
During his visit, all eyes are on U.K. Prime Minister Keir Starmer, who is under pressure to bring stability to the country after the exit of Deputy Prime Minister Angela Rayner over a property tax scandal and a major cabinet reshuffle.
On a call with reporters on Tuesday, Microsoft President Brad Smith said his stance on the U.K. has warmed over the years. He previously criticized the country over its attempt in 2023 to block the tech giant’s $69 billion acquisition of video game developer Activision Blizzard. The deal was cleared by the U.K.’s competition regulator later that year.
“I haven’t always been optimistic every single day about the business climate in the U.K.,” Smith said. However, he added, “I am very encouraged by the steps that the government has taken over the last few years.”
“Just a few years ago, this kind of investment would have been inconceivable because of the regulatory climate then and because there just wasn’t the need or demand for this kind of large AI investment,” Smith said.
Starmer and Trump are expected to sign a new deal Wednesday “to unlock investment and collaboration in AI, Quantum, and Nuclear technologies,” the government said in a statement late Tuesday.