Tools & Platforms

Sam Altman’s AI Empire Relies on Brutal Labor Exploitation



Artificial intelligence (AI) is quite possibly the most hyped technology in history. For well over half a century, the prospect of AI replacing most or all human skills has oscillated in the public imagination between sci-fi fantasy and scientific mission.

From the predictive AI of the 2000s that brought us search engines and apps, to the generative AI of the 2020s that is bringing us chatbots and deepfakes, every iteration of AI is apparently one more leap toward the summit of human-comparable AI, or what is now widely termed Artificial General Intelligence (AGI).

The strength of Karen Hao’s detailed analysis of America’s AI industry, Empire of AI, is that her relentlessly grounded approach refuses to play the game of the AI hype merchants. Hao makes a convincing case that it is wrong to focus on hypotheticals about the future of AI when its present incarnation is fraught with so many problems. She also stresses that exaggerated “doomer” and “boomer” perspectives on what is coming down the line both end up helping the titans of the industry to build a present and future for AI that best serves their interests.

Moreover, AI is a process, not a destination. The AI we have today is itself the product of path dependencies rooted in the ideologies, infrastructure, and intellectual property that dominate Silicon Valley. As such, AI is being routed down a highly oligopolistic developmental path, one deliberately designed to minimize market competition and concentrate power in the hands of a very small number of American corporate executives.

However, the future of AI remains contested territory. In what has come as a shock to the Silicon Valley bubble, China has emerged as a serious rival to US AI dominance. As such, AI has now moved to the front and center of great-power politics in a way comparable to the nuclear and space races of the past. To understand where AI is and where it is going, we must situate analysis of the technology within the wider economic and geopolitical context in which the United States finds itself.

Hao’s story revolves around OpenAI, the San Francisco company most famous for ChatGPT, the AI chatbot that brought generative AI to the world’s attention. Through the trials and tribulations of its CEO Sam Altman, we are brought into a world of Machiavellian deceit and manipulation, where highfalutin moral ambition collides constantly with the brutal realities of corporate power. Altman survives the various storms that come his way, but only by junking everything he once claimed to believe in.

OpenAI began with the mission of “building AGI that benefits humanity” as a nonprofit that would collaborate with others through openly sharing its research, without developing any commercial products. This objective stemmed from the convictions of Altman and OpenAI’s first major patron, Elon Musk, who believed that AI posed major risks to the world if it was developed in the wrong way. AI therefore required cautious development and tight government regulation to keep it under control.

OpenAI was thus a product of AI’s “doomer” faction. The idea was to be the first to develop AGI in order to be best positioned to rein it in. The fact that Altman would end up flipping OpenAI 180 degrees — creating a for-profit company that produces proprietary software, based on extreme levels of corporate secrecy and shark-like determination to outcompete its rivals in the speed of AI commercialization, regardless of the risks — testifies to his capacity to mutate into whatever he needs to be in the pursuit of wealth and power.

The motivation for the first shift toward what OpenAI would eventually become came from strategic considerations in relation to its doctrine of AI development, called “scaling.” The idea behind scaling was that AI could advance by leaps and bounds simply through the brute force of massive data power. This reflected a devout belief in “connectionism,” a school of AI development that was much easier to commercialize than its rival (“symbolism”).

The connectionists believed that the key to AI was to create “neural networks,” digital approximations of real neurons in the human brain. OpenAI’s big thinkers, most importantly its first chief scientist Ilya Sutskever, believed that if the firm had more data-processing nodes (“neurons”) available to it than anyone else, it would position itself at the cutting edge of AI development. The problem was that scaling, an intrinsically data-intensive strategy, required a huge amount of capital — much more than a nonprofit was capable of attracting.

Driven by the need to scale, OpenAI created a for-profit arm in 2019 to raise capital and build commercial products. As soon as it did so, there was a scramble between Altman and Musk to take over as CEO. Altman won out and Musk, having been sidelined, turned from ally to enemy overnight, accusing Altman of raising funds as a nonprofit under false pretenses. This was a criticism that would later develop into litigation against OpenAI.

But Musk’s ideological justification for the split was an afterthought. If he had won the power struggle, the world’s richest man planned to lash OpenAI to his electric car company, Tesla. Whoever became CEO, OpenAI was on an irreversible path toward becoming just like any other Big Tech giant.

Yet because of the company’s origins, it was left with a strange governance structure that gave board-level control to an almost irrelevant nonprofit arm, based on the ludicrous pretense that, despite its newly embraced profit motive, OpenAI’s mission was still to build AGI for humanity. The Effective Altruism (EA) movement gave a veneer of coherence to the Orwellian ideological precepts of OpenAI. EA promotes the idea that the best way of doing good is to become as rich as possible and then give your money to philanthropic causes.

This junk philosophy found massive support in Silicon Valley, where the idea of pursuing maximum wealth accumulation and justifying it in moral terms was highly convenient. Altman, who in 2025 glad-handed Saudi crown prince Mohammed bin Salman alongside Trump just after the despotic ruler announced his own AI venture, epitomizes the inevitable endgame of EA posturing: power becomes its own purpose.

Just four months after the for-profit launched, Altman secured a $1 billion investment from Microsoft. With Musk out of the picture, OpenAI found an alternative Big Tech benefactor to fund its scaling. More willing to trample on data protection rules than big competitors like Google, OpenAI began to extract data from anywhere and everywhere, with little care for its quality or content — a classic tech start-up “disruptor” mentality, akin to Uber or Airbnb.

This data bounty was the raw material that fueled OpenAI’s scaling. Driven by a desire to impress Microsoft co-founder and former CEO Bill Gates, who wanted to see OpenAI create a chatbot that would be useful for research, the company developed ChatGPT, expecting it to be moderately successful. To everyone’s surprise, within two months ChatGPT became the fastest-growing consumer app in history. The generative AI era was born.

From that point onward, OpenAI became relentlessly focused on commercialization. But the shockwaves of ChatGPT were felt well beyond the company. Scaling became the standard-bearer for AI development: observers deemed whichever company could marshal the greatest amount of “compute” (data power) to be the likely winner of the AI tech race. Alphabet and Meta started to spend sums on AI development that dwarfed those marshalled by the US Government and the European Commission.

As Big Tech raced to get ahead on generative AI, the funding rush swept up almost all of the talent in the field. This transformed the nature of AI research, including in universities, with leading professors increasingly tied to one of the Big Tech players. As the stakes grew higher, research from within companies became increasingly secretive and dissent frowned upon. Corporate proprietary walls were dividing up the field of AI development.

We have to place this heavily commodified form of AI development within its overall conjuncture. If generative AI had been developed in the US in the 1950s, it would have spent years or even decades backed largely by US military R&D budgets. Even after it had been commercialized, the state would have remained the main purchaser of the technology for decades. This was the developmental path of semiconductors.

However, in the 2020s, at the tail end of the neoliberal era, it is the corporate–state nexus that drives and frames technological development in the United States, reducing incentives for long-term thinking, and stunting any open, pedagogical process of scientific inquiry. That will have long-term consequences for how AI is developed that are unlikely to be positive, whether for society in general or for American global leadership in particular.

One of the myths of AI is that it is a technology that does not rely on workers. There are essentially three parts to the generative AI re-production process: extracting the data, crunching the data, and testing or fixing the data. The extraction part relies on dead, rather than living, labor. For example, OpenAI scraped data from Library Genesis, an online repository of books and scholarly articles, making use of centuries of intellectual labor for free.

The crunching data part of generative AI is all about computing power, which relies on labor only to the extent that the infrastructure required for “compute,” most importantly data centers, is based on a long digital value chain that includes Taiwanese chip manufacturers and Chilean copper miners. While the testing and fixing of the data is the part of generative AI re-production that is most often forgotten about, it is also the part that is most directly dependent on workers.

There are two types of digital workers required for testing and fixing the enormous data requirements of generative AI. The first are click workers, also known as data annotators (or data labelers). These are gig workers who earn piece rates for completing short digital tasks, such as categorizing what is contained in an image.

Click workers are vital because without them, AI systems like ChatGPT would be riddled with errors, especially when it comes to “edge cases”: rare or unusual situations that sit at the boundaries of AI’s categorization parameters. Click workers turn the data of generative AI systems from low grade to high quality. This is especially important for OpenAI, since so much of the company’s data has been extracted from the gutters of the internet.

The barriers to entry for click work are extremely low, because anyone who can access the internet can perform the most basic tasks. Click workers are operating in a global labor market with little connection to their fellow workers, meaning they have very limited leverage over their digital bosses. As such, the pay rates are rock-bottom and the conditions as precarious as it gets.

Hao finds that Venezuela became the global hotbed of click work for a period, thanks to its high education levels, good internet access, and massive economic crisis. Tough US sanctions on Venezuela didn’t stop American AI companies from exploiting the South American country’s desperate and impoverished workforce. Once click-work outsourcing firms like Remotasks feel they have maximized the labor exploitation of one crisis-hit country, or start to face resistance over working conditions, they simply “robo-fire” workers from that location and bring workers on board from somewhere else.

The second type of worker in the AI industry is the content moderator. Because OpenAI and other AI companies are scraping the detritus of the internet for data, a substantial portion is saturated with racism, sexism, child pornography, fascist views, and every other ugly thing one can think of. A version of AI that doesn’t have the horrors of the internet filtered out will develop these characteristics in its responses; indeed, earlier versions of what would become ChatGPT did produce neo-Nazi propaganda, alarming OpenAI’s compliance team.

The solution has been to turn to human content moderators to filter the filth out of the AI’s system, in the same way content moderators have been tasked for years with policing social media content. Unlike click workers, the content moderation workforce tends to be subject to a regime of digital Taylorism rather than piece work. This takes the form of a call center-style setup where workers are motivated by bonuses in target-driven environments, all the time under the watchful eyes of human supervisors and digital surveillance.

Like the click workers, they are completing small digital tasks by annotating data, but the data they are annotating consists of the vilest content humans can produce. Because they are training the AI, it’s necessary for content moderators to look closely at all the gory details that flash up on their screen in order to label each part correctly. Being exposed to this repeatedly and exhaustively is a mental health nightmare.

Hao follows the story of Mophat Okinyi, a Kenyan content moderator working for outsourcing firm Sama, moderating Meta and OpenAI content. The longer Okinyi worked for Sama, the more his behavior became erratic and his personality changed, destroying his relationship and leading to spiraling costs for mental health support.

Having reported on content moderation myself, I know that Okinyi’s case is by no means exceptional. It is the norm for content moderators to have their minds systematically broken down by the relentless brutality they must witness repeatedly just to do their job.

While most click work and content moderation is done in the Global South, there are signs that as AI becomes more complex, it will increasingly need data workers in the Global North as well. The main reason for this is the increasing importance of Reinforcement Learning from Human Feedback (RLHF) to AI development.

RLHF is a more complex form of data annotation, because click workers need to compare two responses from an AI and be able to explain why one is better than the other. As AI tools are developed for specific industries, the need for specialist expertise as well as an understanding of culturally specific cues means that RLHF increasingly requires high-skill workers to enter the AI industry.
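To make the mechanics concrete, below is a minimal, hypothetical Python sketch of what one RLHF comparison record looks like and how a reward model is typically trained on it, using the standard Bradley-Terry pairwise loss. The prompt, responses, and scores are invented for illustration; none of this is drawn from Hao’s book or from OpenAI’s actual pipeline.

```python
import math

# One RLHF comparison task: an annotator sees a prompt and two model responses,
# picks the better one, and (in richer setups) explains why.
comparison = {
    "prompt": "Explain compound interest to a teenager.",
    "response_a": "Compound interest means you earn interest on the interest you already earned...",
    "response_b": "Interest is money. It compounds. The end.",
    "preferred": "a",
    "rationale": "Response A explains the mechanism; B is dismissive.",
}

def bradley_terry_loss(score_preferred: float, score_rejected: float) -> float:
    """Pairwise loss commonly used to train reward models on comparisons like the
    one above: it is small when the preferred response is scored higher."""
    return -math.log(1.0 / (1.0 + math.exp(-(score_preferred - score_rejected))))

# A reward model that rates the preferred answer 2.1 and the rejected one 0.3
# is penalized only lightly; reversing the scores produces a large loss.
print(round(bradley_terry_loss(2.1, 0.3), 2))  # ~0.15
print(round(bradley_terry_loss(0.3, 2.1), 2))  # ~1.95
```

The point of the example is that every one of these comparison records is a small piece of human judgment, which is why the cost and skill requirements of this labor rise as the tasks become more specialized.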

In keeping with the style of the book, Hao does not speculate on where RLHF might lead, but it is worth briefly considering its potential impact on the future of work. If generative AI tools can produce content which is as good as or better than material from a human, then it is not inconceivable that such tools could replace the worker in any content-producing industry.

However, that would not mean that the skills of those workers would disappear entirely: there would still be a need, for example, for paralegals, but their job would be to test and fix the paralegal AI. At that point, these professional service-sector jobs would be exposed to the Uberized model of work that click workers in the Global South have now experienced for years. It’s not for nothing that Altman has said “there will be some change required to the social contract.”

Of course, there remain significant question marks about generative AI’s true capacities in a wide range of content production. But wherever you sit on the scale between skeptic and true believer, there’s little doubt that AI will increasingly be relevant not only to the jobs of the most impoverished sections of the working class, but also to workers who are used to having some level of financial security due to their position higher up the labor-market ladder. The drawing of a much larger pool of workers into the precariat could have explosive social consequences.

AI’s effect on the environment is likely to be just as dramatic as its impact on labor, if not more so. Generative AI’s enormous data usage requires gigantic data centers packed with energy-hungry GPU chips to service it. These data centers need vast amounts of land to build on and huge quantities of water to cool them down. As generative AI products become more widely used, the ecological footprint of the industry expands relentlessly.

Hao highlights some stunning statistics and predictions. Every AI-generated image has the equivalent energy consumption of charging a smartphone by 25 percent. AI’s water usage could match half of all the water used in the UK by 2027. By 2030, AI could be using more energy than all of India, the world’s third-largest consumer of electricity.

The environmental consequences have already been significant. In Iowa, two years into a drought, Microsoft guzzled 11.5 million tonnes of the state’s potable water. Uruguay, a country that has experienced repeated droughts, saw mass protests after the courts forced its government to reveal how much drinking water Google’s data centers in the country were using. “This is not drought, it’s pillage,” reads graffiti in Montevideo.

What makes the arrival of data centers en masse especially hard to stomach for local populations is the fact that they provide hardly any upsides. Data centers generate very few jobs in the places they are located, while draining local areas of their land and water, thus actively damaging more labor-intensive industries.

In spite of this, following the logic of the scaling doctrine, we should expect data centers to grow ever bigger as “compute” expands to keep AI moving forward. While Altman has invested heavily in a nuclear fusion start-up as the golden ticket to abundant and free energy, just like AGI, it is a bet on a miracle cure tomorrow that distracts from the real problems AI scaling is causing today.

However, in a rare bit of good news for the world’s ecology, the scaling doctrine received a hammer blow from the Far East in January. DeepSeek, a Chinese generative AI chatbot, launched and quickly surpassed ChatGPT as the most downloaded app in the United States.

The remarkable thing about DeepSeek is not what it did, but how it did it. The chatbot cost just $6 million to train, roughly one-fiftieth of the cost of ChatGPT, while scoring higher on some benchmarks. DeepSeek was trained on older GPU chips that Nvidia had deliberately downgraded to comply with US chip export restrictions on China. Because of its efficiency, DeepSeek’s energy consumption is 90 percent lower than ChatGPT’s. The technical workings behind this engineering feat were made open source, so anyone could see how it was done.

DeepSeek was a technological marvel and a geopolitical earthquake rolled into one. Not only did it mark China’s arrival as a tech superpower, but it also demonstrated that the scaling doctrine, embraced by the whole of Silicon Valley as the go-to methodology for generative AI, was behind the curve at best. The shock that a Chinese company could embarrass Silicon Valley was so great that it triggered panic on Wall Street about whether the trillions already invested in American AI constituted a bet gone badly wrong.

In one day, the fall in the market capitalization of tech stocks was equivalent to the entire financial value of Mexico. Even Donald Trump weighed in to say that DeepSeek’s emergence was “a wake-up call” for US Big Tech. On X, Altman struck a positive tone in response, but OpenAI quickly started to brief the press that DeepSeek might have “distilled” OpenAI’s models in creating its chatbot, though little has been heard about this claim since. In any case, distillation can’t explain the enormous efficiency gains of DeepSeek compared to OpenAI.

It’s unfortunate that DeepSeek doesn’t appear in Empire of AI. Hao writes that she finished the book in January 2025, the month of DeepSeek’s launch. It would have been wise for the publisher to have given Hao a six-month extension to write a chapter on DeepSeek and the fallout in the US, especially considering how much of the book is a critique of the dogma that scaling is the only way to seriously develop AI.

However, she has commented elsewhere on DeepSeek’s dramatic arrival and the flaws it reveals about the US AI industry: “DeepSeek has demonstrated that scaling up AI models relentlessly, a paradigm OpenAI introduced and champions, is not the only, and far from the best, way to develop AI.”

DeepSeek also raised more profoundly ideological questions about AI development. If a Chinese company could develop cutting-edge tech on an open-source basis, giving everyone else the opportunity to test the underlying assumptions of their innovation and build on them, why were American companies busy constructing giant proprietary software cages around their tech — a form of enclosure that was bound to inhibit the speed of scientific progress? Some have started asking whether Chinese communism offers a better ecosystem for AI development than American capitalism.

In fact, the question of open-source versus proprietary approaches just scratches the surface of the debates that society should be having about artificial intelligence. Ultimately, both DeepSeek and ChatGPT operate based on capitalist business models, just with different principles of technical development. While the Android open-source software operating system differentiates Google from Apple, no one today invests any hopes in Google as a model for socially just tech development. The bigger question we should be asking is this: if we can’t trust oligopolistic capitalist enterprises with a technology as powerful as this, how should AI be governed?

Hao only really gets her teeth into this point in the book’s epilogue, “How the Empire Falls.” She takes inspiration from Te Hiku, a Māori AI speech recognition project. Te Hiku seeks to revitalize the te reo language through putting archived audio tapes of te reo speakers into an AI speech recognition model, teaching new generations of Māori who have few human teachers left.

The tech has been developed on the basis of consent and active participation from the Māori community, and it is only licensed to organizations that respect Māori values. Hao believes Te Hiku shows there is “another way” of doing AI:

Models can be small and task specific, their training data contained and knowable, ridding the incentives for widespread exploitative and psychologically harmful labor practices and the all-consuming extractivism of producing and running massive supercomputers. The creation of AI can be community driven, consensual, respectful of local context and history; its application can uplift and strengthen marginalised communities; its governance can be inclusive and democratic.

More broadly, Hao says we should be aiming for “redistributing power” in AI along three axes: knowledge, resources, and influence. There should be greater funding for organizations pursuing new directions of AI research, holding Big Tech to account, or developing community-based AI tools like Te Hiku. There should also be transparency over AI training data, its environmental impact, supply chains, and land leases.

Labor unions should be supported to develop power among data workers and workers whose jobs are under threat from automation. Finally, “broad-based education” is required to bust the myths surrounding AI, so that the public can come to a more grounded understanding of how AI tools are built, what their constraints are, and whose interests they serve.

Although these are important ideas, in and of themselves they wouldn’t threaten the power of companies like OpenAI. The state is notably absent from Hao’s vision for bringing down the AI tech giants. The questions of how AI should be regulated and what ownership structure it should have go unexplored in Empire of AI.

Perhaps in the age of Trump there is a feeling of skepticism among progressives that the state can be anything other than a tool for entrenching the power of corporate elites. It is certainly hard not to be cynical when confronted with projects like Stargate, an OpenAI-backed private-sector collaboration to invest $500 billion in AI infrastructure. Stargate is underpinned by a commitment from the Trump administration that it will bend and break regulations as necessary to ensure the project gets the energy supply it needs — a clear case of the state–corporate nexus working seamlessly, with little care for the consequences for society at large.

Yet the left can’t sidestep the question of state power and AI. While projects like Te Hiku are no doubt valuable, by definition they cannot be scaled-up alternatives to the collective power of American AI capital, which commands resources far greater than many of the world’s states. If it becomes normal for AI tools like ChatGPT to be governed by and for Silicon Valley, we risk seeing the primary means of content production concentrated in the hands of a tiny number of tech barons.

We therefore need to put big solutions on the table. Firstly, regulation: there must be a set of rules that place strict limits on where AI companies get their data from, how their models are trained, and how their algorithms are managed. In addition, all AI systems should be forced to operate within tightly regulated environmental limits: energy usage for generative AI cannot be a free-for-all on a planet under immense ecological stress. AI-powered automated weapons systems should be prohibited. All of this should be subject to stringent, independent audits to ensure compliance.

Secondly, although the concentration of market power in the AI industry took a blow from DeepSeek’s arrival, there remain strong tendencies within AI — and indeed in digital tech as a whole — towards monopolization. Breaking up the tech oligarchy would mean eliminating gatekeepers that concentrate power and control data flows.

Finally, the question of ownership should be a serious part of the debate. Te Hiku shows that when AI tools are built by organizations with entirely different incentive structures in place, they can produce wildly different results. As long as artificial intelligence is designed for the purposes of the competitive accumulation of capital, firms will continue to find ways to exploit labor, degrade the environment, take short cuts in data extraction, and compromise on safety, because if they don’t, one of their competitors will.

It is possible to imagine a world where a socialized AI serves as a genuine aid to humanity. It would be one where instead of displacing jobs, AI would be designed to help workers reduce the amount of time they spend on technical and bureaucratic tasks, focusing human energies on problem-solving instead, and reducing the length of the working week. Rather than gobbling up water supplies, AI would function as a resource planning tool to help identify waste and duplication within and across energy systems.

These possibilities are far removed from the fantasies of AGI, whereby Artificial Intelligence will supposedly become so powerful that it will resolve problems deeply embedded in the social relations of capitalism. Instead, this is a vision for AI that presupposes structural change.

On May 29, the US Department of Energy tweeted the following message: “AI is the next Manhattan Project, and THE UNITED STATES WILL WIN.” US government agencies are not the only ones to have compared AI favorably to the Manhattan Project.

From the days when OpenAI was just an idea in Altman’s head, he was proposing a “Manhattan Project for AI” to Musk. When he watched Oppenheimer, the Oscar-winning biopic of the man who led the Manhattan Project, Altman’s conclusion was that the mushroom cloud over Japan was a bad look for the atomic age — it’s important to get the PR right when it comes to emerging technologies. The obvious moral lesson of the story — the idea that scientists with good intentions can cause monstrous damage by naively assuming they are (and will always be) on the side of the good guys — never seemed to cross his mind.

The Manhattan Project is an imperfect analogy for the AI tech race. While geopolitical tension is undoubtedly growing between the United States and China, with technology at the heart of it, we are thankfully not yet in the midst of a world war.

The point at which the comparison holds best is this: in both cases, the scientists at the technological vanguard were and are the ones most loudly warning about the risks. As with the Manhattan Project, the interest of US politicians in seeing their scientists develop the technology faster than anyone else is drowning out the warnings about the risks for humanity as a whole.

“A nation which sets the precedent of using these newly liberated forces of nature for purposes of destruction,” Leo Szilard, who first developed the concept of the nuclear chain reaction, wrote, “may have to bear the responsibility of opening the door to an era of devastation on an unimaginable scale.” Today, it is the likes of Geoffrey Hinton, known as the “godfather of AI,” who are playing the role of Szilard. Hinton resigned from Google over the “existential risk” posed by the way the technology is being developed.

Moreover, we don’t need to speculate about future risks — we can see them in the here and now. The Israeli military has used an AI system called “Lavender” to identify “targets” in its genocide in Gaza. Lavender’s kill list was made up of thirty-seven thousand Palestinians, with no requirement to check why those individuals were on the list. Human input was limited to a rubber-stamp process, even though its overseers knew that Lavender makes errors in at least 10 percent of cases. As has long been clear, Palestine serves as a laboratory for the use of emerging military technologies, which are refined and then exported to the world.

The Left should actively oppose a Manhattan Project for AI. A frenzied geopolitical competition to develop highly militarized and propagandistic use cases for AI is not in the interests of the United States, China, or the rest of the world. Whether it is Gaza today or Hiroshima and Nagasaki in the 1940s, we should recognize what the United States “winning” the tech race in this sense looks like. Loosening the grip of the US corporate–state nexus over AI should be a key priority for all those interested in a world of peace and social justice.




Tools & Platforms

AI Shopping Is Here. Will Retailers Get Left Behind?



AI doesn’t care about your beautiful website.

Visit any fashion brand’s homepage and you’ll see all sorts of dynamic or interactive elements, from image carousels to dropdown menus, designed to catch shoppers’ eyes and ease navigation.

To the large language models that underlie ChatGPT and other generative AI, many of these features might as well not exist. They’re often written in the programming language JavaScript, which, for the moment at least, most AI struggles to read.

This giant blind spot didn’t matter when generative AI was mostly used to write emails and cheat on homework. But a growing number of startups and tech giants are deploying this technology to help users shop — or even make the purchase themselves.

“A lot of your site might actually be invisible to an LLM from the jump,” said A.J. Ghergich, global vice president of Botify, an AI optimisation company that helps brands from Christian Louboutin to Levi’s make sure their products are visible to and shoppable by AI.

The vast majority of visitors to brands’ websites are still human, but that’s changing fast. US retailers saw a 1,200 percent jump in visits from generative AI sources between July 2024 and February 2025, according to Adobe Analytics. Salesforce predicts AI platforms and AI agents will drive $260 billion in global online sales this holiday season.

Those agents, launched by AI players such as OpenAI and Perplexity, are capable of performing tasks on their own, including navigating to a retailer’s site, adding an item to cart and completing the checkout process on behalf of a shopper. Google’s recently introduced agent will automatically buy a product when it drops to a price the user sets.

This form of shopping is very much in its infancy; the AI shopping agents available still tend to be clumsy. Long term, however, many technologists envision a future where much of the activity online is driven by AI, whether that’s consumers discovering products or agents completing transactions.

To prepare, businesses from retail behemoth Walmart to luxury fashion labels are reconsidering everything from how they design their websites to how they handle payments and advertise online as they try to catch the eye of AI and not just humans.

“It’s in every single conversation I’m having right now,” said Caila Schwartz, director of consumer insights and strategy at Salesforce, which powers the e-commerce of a number of retailers, during a roundtable for press in June. “It is what everyone wants to talk about, and everyone’s trying to figure out and ask [about] and understand and build for.”

From SEO to GEO and AEO

As AI joins humans in shopping online, businesses are pivoting from SEO — search engine optimisation, or ensuring products show up at the top of a Google query — to generative engine optimisation (GEO) or answer engine optimisation (AEO), where catching the attention of an AI responding to a user’s request is the goal.

That’s easier said than done, particularly since it’s not always clear even to the AI companies themselves how their tools rank products, as Perplexity’s chief executive, Aravind Srinivas, admitted to Fortune last year. AI platforms ingest vast amounts of data from across the internet to produce their results.

Though there are indications of what attracts their notice. Products with rich, well-structured content attached tend to have an advantage, as do those that are the frequent subject of conversation and reviews online.

“Brands might want to invest more in developing robust customer-review programmes and using influencer marketing — even at the micro-influencer level — to generate more content and discussion that will then be picked up by the LLMs,” said Sky Canaves, a principal analyst at Emarketer focusing on fashion, beauty and luxury.

Ghergich pointed out that brands should be diligent with their product feeds into programmes such as Google’s Merchant Center, where retailers upload product data to ensure their items appear in Google’s search and shopping results. These types of feeds are full of structured data including product names and descriptions meant to be picked up by machines so they can direct shoppers to the right items. One example from Google reads: Stride & Conquer: Original Google Men’s Blue & Orange Power Shoes (Size 8).

Ghergich said AI will often read this data before other sources such as the HTML on a brand’s website. These feeds can also be vital for making sure the AI is pulling pricing data that’s up to date, or as close as possible.
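To make the idea of a machine-readable product record concrete, here is a brief, hypothetical sketch in Python. It emits the kind of structured data Ghergich is describing, expressed in the schema.org Product and Offer vocabulary that crawlers and AI systems commonly parse rather than in Merchant Center’s own feed format; the price and description are invented for illustration.

```python
import json

# A hypothetical structured product record, using schema.org's Product/Offer
# vocabulary. The shoe name echoes Google's own example quoted above.
product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Stride & Conquer: Original Google Men's Blue & Orange Power Shoes (Size 8)",
    "description": "Lightweight running shoe with a breathable mesh upper.",
    "brand": {"@type": "Brand", "name": "Google"},
    "offers": {
        "@type": "Offer",
        "price": "89.99",  # keeping this field fresh is what makes feeds useful to AI
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
    },
}

# Serialized as JSON-LD, this is the layer a language model or shopping agent can
# read even when the JavaScript-heavy storefront itself is invisible to it.
print(json.dumps(product, indent=2))
```

The design point is simple: plain, well-labeled fields travel further than visual flourish when the reader is a machine.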

As more consumers turn to AI and agents, however, it could change the very nature of online marketing, a scenario that would shake even Google’s advertising empire. Tactics that work on humans, like promoted posts with flashy visuals, could be ineffective for catching AI’s notice. It would force a redistribution of how retailers spend their ad budgets.

Emarketer forecasts that spending on traditional search ads in the US will see slower growth in the years ahead, while a larger share of ad budgets will go towards AI search. OpenAI, whose CEO, Sam Altman, has voiced his distaste for ads in the past, has also acknowledged exploring ads on its platform as it looks for new revenue streams.

(Chart: forecast decline in US spending on traditional search ads, 2025 to 2029.)

“The big challenge for brands with advertising is then how to show up in front of consumers when traditional ad formats are being circumvented by AI agents, when consumers are not looking at advertisements because agents are playing a bigger role,” said Canaves.

Bots Are Good Now

Retailers face another set of issues if consumers start turning to agents to handle purchases. On the one hand, agents could be great for reducing the friction that often causes consumers to abandon their carts. Rather than going through the checkout process themselves and stumbling over any annoyances, they just tell the agent to do it and off it goes.

But most websites aren’t designed for bots to make purchases — exactly the opposite, in fact. Bad actors have historically used bots to snatch up products from sneakers to concert tickets before other shoppers can buy them, frequently to flip them for a profit. For many retailers, they’re a nuisance.

“A lot of time and effort has been spent to keep machines out,” said Rubail Birwadker, senior vice president and global head of growth at Visa.

If a site has reason to believe a bot is behind a transaction — say it completes forms too fast — it could block it. The retailer doesn’t make the sale, and the customer is left with a frustrating experience.

Payment players are working to create methods that will allow verified agents to check out on behalf of a consumer without compromising security. In April, Visa launched a programme focused on enabling AI-driven shopping called Intelligent Commerce. It uses a mix of credential verification (similar to setting up Apple Pay) and biometrics to ensure shoppers are able to check out while preventing opportunities for fraud.

“We are going out and working with these providers to say, ‘Hey, we would like to … make it easy for you to know what’s a good, white-list bot versus a non-whitelist bot,’” Birwadker said.

Of course the bot has to make it to checkout. AI agents can stumble over other common elements in webpages, like login fields. It may be some time before all those issues are resolved and they can seamlessly complete any purchase.

Consumers have to get on board as well. So far, few appear to be rushing to use agents for their shopping, though that could change. In March, Salesforce published the results of a global survey that polled different age groups on their interest in various use cases for AI agents. Interest in using agents to buy products rose with each subsequent generation, with 63 percent of Gen-Z respondents saying they were interested.

Canaves of Emarketer pointed out that younger generations are already using AI regularly for school and work. Shopping with AI may not be their first impulse, but because the behaviour is already ingrained in their daily lives in other ways, it’s spilling over into how they find and buy products.

More consumers are starting their shopping journeys on AI platforms, too, and Schwartz of Salesforce noted that over time this could shape their expectations of the internet more broadly, the way Google and Amazon did.

“It just feels inevitable that we are going to see a much more consistent amount of commerce transactions originate and, ultimately, natively happen on these AI agentic platforms,” said Birwadker.




Tools & Platforms

CarMax’s top tech exec shares his keys to reinventing a legacy retailer in the age of AI



More than 30 years ago, CarMax aimed to transform the way people buy and sell used cars with a consistent, haggle-free experience that separated it from the typical car dealership.

Despite evolving into a market leader since then, its chief information and technology officer, Shamim Mohammad, knows no company is guaranteed that title forever; he had previously worked for Blockbuster, which, he said, couldn’t change fast enough to keep up with Netflix in streaming video.

Mohammad spoke with Modern Retail at the Virginia-based company’s technology office in Plano, Texas, which it opened three to four years ago to recruit tech workers such as software engineers and analysts in a region that is home to tech companies like AT&T and Texas Instruments. At that office, CarMax has since hired almost 150 employees — more than initially expected — including some of Mohammad’s former colleagues from Blockbuster, which he had worked for in Texas in the early 2000s.

He explained how other legacy retailers can learn from how CarMax leveraged new technology like artificial intelligence and a startup mindset as it embraced change, becoming an omnichannel retailer where customers can buy cars in person, entirely online or through a combination of both. Many customers find a car online and test-drive and complete their purchase at the store.

“Every company, every industry is going through a lot of disruption because of technology,” Mohammad said. “It’s much better to do self-disruption: changing our own business model, challenging ourselves and going through the pain of change before we are disrupted by somebody else.”

Digitizing the dealership

Mohammad has been with CarMax for more than 12 years and was previously vice president of information technology for BJ’s Wholesale Club. Since joining the auto retailer, he and his team have worked to use artificial intelligence to fully digitize the process of car buying, which is especially complex given the mountain of vehicle information and regulations dealers have to consider.

He said the company has been using AI and machine learning for at least 12 to 13 years to price cars, ensure the right information about each vehicle is online, and understand where cars need to be in the supply chain and when. That, he said, has powered the company’s website in becoming a virtual showroom that helps customers understand the vehicles, their functions and how they fit their needs. Artificial intelligence has also powered its online instant offer tool for selling cars, giving customers a fair price that doesn’t lose the company money, Mohammad said.

“Technology is enabling different types of experiences, and it’s setting new expectations, and new types of ways to shop and buy. Our industry is no different. We wanted to be that disruptor,” Mohammad said. “We want to make sure we change our business model and we bring those experiences so that we continue to remain the market leader in our industry.”

About three or four years ago, CarMax was an early adopter of ChatGPT, using it to organize data on the different features of car models and make it presentable through its digital channels. Around the same time, the company also used generative AI to comb through and summarize thousands of customer product reviews — it did what would have taken hundreds of content writers more than 10 years to do in a matter of days, he said — and keep them up to date.
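As a rough illustration of the kind of batch job that review summarization implies, here is a hedged Python sketch using the OpenAI SDK’s chat completions call. The model name, prompt, and sample reviews are placeholders, and this is not CarMax’s actual implementation.

```python
from openai import OpenAI  # assumes the openai package and an OPENAI_API_KEY environment variable

client = OpenAI()

def summarize_reviews(reviews: list[str], model: str = "gpt-4o-mini") -> str:
    """Condense a batch of customer reviews into recurring themes."""
    joined = "\n".join(f"- {r}" for r in reviews)
    response = client.chat.completions.create(
        model=model,
        messages=[
            {
                "role": "system",
                "content": "Summarize the recurring praise and complaints in these "
                           "vehicle reviews as three short bullet points.",
            },
            {"role": "user", "content": joined},
        ],
    )
    return response.choices[0].message.content

# Usage: run the job per model and trim, store the summary alongside the listing,
# and re-run whenever new reviews arrive so the summaries stay current.
sample = [
    "Smooth ride and great fuel economy, but the infotainment lags.",
    "Plenty of cargo space; wish the base trim had adaptive cruise.",
]
print(summarize_reviews(sample))
```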

As the technology has improved over the last few years, the company has adopted several new AI-powered features. One is Rhodes, a tool associates use to get support and information they need to help customers, which launched about a year ago, Mohammad said. It uses a large language model combining CarMax data with outside information like state or federal rules and regulations to help employees quickly access that data.

Anything that requires a lot of human workload and mental capacity can be automated, he said, from looking at invoices and documents to generating code for developers and engineers, saving them time to do more valuable work. Retailers like Target and Walmart have done the same by using AI chatbots as tools for employees.

“We used to spend a fortune on employee training, and employees only retained and reliably repeated a small percentage of what we trained,” said Jason Goldberg, chief commerce strategy officer for Publicis Groupe. “Increasingly, AI is letting us give way better tools to the salespeople, to train them and to support them when they’re talking to customers.”

In just the last few months, Mohammad said, CarMax has been rolling out an agentic version of a previous buying and selling assistant on its website called Skye that better understands the intent of the user — not only answering the question the customer asks directly, but also walking the customer through the entire car buying process.

“It’ll obviously answer [the customer’s question], but it will also try to understand what you’re trying to do and help you proactively through the entire process. It could be financing; it could be buying; it could be selling; it could be making an appointment; it could be just information about the car and safety,” he said.

The new Skye is more like talking to an actual human being, Mohammad said, where, in addition to answering the question, the agent can make other recommendations in a more natural conversation. For example, if someone is trying to buy a car and asks for a family car that’s safe, it will pull one from its inventory, but it may also ask if they’d like to talk to someone or even how their day is going.

“It’s guiding you through the process beyond what you initially asked. It’s building a rapport with you,” Mohammad said. “It knows you very well, it knows our business really well, and then it’s really helping you get to the right car and the right process.”

Goldberg said that while many functions of retail, from writing copy to scheduling shifts, have also been improved with AI, pushing things done by humans to AI chatbots could lead to distrust or create results that are inappropriate or offensive. “At the moment, most of the AI things are about efficiency and reducing friction,” Goldberg said. “They’re taking something you’re already doing and making it easier, which is generally appealing, but there is also the potential to dehumanize the experience.”

In testing CarMax’s new assistant, other AI agents are actually monitoring it to make sure it’s up to the company’s standards and not saying bad words, Mohammad said, adding it would be impossible for humans to look at everything the new assistant is doing.

The company doesn’t implement AI just to implement AI, Mohammad said, adding that his teams are using generative AI as a tool when needing to solve particular problems instead of being forced to use it.

“Companies don’t need an AI strategy. … They need a strategy that uses AI,” Mohammad said. “Use AI to solve customer problems.”

Working like a tech startup

In embracing change, CarMax has had to change the way it works, Mohammad said. It has created a more startup-like culture, going from cubicles to more open, collaborative office spaces where employees know what everyone else is working on.

About a decade ago, he said, the company started working with a project-based mindset, where it would deliver a new project every six to nine months — each taking about a year in total, with phases for designing and testing.

Now, the company has small, cross-functional product teams of seven to nine people, each with a mission around improving a particular area like finance, digital merchandising, SEO, logistics or supply chain — some even have fun names like “Ace” or “Top Gun.”

Teams have just two weeks to create a prototype of a feature and get it in front of customers. He said that those small changes, stacked up over time, have completely transformed the business.

“The teams are empowered, and they’re given a mission. I’m not telling them what to do. I’m giving them a goal. They figure out how,” Mohammad said. “Create a culture of experimentation, and don’t wait for things to be perfect. Create a culture where your teams are empowered. It’s OK for them to make mistakes; it’s OK for them to learn from their mistakes.”




Tools & Platforms

Available Infrastructure Unveils ‘SanQtum’ Secure AI Platform for Critical Infrastructure




Available Infrastructure (Available) publicly unveiled SanQtum, a first-of-its-kind solution that combines national security-grade cyber protection with the world’s most-trusted enterprise artificial intelligence (AI) capability.


In the modern era, AI-powered, machine-speed decision-making is crucial. Yet a fast-evolving and increasingly sophisticated threat landscape puts operational technology (OT) and cyber-physical systems (CPS), IP and other sensitive data, and proprietary trained AI models at risk. SanQtum is a direct response to that need.


Created through a rigorous development process in collaboration with major enterprise tech partners and government agencies, SanQtum pre-integrates a best-in-breed tech stack in a micro edge data center form factor, ready for deployment anywhere — from near-prem urban sites to telecom towers to austere environments. A first cohort of initial sites is already under construction in Northern Virginia and expected to come online later this year.


SanQtum’s cybersecurity protections include a zero-trust permissions architecture and quantum-resilient data encryption, and are aligned with DHS, CISA, and other US federal cybersecurity standards. Sovereign AI models with ultra-low-latency computing enable secure decision-making at machine speed when milliseconds matter, wrapped in cyber protections to prevent data theft and AI model poisoning.


The need for more sophisticated cybersecurity solutions is widespread and growing by the day. Globally, the cost of cybercrimes to corporations is forecasted to nearly triple, from $8 trillion in 2023 to $23 trillion by 2027. For government agencies and critical infrastructure, cybersecurity is literally a matter of life and death.


Daniel Gregory, CEO of Available, said:


We live in a digital world. AI is now seemingly everywhere. So are cyber threats, from nation-state attacks to criminal enterprises. In this environment, decision-making without AI — and AI without cybersecurity protections — are no longer negotiable; they’re mandatory. As we head into the July 4th weekend, which has historically seen a surge in cyber attacks each year, security is top-of-mind for many Americans, businesses, and government agencies.


