
Tools & Platforms

Opinion | AI Utopia, AI Apocalypse, and AI Reality



Recent articles and books about artificial intelligence offer images of the future that align like iron filings around two magnetic poles—utopia and apocalypse.

On one hand, AI is said to be leading us toward a perfect future of ease, health, and broadened understanding. We, aided by our machines and their large language models (LLMs), will know virtually everything and make all the right choices to usher in a permanent era of enlightenment and plenty. On the other hand, AI is poised to thrust us into a future of unemployment, environmental destruction, and delusion. Our machines will gobble scarce resources while churning out disinformation and making deadly weapons that AI agents will use to wipe us out once we’re of no further use to them.

Utopia and apocalypse have long exerted powerful pulls on human imagination and behavior. (My first book, published in 1989 and updated in 1995, was Memories and Visions of Paradise: Exploring the Universal Myth of a Lost Golden Age; it examined the history and meaning of the utopian archetype.) New technologies tend to energize these two polar attractors in our collective psyche because toolmaking and language are humanity’s two superpowers, which have enabled our species to take over the world, while also bringing us to a point of existential peril. New technologies increase some people’s power over nature and other people, producing benefits that, mentally extrapolated forward in time, encourage expectations of a grand future. But new technologies also come with costs (resource depletion, pollution, increased economic inequality, accidents, and misuse) that evoke fears of an ultimate reckoning. Language supercharges our toolmaking talent by enabling us to learn from others; it is also the vehicle for formulating and expressing our hopes and fears. AI, because it is both technological and linguistic, and because it is being adopted at a frantic pace and so disruptively, is especially prone to triggering the utopia-apocalypse reflex.


We humans have been ambivalent about technology at least since our adoption of writing. Tools enable us to steal fire from the gods, like the mythical Prometheus, whom the gods punished with eternal torment; they are the wings of Icarus, who flies too close to the sun and falls to his death. AI promises to make technology autonomously intelligent, thus calling to mind still another cautionary tale, “The Sorcerer’s Apprentice.”

What could go right—or wrong? After summarizing both the utopian and apocalyptic visions for AI, I’ll explore two questions: first, how do these extreme visions help or mislead us in our attempts to understand AI? And second, whom do these visions serve? As we’ll see, there are some early hints of AI’s ultimate limits, which suggest a future that doesn’t align well with many of the highest hopes or deepest fears for the new technology.

AI Utopia

As a writer, I generally don’t deliberately use AI. Nevertheless, in researching this article, I couldn’t resist asking Google’s free AI Overview, “What is the utopian vision for AI?” This came back a fraction of a second later:

The utopian vision for AI envisions a future where AI seamlessly integrates into human life, boosting productivity, innovation, and overall well-being. It’s a world where AI solves complex problems like climate change and disease, and helps humanity achieve new heights.

The first sentence of Google’s AI Overview needs editing to remove a verbal redundancy (vision, envisions), but the AI does succeed in cobbling together a serviceable summary of its promoters’ dreams.

The same message is on display in longer form in the article “Visions of AI Utopia” by Future Sight Echo, who informs us that AI will soften the impacts of economic inequality by delivering resources more efficiently and “in a way that is dynamic and able to adapt instantly to new information and circumstances.” Increased efficiency will also reduce humanity’s impact on the environment by minimizing energy requirements and waste of all kinds.

But that’s only the start. Education, creativity, health and longevity, translation and cultural understanding, companionship and care, governance and legal representation—all will be revolutionized by AI.

There is abundant evidence that people with money share these hopes for AI. The hottest stocks on Wall Street (notably Nvidia) are AI-related, as are many of the corporations that contribute significantly to the NPR station I listen to in Northern California, thereby gaining naming rights at the top of the hour.

Capital is being shoveled in the general direction of AI so rapidly (roughly $300 billion just this year, in the U.S. alone) that, if its advertised potential is even half believable, we should all rest assured that most human problems will soon vanish.

Or will they?

AI Apocalypse

Strangely, when I initially asked Google’s AI, “What is the vision for AI apocalypse?”, its response was, “An AI Overview is not available for this search.” Maybe I didn’t word my question well. Or perhaps AI sensed my hostility. Full disclosure: I’ve gone on record calling for AI to be banned immediately. (Later, AI Overview was more cooperative, offering a lengthy summary of “common themes in the vision of an AI apocalypse.”) My reason for proposing an AI ban is that AI gives us humans more power, via language and technology, than we already have; and that, collectively, we already have way too much power vis-à-vis the rest of nature. We’re overwhelming ecosystems through resource extraction and waste dumping to such a degree that, if current trends continue, wild nature may disappear by the end of the century. Further, the most powerful humans are increasingly overwhelming everyone else, both economically and militarily. Exerting our power more intelligently probably won’t help, because we’re already too smart for our own good. The last thing we should be doing is to cut language off from biology so that it can exist entirely in a simulated techno-universe.

Let’s be specific. What, exactly, could go wrong because of AI? For starters, AI could make some already bad things worse—in both nature and society.


There are many ways in which humanity is already destabilizing planetary environmental systems; climate change is the one most often discussed. Through its massive energy demand, AI could accelerate climate change by generating more carbon emissions. According to the International Energy Agency, “Driven by AI use, the U.S. economy is set to consume more electricity in 2030 for processing data than for manufacturing all energy-intensive goods combined, including aluminum, steel, cement, and chemicals.” The world also faces worsening water shortages; AI needs vast amounts of it. Nature is already reeling from humanity’s accelerating rates of resource extraction and depletion. AI requires millions of tons of copper, steel, cement, and other raw materials, and suppliers are targeting Indigenous lands for new mines.

We already have plenty of social problems, too, headlined by worsening economic inequality. AI could widen the divide between rich and poor by replacing lower-skilled workers with machines while greatly increasing the wealth of those who control the technology. Many people worry that corporations have gained too much political influence; AI could accelerate this trend by making the gathering and processing of massive amounts of data on literally everyone cheaper and easier, and by facilitating the consolidation of monopolies. Unemployment is always a problem in capitalist societies, but AI threatens to quickly throw millions of white-collar workers off payrolls: Anthropic’s CEO Dario Amodei predicts that AI could eliminate half of entry-level white-collar jobs within five years, while Bill Gates forecasts that only three job fields will survive AI—energy, biology, and AI system programming.

However, the most horrific visions for AI go beyond just making bad things worse. The title of a recent episode of The Bulwark Podcast, “Will Sam Altman and His AI Kill Us All?”, states the worst-case scenario bluntly. But how, exactly, could AI kill us all? One way is by automating military decisions while making weapons cheaper and more lethal (a recent Brookings commentary was titled, “How Unchecked AI Could Trigger a Nuclear War”). Veering toward dystopian sci-fi, some AI philosophers opine that the technology, once it’s significantly smarter than people, might come to view biological humans as pointless wasters of resources that machines could use more efficiently. At that point, AI could pursue multiple pathways to terminate humanity.

AI Reality

I don’t know the details of how AI will unfold in the months and years to come. But the same could be said for AI industry leaders. They certainly understand the technology better than I do, but their AI forecasts may miss a crucial factor. You see, I’ve trained myself over the years to look for limits in resources, energy, materials, and social systems. Most people who work in the fields of finance and technology tend to ignore limits, or even to believe that there are none. This leads them to absurdities, such as Elon Musk’s expectation of colonizing Mars. Earth is finite, humans will be confined to this planet forever, and therefore lots of things we can imagine doing just won’t happen. I would argue that discussions about AI’s promise and peril need a dose of limits awareness.

Arvind Narayanan and Sayash Kapoor, in an essay titled “AI Is Normal Technology,” offer some of that awareness. They argue that AI development will be constrained by the speed of human organizational and institutional change and by “hard limits to the speed of knowledge acquisition because of the social costs of experimentation.” However, the authors do not take the position that, because of these limits, AI will have only minor impacts on society; they see it as an amplifier of systemic risks.

In addition to the social limits Narayanan and Kapoor discuss, there will also (as mentioned above) be environmental limits to the energy, water, and materials that AI needs, a subject explored at a recent conference.


Finally, there’s a crucial limit to AI development that’s inherent in the technology itself. Large language models need vast amounts of high-quality data. However, as more information workers are replaced by AI, or start using AI to help generate content (both trends are accelerating), more of the data available to AI will be AI-generated rather than being produced by experienced researchers who are constantly checking it against the real world. Which means AI could become trapped in a cycle of declining information quality. Tech insiders call this “AI model collapse,” and there’s no realistic plan to stop it. AI itself can’t help.
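The dynamic is easy to caricature in miniature. The toy simulation below (my own illustration, not anyone’s production pipeline) repeatedly fits a simple statistical model to data, then trains each new “generation” only on the previous generation’s output; the variety in the data tends to wither with each pass:

```python
import random
import statistics

def fit_and_resample(samples, n):
    """Fit a normal distribution to the samples, then draw n new samples from the fit."""
    mu = statistics.fmean(samples)
    sigma = statistics.stdev(samples)
    return [random.gauss(mu, sigma) for _ in range(n)]

# Generation 0: "real-world" data with genuine variety.
data = [random.gauss(0.0, 1.0) for _ in range(1000)]

for generation in range(1, 11):
    # Each generation is trained only on the previous generation's output.
    data = fit_and_resample(data, 1000)
    print(f"generation {generation:2d}: spread = {statistics.stdev(data):.3f}")

# The printed spread tends to drift downward as sampling error compounds
# and the rare "tail" values of the original data stop being reproduced:
# a cartoon of the declining-information-quality loop described above.
```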

In his article “Some Signs of AI Model Collapse Begin to Reveal Themselves,” Steven J. Vaughan-Nichols argues that this is already happening. There have been widely reported instances of AI inadvertently generating fake scientific research documents. The Chicago Sun-Times recently published a “Best of Summer” feature that included forthcoming novels that don’t exist. And the Trump administration’s widely heralded “Make America Healthy Again” report included citations (evidently AI-generated) for non-existent studies. Most of us have come to expect that new technologies will have bugs that engineers will gradually remove or work around, resulting in improved performance. With AI, errors and hallucination problems may just get worse, in a cascading crescendo.

Just as there are limits to fossil-fueled utopia, nuclear utopia, and perpetual-growth capitalist utopia, there are limits to AI utopia. By the same token, limits may prevent AI from becoming an all-powerful grim reaper.

What will be the real future of AI? Here’s a broad-brush prediction (details are currently unavailable due to my failure to upgrade my crystal ball’s operating system). Over the next few years, corporations and governments will continue to invest rapidly in AI, driven by its ability to cut labor costs. We will become systemically dependent on the technology. AI will reshape society—employment, daily life, knowledge production, education, and wealth distribution. Then, speeding up as it goes, AI will degenerate into a hallucinating, blithering cacophony of little voices spewing nonsense. Real companies, institutions, and households will suffer as a result. At that point, we’ll either figure out how to live without AI, or confine it to relatively limited tasks and data sets. America got a small foretaste of this future recently, when Musk-led DOGE fired tens of thousands of federal workers with the expectation of replacing many of them with AI—without knowing whether AI could do their jobs (oops: thousands are being rehired).

A messy neither-this-nor-that future is not what you’d expect if you spend time reading documents like “AI 2027,” five industry insiders’ detailed speculative narrative of the imminent AI future, which allows readers to choose the story’s ending. Option A, “slowdown,” leads to a future in which AI is merely an obedient, super-competent helper; while in option B, “race,” humanity is extinguished by an AI-deployed bioweapon because people take up land that could be better used for more data centers. Again, we see the persistent, binary utopia-or-apocalypse stereotype, here presented with impressive (though misleading) specificity.

At the start of this article, I attributed AI utopia-apocalypse discourse to a deep-seated tic in our collective human unconscious. But there’s probably more going on here. In her recent book Empire of AI, tech journalist Karen Hao traces polarized AI visions back to the founding of OpenAI by Sam Altman and Elon Musk. Both were, by turns, dreamers and doomers. Their consistent message: We (i.e., Altman, Musk, and their peers) are the only ones who can be trusted to shepherd the process of AI development, including its regulation, because we’re the only ones who understand the technology. Hao makes the point that messages about both the promise and the peril of AI are often crafted by powerful people seeking to consolidate their control over the AI industry.

Utopia and apocalypse feature prominently in the rhetoric of all cults. It’s no surprise, but still a bit of a revelation, therefore, to hear Hao conclude in a podcast interview that AI is a cult (if it walks, quacks, and swims like a cult… ). And we are all being swept up in it.

So, how should we think about AI in a non-cultish way? In his article, “We Need to Stop Pretending AI Is Intelligent,” Guillaume Thierry, a professor of cognitive neuroscience, writes, “We must stop giving AI human traits.” Machines, even apparently smart ones, are not humans—full stop. Treating them as if they are human will bring dehumanizing results for real, flesh-and-blood people.

The collapse of civilization won’t be AI-generated. That’s because environmental-social decline was already happening without any help from LLMs. AI merely adds a novel factor to humanity’s larger reckoning with limits. In the short run, the technology will further concentrate wealth. “Like empires of old,” writes Karen Hao, “the new empires of AI are amassing extraordinary riches across space and time at great expense to everyone else.” In the longer run, AI will deplete scarce resources faster.

If AI is unlikely to be the bringer of destruction, it’s just as unlikely to deliver heaven on Earth. Just last week I heard from a writer friend who used AI to improve her book proposal. The next day, I went to my doctor for a checkup, and he used AI to survey my vital signs and symptoms; I may experience better health maintenance as a result. That same day, I read a just-published Apple research paper that concludes LLMs cannot reason reliably. Clearly, AI can offer tangible benefits within some fields of human pursuit. But we are fooling ourselves if we assume that AI can do our thinking for us. If we can’t build an equitable, sustainable society on our own, it’s pointless to hope that a machine that can’t think straight will do it for us.

I’m not currently in the job market and therefore can afford to sit on the sidelines and cast judgment on AI. For many others, economic survival depends on adopting the new technology. Finding a personal modus vivendi with new tools that may have dangerous and destructive side effects on society is somewhat analogous to charting a sane and survivable daily path in a nation succumbing to authoritarian rule. We all want to avoid complicity in awful outcomes, while no one wants to be targeted or denied opportunity. Rhetorically connecting AI with dictatorial power makes sense: One of the most likely uses of the new technology will be for mass surveillance.

Maybe the best advice for people concerned about AI would be analogous to advice that democracy advocates are giving to people worried about the destruction of the social-governmental scaffolding that has long supported Americans’ freedoms and rights: Identify your circles of concern, influence, and control; scrutinize your sources of information and tangibly support those with the most accuracy and courage, and the least bias; and forge communitarian bonds with real people.

AI seems to present a spectacular new slate of opportunities and threats. But, in essence, much of what was true before AI remains so now. Human greed and desire for greater control over nature and other people may lead toward paths of short-term gain. But, if you want a good life when all’s said and done, learn to live well within limits. Live with honesty, modesty, and generosity. AI can’t help you with that.





AI Shopping Is Here. Will Retailers Get Left Behind?



AI doesn’t care about your beautiful website.

Visit any fashion brand’s homepage and you’ll see all sorts of dynamic or interactive elements, from image carousels to dropdown menus, designed to catch shoppers’ eyes and ease navigation.

To the large language models that underlie ChatGPT and other generative AI, many of these features might as well not exist. They’re often written in the programming language JavaScript, which, for the moment at least, most AI struggles to read.

This giant blind spot didn’t matter when generative AI was mostly used to write emails and cheat on homework. But a growing number of startups and tech giants are deploying this technology to help users shop — or even make the purchase themselves.

“A lot of your site might actually be invisible to an LLM from the jump,” said A.J. Ghergich, global vice president of Botify, an AI optimisation company that helps brands from Christian Louboutin to Levi’s make sure their products are visible to and shoppable by AI.
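One rough way to check that claim for any given page is to fetch the raw HTML without executing JavaScript and see how much product text survives. Here is a minimal sketch (the URL is a placeholder, and real AI crawlers vary in what they render):

```python
import requests  # pip install requests beautifulsoup4
from bs4 import BeautifulSoup

def crawlable_text(url: str) -> str:
    """Fetch a page the way a simple crawler might: raw HTML, no JavaScript."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    # Drop scripts and styles; what remains approximates the text
    # available to a model that cannot execute JavaScript.
    for tag in soup(["script", "style", "noscript"]):
        tag.decompose()
    return " ".join(soup.get_text(separator=" ").split())

# Hypothetical storefront URL, for illustration only.
print(len(crawlable_text("https://example.com/products/blue-sneaker")),
      "characters of crawlable text")
# Product names or prices injected client-side by JavaScript
# simply will not appear in this output.
```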

The vast majority of visitors to brands’ websites are still human, but that’s changing fast. US retailers saw a 1,200 percent jump in visits from generative AI sources between July 2024 and February 2025, according to Adobe Analytics. Salesforce predicts AI platforms and AI agents will drive $260 billion in global online sales this holiday season.

Those agents, launched by AI players such as OpenAI and Perplexity, are capable of performing tasks on their own, including navigating to a retailer’s site, adding an item to cart and completing the checkout process on behalf of a shopper. Google’s recently introduced agent will automatically buy a product when it drops to a price the user sets.

This form of shopping is very much in its infancy; the AI shopping agents available still tend to be clumsy. Long term, however, many technologists envision a future where much of the activity online is driven by AI, whether that’s consumers discovering products or agents completing transactions.

To prepare, businesses from retail behemoth Walmart to luxury fashion labels are reconsidering everything from how they design their websites to how they handle payments and advertise online as they try to catch the eye of AI and not just humans.

“It’s in every single conversation I’m having right now,” said Caila Schwartz, director of consumer insights and strategy at Salesforce, which powers the e-commerce of a number of retailers, during a roundtable for press in June. “It is what everyone wants to talk about, and everyone’s trying to figure out and ask [about] and understand and build for.”

From SEO to GEO and AEO

As AI joins humans in shopping online, businesses are pivoting from SEO — search engine optimisation, or ensuring products show up at the top of a Google query — to generative engine optimisation (GEO) or answer engine optimisation (AEO), where catching the attention of an AI responding to a user’s request is the goal.

That’s easier said than done, particularly since it’s not always clear even to the AI companies themselves how their tools rank products, as Perplexity’s chief executive, Aravind Srinivas, admitted to Fortune last year. AI platforms ingest vast amounts of data from across the internet to produce their results.

There are indications, though, of what attracts their notice. Products with rich, well-structured content attached tend to have an advantage, as do those that are the frequent subject of conversation and reviews online.

“Brands might want to invest more in developing robust customer-review programmes and using influencer marketing — even at the micro-influencer level — to generate more content and discussion that will then be picked up by the LLMs,” said Sky Canaves, a principal analyst at Emarketer focusing on fashion, beauty and luxury.

Ghergich pointed out that brands should be diligent with their product feeds into programmes such as Google’s Merchant Center, where retailers upload product data to ensure their items appear in Google’s search and shopping results. These types of feeds are full of structured data including product names and descriptions meant to be picked up by machines so they can direct shoppers to the right items. One example from Google reads: Stride & Conquer: Original Google Men’s Blue & Orange Power Shoes (Size 8).

Ghergich said AI will often read this data before other sources such as the HTML on a brand’s website. These feeds can also be vital for making sure the AI is pulling pricing data that’s up to date, or as close as possible.
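The underlying format is mundane. Below is a sketch of the kind of schema.org-style product record that such feeds and on-page markup expose to machines; every field value here is invented for illustration:

```python
import json

# A schema.org-style Product record, the kind of structured data that
# feeds and on-page JSON-LD expose to machines. All values are invented.
product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Stride & Conquer: Original Google Men's Blue & Orange Power Shoes",
    "description": "Lightweight running shoe with breathable mesh upper.",
    "sku": "SC-POWER-8-BLU",
    "offers": {
        "@type": "Offer",
        "price": "89.99",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
    },
}

# Embedded in a page as <script type="application/ld+json">...</script>,
# or uploaded in bulk as a feed, this record is trivially parseable:
print(json.dumps(product, indent=2))
```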

As more consumers turn to AI and agents, however, it could change the very nature of online marketing, a scenario that would shake even Google’s advertising empire. Tactics that work on humans, like promoted posts with flashy visuals, could be ineffective for catching AI’s notice. It would force a redistribution of how retailers spend their ad budgets.

Emarketer forecasts that spending on traditional search ads in the US will see slower growth in the years ahead, while a larger share of ad budgets will go towards AI search. OpenAI, whose CEO, Sam Altman, has voiced his distaste for ads in the past, has also acknowledged exploring ads on its platform as it looks for new revenue streams.

[Chart: forecasted decline in spending on traditional search ads in the US, 2025 to 2029.]

“The big challenge for brands with advertising is then how to show up in front of consumers when traditional ad formats are being circumvented by AI agents, when consumers are not looking at advertisements because agents are playing a bigger role,” said Canaves.

Bots Are Good Now

Retailers face another set of issues if consumers start turning to agents to handle purchases. On the one hand, agents could be great for reducing the friction that often causes consumers to abandon their carts. Rather than going through the checkout process themselves and stumbling over any annoyances, they just tell the agent to do it and off it goes.

But most websites aren’t designed for bots to make purchases — exactly the opposite, in fact. Bad actors have historically used bots to snatch up products from sneakers to concert tickets before other shoppers can buy them, frequently to flip them for a profit. For many retailers, they’re a nuisance.

“A lot of time and effort has been spent to keep machines out,” said Rubail Birwadker, senior vice president and global head of growth at Visa.

If a site has reason to believe a bot is behind a transaction — say it completes forms too fast — it could block it. The retailer doesn’t make the sale, and the customer is left with a frustrating experience.

Payment players are working to create methods that will allow verified agents to check out on behalf of a consumer without compromising security. In April, Visa launched a programme focused on enabling AI-driven shopping called Intelligent Commerce. It uses a mix of credential verification (similar to setting up Apple Pay) and biometrics to ensure shoppers are able to check out while preventing opportunities for fraud.

“We are going out and working with these providers to say, ‘Hey, we would like to … make it easy for you to know what’s a good, white-list bot versus a non-whitelist bot,’” Birwadker said.

Of course the bot has to make it to checkout. AI agents can stumble over other common elements in webpages, like login fields. It may be some time before all those issues are resolved and they can seamlessly complete any purchase.

Consumers have to get on board as well. So far, few appear to be rushing to use agents for their shopping, though that could change. In March, Salesforce published the results of a global survey that polled different age groups on their interest in various use cases for AI agents. Interest in using agents to buy products rose with each subsequent generation, with 63 percent of Gen-Z respondents saying they were interested.

Canaves of Emarketer pointed out that younger generations are already using AI regularly for school and work. Shopping with AI may not be their first impulse, but because the behaviour is already ingrained in their daily lives in other ways, it’s spilling over into how they find and buy products.

More consumers are starting their shopping journeys on AI platforms, too, and Schwartz of Salesforce noted that over time this could shape their expectations of the internet more broadly, the way Google and Amazon did.

“It just feels inevitable that we are going to see a much more consistent amount of commerce transactions originate and, ultimately, natively happen on these AI agentic platforms,” said Birwadker.






CarMax’s top tech exec shares his keys to reinventing a legacy retailer in the age of AI



More than 30 years ago, CarMax aimed to transform the way people buy and sell used cars with a consistent, haggle-free experience that separated it from the typical car dealership.

Though CarMax has since evolved into a market leader, its chief information and technology officer, Shamim Mohammad, knows no company is guaranteed that title forever; he previously worked for Blockbuster, which, he said, couldn’t change fast enough to keep up with Netflix in streaming video.

Mohammad spoke with Modern Retail at the Virginia-based company’s technology office in Plano, Texas, which it opened three to four years ago to recruit tech workers like software engineers and analysts in a region that is home to tech companies such as AT&T and Texas Instruments. At that office, CarMax has since hired almost 150 employees — more than initially expected — including some of Mohammad’s former colleagues from Blockbuster, where he worked in Texas in the early 2000s.

He explained what other legacy retailers can learn from how CarMax leveraged new technology like artificial intelligence and a startup mindset as it embraced change, becoming an omnichannel retailer where customers can buy cars in person, entirely online or through a combination of both. Many customers find a car online, then test-drive and complete their purchase at the store.

“Every company, every industry is going through a lot of disruption because of technology,” Mohammad said. “It’s much better to do self-disruption: changing our own business model, challenging ourselves and going through the pain of change before we are disrupted by somebody else.”

Digitizing the dealership

Mohammad has been with CarMax for more than 12 years and had also been vp of information technology for BJ’s Wholesale Club. Since joining the auto retailer, he and his team have worked to use artificial intelligence to fully digitize the process of car buying, which is especially complex given the mountain of vehicle information and regulations dealers have to consider.

He said the company has been using AI and machine learning for at least 12-13 years to price cars, make sure the right information about each car is online, and understand where cars need to be in the supply chain and when. That, he said, has helped the company’s website become a virtual showroom that helps customers understand the vehicles, their functions and how they fit their needs. Artificial intelligence has also powered its online instant offer tool for selling cars, giving customers a fair price that doesn’t lose the company money, Mohammad said.

“Technology is enabling different types of experiences, and it’s setting new expectations, and new types of ways to shop and buy. Our industry is no different. We wanted to be that disruptor,” Mohammad said. “We want to make sure we change our business model and we bring those experiences so that we continue to remain the market leader in our industry.”

About three or four years ago, CarMax was an early adopter of ChatGPT, using it to organize data on the different features of car models and make it presentable through its digital channels. Around the same time, the company also used generative AI to comb through and summarize thousands of customer product reviews — it did what would have taken hundreds of content writers more than 10 years to do in a matter of days, he said — and keep them up to date.

As the technology has improved over the last few years, the company has adopted several new AI-powered features. One is Rhodes, a tool associates use to get support and information they need to help customers, which launched about a year ago, Mohammad said. It uses a large language model combining CarMax data with outside information like state or federal rules and regulations to help employees quickly access that data.
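The article doesn’t describe Rhodes’ internals, but the setup it sketches — an LLM answering from CarMax data plus external rules and regulations — matches the widely used retrieval-augmented generation pattern: find the most relevant documents, then hand them to the model as context. Here is a generic sketch of that pattern; the toy keyword retriever and the ask_llm function are hypothetical stand-ins, not CarMax’s actual system:

```python
def retrieve(query: str, corpus: list[dict], k: int = 3) -> list[dict]:
    """Toy retriever: rank documents by keyword overlap with the query.
    Production systems typically use vector embeddings instead."""
    words = set(query.lower().split())
    return sorted(
        corpus,
        key=lambda doc: len(words & set(doc["text"].lower().split())),
        reverse=True,
    )[:k]

def answer(query: str, corpus: list[dict], ask_llm) -> str:
    """Retrieval-augmented generation: ground the model in retrieved context."""
    context = "\n\n".join(doc["text"] for doc in retrieve(query, corpus))
    prompt = (
        "Answer the associate's question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )
    return ask_llm(prompt)  # any LLM completion function supplied by the caller
```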

Anything that requires a lot of human workload and mental capacity can be automated, he said, from looking at invoices and documents to generating code for developers and engineers, saving them time to do more valuable work. Retailers like Target and Walmart have done the same by using AI chatbots as tools for employees.

“We used to spend a fortune on employee training, and employees only retained and reliably repeated a small percentage of what we trained,” said Jason Goldberg, chief commerce strategy officer for Publicis Groupe. “Increasingly, AI is letting us give way better tools to the salespeople, to train them and to support them when they’re talking to customers.”

In just the last few months, Mohammad said, CarMax has been rolling out an agentic version of a previous buying and selling assistant on its website called Skye that better understands the intent of the user — not only answering the question the customer asks directly, but also walking the customer through the entire car buying process.

“It’ll obviously answer [the customer’s question], but it will also try to understand what you’re trying to do and help you proactively through the entire process. It could be financing; it could be buying; it could be selling; it could be making an appointment; it could be just information about the car and safety,” he said.

The new Skye is more like talking to an actual human being, Mohammad said, where, in addition to answering the question, the agent can make other recommendations in a more natural conversation. For example, if someone is trying to buy a car and asks for a family car that’s safe, it will pull one from its inventory, but it may also ask if they’d like to talk to someone or even how their day is going.

“It’s guiding you through the process beyond what you initially asked. It’s building a rapport with you,” Mohammad said. “It knows you very well, it knows our business really well, and then it’s really helping you get to the right car and the right process.”

Goldberg said that while many functions of retail, from writing copy to scheduling shifts, have also been improved with AI, pushing things done by humans to AI chatbots could lead to distrust or create results that are inappropriate or offensive. “At the moment, most of the AI things are about efficiency and reducing friction,” Goldberg said. “They’re taking something you’re already doing and making it easier, which is generally appealing, but there is also the potential to dehumanize the experience.”

In testing CarMax’s new assistant, the company has other AI agents monitor it to make sure it’s up to the company’s standards and not saying bad words, Mohammad said, adding it would be impossible for humans to review everything the new assistant is doing.
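Using one model to audit another is itself a common pattern, sometimes called LLM-as-judge. The sketch below shows the general shape of such a check; it is illustrative only, with ask_llm again standing in for any completion function:

```python
def audit_reply(reply: str, ask_llm) -> bool:
    """Have a second model grade an assistant reply against house standards.
    The rubric here is invented for illustration."""
    verdict = ask_llm(
        "You are a quality monitor for a retail shopping assistant.\n"
        "Does the reply below violate any rule: profanity, unsafe advice, "
        "off-brand tone, or invented facts? Answer PASS or FAIL.\n\n"
        f"Reply: {reply}"
    )
    return verdict.strip().upper().startswith("PASS")

# Replies that fail the audit can be blocked, regenerated, or escalated
# to a human reviewer instead of being shown to the customer.
```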

The company doesn’t implement AI just to implement AI, Mohammad said, adding that his teams are using generative AI as a tool when needing to solve particular problems instead of being forced to use it.

“Companies don’t need an AI strategy. … They need a strategy that uses AI,” Mohammad said. “Use AI to solve customer problems.”

Working like a tech startup

In embracing change, CarMax has had to change the way it works, Mohammad said. It has created a more startup-like culture, going from cubicles to more open, collaborative office spaces where employees know what everyone else is working on.

About a decade ago, he said, the company started working with a project-based mindset, where it would deliver a new project every six to nine months — each taking about a year in total, with phases for designing and testing.

Now, the company has small, cross-functional product teams of seven to nine people, each with a mission around improving a particular area like finance, digital merchandising, SEO, logistics or supply chain — some even have fun names like “Ace” or “Top Gun.”

Teams have just two weeks to create a prototype of a feature and get it in front of customers. Those small changes, he said, stacked up over time, completely transformed the business.

“The teams are empowered, and they’re given a mission. I’m not telling them what to do. I’m giving them a goal. They figure out how,” Mohammad said. “Create a culture of experimentation, and don’t wait for things to be perfect. Create a culture where your teams are empowered. It’s OK for them to make mistakes; it’s OK for them to learn from their mistakes.”






Available Infrastructure Unveils ‘SanQtum’ Secure AI Platform for Critical Infrastructure




Available Infrastructure (Available) publicly unveiled SanQtum, a first-of-a-kind solution that combines national security-grade cyber protection and the world’s most-trusted enterprise artificial intelligence (AI) capability.


In the modern era, AI-powered, machine-speed decision-making is crucial. Yet a fast-evolving and increasingly sophisticated threat landscape puts operational technology (OT) and cyber-physical systems (CPS), IP and other sensitive data, and proprietary trained AI models at risk. SanQtum is a direct response to that risk.


Created through a rigorous development process in collaboration with major enterprise tech partners and government agencies, SanQtum pre-integrates a best-in-breed tech stack in a micro edge data center form factor, ready for deployment anywhere — from near-prem urban sites to telecom towers to austere environments. A first cohort of sites is already under construction in Northern Virginia and is expected to come online later this year.


SanQtum’s cybersecurity protections include a zero-trust permissions architecture and quantum-resilient data encryption, aligned to DHS, CISA, and other US federal cybersecurity standards. Sovereign AI models with ultra-low-latency computing enable secure decision-making at machine speed when milliseconds matter, wrapped in cyber protections to prevent data theft and AI model poisoning.


The need for more sophisticated cybersecurity solutions is widespread and growing by the day. Globally, the cost of cybercrimes to corporations is forecasted to nearly triple, from $8 trillion in 2023 to $23 trillion by 2027. For government agencies and critical infrastructure, cybersecurity is literally a matter of life and death.


Daniel Gregory, CEO of Available, said:


As we head into the July 4th weekend, which has historically seen a surge in cyber attacks each year, security is top-of-mind for many Americans, businesses, and government agencies. We live in a digital world. And AI is now seemingly everywhere. So are cyber threats, from nation-state attacks to criminal enterprises. In this environment, decision-making without AI — and AI without cybersecurity protections — are no longer negotiable; they’re mandatory.




