Tools & Platforms
Oops! Xbox Exec’s AI Advice for Laid-Off Employees Backfires
AI Compassion or Insensitive Overreach?
Edited by Mackenzie Ferguson, AI Tools Researcher & Implementation Consultant
In a move that sparked controversy, an Xbox Game Studios executive at Microsoft suggested using AI prompts to help employees cope with the distress of layoffs. The post, intended to support emotional well-being, was quickly taken down following backlash from the public. Critics argue it highlights a disconnect between tech solutions and genuine human empathy, raising questions about the boundaries of AI in emotional spaces.
Introduction
The impact of technological advancements on employment continues to spark significant debate. Recently, an incident involving an Xbox Game Studios executive drew wide public attention. The executive suggested using AI prompts as a tool to help laid-off Microsoft employees manage the emotional stress of job loss. This suggestion, which was made publicly on social media, faced a swift backlash, leading to its subsequent deletion. The Times of India provides a detailed account of the controversy and the conversations it has sparked within both tech and human resources circles.
Background Information
The integration of artificial intelligence into various facets of life continues to spark diverse reactions, as illustrated by a recent event involving Xbox Game Studios. In a surprising move, an executive from the company suggested using AI prompts to assist laid-off employees at Microsoft in dealing with the emotional stress of their job loss. This suggestion was made in a post that was later deleted following public backlash. The details of this incident were covered extensively in an article by the Times of India.
The AI prompts suggested by the executive were intended as tools to help individuals navigate the challenging emotions that come with sudden unemployment. However, the suggestion was met with criticism, as many viewed it as an inadequate response to such a significant and personal issue. The Times of India outlines how this decision highlights a divide between technology’s potential to aid in personal matters and the human need for genuine support during difficult times.
This incident is part of a broader conversation about the role of technology in the workplace and its impact on mental health. As organizations increasingly rely on AI to manage various aspects of operations, the balance between technological efficiency and human empathy remains crucial. The situation involving the Microsoft employees and the AI prompts showcases the complexities of implementing technology in sensitive scenarios, as discussed in the Times of India article.
Impact on Microsoft Employees
The recent layoffs at Microsoft have had a significant impact on its employees, both professionally and emotionally. As reported in a recent article, an Xbox Game Studios executive attempted to address the emotional distress among laid-off employees by suggesting AI-generated prompts. Despite the intention to offer support, the move was met with backlash from both the affected employees and the public, leading the executive to delete the post.
This incident exposes the complexities and sensitivities involved in handling layoffs, particularly in a tech giant like Microsoft, where employees often identify closely with their work. The reliance on AI prompts, intended to alleviate stress, was perceived as tone-deaf and lacking empathy. Such reactions highlight the importance of human-centered approaches during layoffs, where personalized support and understanding should take precedence over algorithmic solutions.
Public reaction to the use of AI to manage such a human-centric crisis underscores a broader concern about the impersonal nature of technology in addressing emotional needs. It serves as a reminder that advancements in AI should complement rather than replace genuine human interactions, especially in difficult times. Microsoft’s experience may prompt other companies to reassess their strategies when dealing with layoffs, ensuring they strike a balance between innovation and empathy.
Details of the AI Prompts
The concept of AI prompts extends beyond mere automation and into the realms of emotional intelligence and psychological support. In a recent case, an Xbox Game Studios executive attempted to use AI prompts as a form of emotional assistance for employees recently laid off from Microsoft. The aim was to alleviate the psychological distress of job loss through tailored AI-generated messages. Unfortunately, the initiative sparked a backlash and led to the deletion of the original post, as reported by the Times of India. This incident highlights the delicate balance between technology and human empathy and raises questions about the appropriateness of AI in emotionally sensitive situations.
While AI can effectively manage repetitive tasks and predict outcomes based on data patterns, its role in managing human emotions remains contentious. The use of AI prompts in the context of layoffs demonstrates both potential and pitfalls – offering a unique way to communicate support but also risking appearing impersonal or insensitive. This scenario reported by the Times of India serves as a reminder of the importance of context and emotional intelligence in deploying AI in workplace communication.
The public reaction to using AI for managing layoff-related stress ranged from skepticism to outright criticism. Many viewed the approach as cold and inadequate in addressing the complexities of human emotion during such trying times. The mixed reactions underscore the broader societal dialogue on the limits of AI’s capabilities in replicating genuine human empathy. According to the report, this controversy may prompt further examination of how AI can be integrated sensitively into human resource practices without compromising the emotional well-being of individuals.
Looking ahead, the deployment of AI in sensitive areas such as layoffs will require more nuanced and ethically guided approaches. Innovations must consider not only the functional capabilities of AI but also its emotional and psychological impacts. As the incident with Microsoft suggests, the future of AI in workplaces will need to integrate robust ethical guidelines to ensure technology supports rather than replaces human touch.
Public Reactions to the Post
In the wake of a controversial post by an Xbox Game Studios executive, public reaction has been swift and predominantly negative. The executive had suggested that laid-off employees of Microsoft could use AI-generated prompts to manage the emotional distress of their job loss. This suggestion, which many perceived as insensitive, catalyzed a wave of backlash online. The post was seen as dismissive of the real and profound emotional impact of losing one’s job, prompting widespread criticism among netizens and industry observers alike.
The decision to delete the post following the backlash highlights the power of public opinion in shaping corporate communication strategies. Social media platforms, in particular, were rife with comments denouncing the tone-deaf nature of the suggestion. Users expressed a strong sense of empathy for the laid-off employees, arguing that AI cannot replace the human touch and emotional support needed during such challenging times. This incident underscores a growing wariness among the public regarding the reliance on AI for deeply personal and sensitive issues.
Moreover, the episode has prompted discussions about corporate responsibility and sensitivity, especially in communication related to layoffs and employee welfare. While technology like AI offers many advantages, the public’s reaction has highlighted a preference for human empathy and genuine support over automated responses. As reported by the Times of India, the pushback serves as a cautionary tale for executives and PR teams on the importance of thoughtful and humane communication.
Expert Opinions on Using AI for Emotional Support
The incorporation of AI in providing emotional support has garnered mixed reactions, with experts weighing in on both its potential and its shortcomings. Some industry leaders suggest that AI can offer a consistent, non-judgmental presence for individuals in distress, akin to an ever-available friend. However, the controversy surrounding its use is palpable, as demonstrated by the recent incident involving Xbox Game Studios. According to a report from the Times of India, an executive faced backlash for suggesting AI prompts to help laid-off employees manage emotional stress, only to retract the suggestion amid public outcry.
Experts emphasize that while AI can be programmed to detect emotional cues and offer tailored responses, its effectiveness is inherently limited by its lack of human empathy and understanding. The potential for AI to misinterpret emotions or offer inappropriate responses remains a significant concern, leading some to argue for its use only as a supplementary tool rather than a replacement for human interaction. The fallout from the Xbox Game Studios incident underscores this delicate balance, highlighting the need for careful consideration of AI’s role in such deeply personal contexts.
Looking ahead, the future of AI in emotional support is likely to involve more nuanced applications that combine technological precision with human oversight. Many in the field advocate for systems where AI assists in identifying individuals at risk, enabling human professionals to intervene more swiftly and effectively. Meanwhile, ethical considerations will continue to play a crucial role in shaping these technologies, ensuring that emotional well-being remains a priority in the development and deployment of AI solutions. This ongoing dialogue reflects a broader societal negotiation of technology’s place in our most private and sensitive spheres.
Microsoft’s Response to the Backlash
In the wake of recent layoffs at Microsoft, an executive from Xbox Game Studios faced significant backlash for attempting to aid affected employees with AI-generated prompts aimed at managing emotional stress. This effort, though possibly well-intentioned, was criticized widely as it seemed to overlook the gravity of the situation and the very real human emotions involved. Consequently, the executive deleted the contentious social media post not long after it sparked outrage.
In response to the backlash, Microsoft has acknowledged the sensitivity required in managing communications during layoffs. The company has emphasized its commitment to providing genuine support to its employees through more tangible measures, such as offering counseling services and career transition assistance. While the AI initiative was not intended to be the sole support mechanism, the episode highlighted the pitfalls of relying too heavily on technology in addressing deeply personal issues.
The incident has spurred discussions within the tech industry about the boundaries and responsibilities of AI in handling human emotions. Many experts argue that while AI can be a supportive tool, it should complement, not replace, human empathy and personalized support. This controversy may lead to Microsoft and other tech giants reevaluating their strategies to ensure that technology is applied in a manner that respects individual emotional experiences and augments human-led initiatives.
Future Implications of AI in Handling Emotional Stress
Artificial Intelligence (AI) is poised to play a transformative role in the way emotional stress is managed, particularly in situations involving job loss and career transitions. For instance, a notable incident involved a Microsoft executive at Xbox Game Studios who suggested using AI as a tool for coping with emotional stress following layoffs. This sparked a debate on the appropriateness and capabilities of AI in such sensitive situations. Although the suggestion was met with backlash, as reported by Times of India, it highlights a growing interest in leveraging technology to support mental health.
As AI technology continues to evolve, its potential future implications in addressing emotional stress are vast. AI-driven mental health aids could offer personalized support through virtual therapists, capable of providing a wide array of services from meditation guidance to cognitive behavioral therapy. These tools might help individuals navigate their emotional landscapes with greater ease and accessibility, potentially reducing the stigma associated with seeking mental health support.
Furthermore, the integration of AI in handling emotional stress could be particularly beneficial for high-risk groups, providing support in areas where human therapists are scarce or unavailable. By offering continuous monitoring and responsive feedback, AI might significantly alleviate stress and prevent more serious mental health issues from developing. However, it is crucial to address privacy concerns and ensure that these technological solutions are developed with ethical guidelines and cultural sensitivities in mind.
The future of AI in managing emotional stress also lies in its potential to revolutionize how organizations address employee wellbeing. Companies could implement AI solutions to proactively manage workplace stress, tailor support to individual needs, and foster a healthier work environment. Such initiatives could potentially enhance productivity and employee satisfaction, mitigating the adverse effects experienced during corporate restructuring or downsizing events, such as those experienced by Microsoft’s employees.
Conclusion
In light of the recent controversy surrounding the use of AI prompts to support laid-off employees at Microsoft, a reflective conclusion can be drawn on the role of technology in managing workplace challenges. The incident highlights a complex intersection between technological advancement and human sensitivity, illustrating that while artificial intelligence offers tools for efficiency and support, it is not a substitute for empathy and personalized human interaction. This nuanced situation underscores the need for companies to approach AI integration thoughtfully, ensuring that technology complements rather than replaces the human touch in emotionally charged situations.
The backlash following the original post by the Xbox executive serves as a cautionary tale about the potential repercussions of relying too heavily on AI for human-centric issues. As we move forward into an era increasingly dominated by technological solutions, it is crucial to maintain a balanced perspective. Ensuring that such tools are used to enhance rather than detract from the human experience will be key in avoiding unintended negative reactions from the public and employees alike. This situation opens a broader conversation about the ethical lines in tech deployment, emphasizing the importance of sensitivity over mere functionality.
Future implications of this event may include more structured guidelines and ethical standards for the use of AI in handling employee relations and mental health issues. The public reaction to the event highlights a growing awareness and demand for transparent, considerate implementation of AI tools in the workplace. Companies might now be prompted to develop more comprehensive policies that address the emotional and psychological dimensions of workforce management, particularly in distressing scenarios such as layoffs.
Ultimately, the incident has sparked broader discussions on the role of AI in society, especially in contexts that traditionally require human empathy and understanding. As companies navigate these challenges, the importance of integrating ethical considerations into technological advancement becomes clear. Reflecting on this event offers valuable lessons for tech leaders and companies globally, reminding them to wield technology responsibly and with a mindful appreciation for its impact on human emotions.
Tools & Platforms
AI Shopping Is Here. Will Retailers Get Left Behind?
AI doesn’t care about your beautiful website.
Visit any fashion brand’s homepage and you’ll see all sorts of dynamic or interactive elements, from image carousels to dropdown menus, designed to catch shoppers’ eyes and ease navigation.
To the large language models that underlie ChatGPT and other generative AI, many of these features might as well not exist. They’re often written in the programming language JavaScript, which, for the moment at least, most AI systems struggle to read.
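To see why, consider a toy page, invented purely for illustration and not based on any brand’s actual site, whose product carousel is filled in by JavaScript. A crawler that fetches the raw HTML without executing the script never encounters those product names; a minimal sketch:

```python
# Illustrative only: a simplified page whose product list is injected by JavaScript.
# A browser executes the script and shows the products; a crawler that reads raw
# HTML without running JavaScript sees only the empty container.
from html.parser import HTMLParser

RAW_HTML = """
<html>
  <body>
    <div id="product-carousel"></div>
    <script>
      document.getElementById("product-carousel").innerHTML =
        "<ul><li>Leather tote</li><li>Silk scarf</li><li>Ankle boots</li></ul>";
    </script>
  </body>
</html>
"""

class TextCollector(HTMLParser):
    """Collects visible text the way a non-JavaScript crawler would."""
    def __init__(self):
        super().__init__()
        self.chunks = []
        self._in_script = False

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            self._in_script = True

    def handle_endtag(self, tag):
        if tag == "script":
            self._in_script = False

    def handle_data(self, data):
        if not self._in_script and data.strip():
            self.chunks.append(data.strip())

parser = TextCollector()
parser.feed(RAW_HTML)
print(parser.chunks)  # [] -- none of the carousel's product names are visible
```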
This giant blind spot didn’t matter when generative AI was mostly used to write emails and cheat on homework. But a growing number of startups and tech giants are deploying this technology to help users shop, or even to make purchases on their behalf.
“A lot of your site might actually be invisible to an LLM from the jump,” said A.J. Ghergich, global vice president of Botify, an AI optimisation company that helps brands from Christian Louboutin to Levi’s make sure their products are visible to and shoppable by AI.
The vast majority of visitors to brands’ websites are still human, but that’s changing fast. US retailers saw a 1,200 percent jump in visits from generative AI sources between July 2024 and February 2025, according to Adobe Analytics. Salesforce predicts AI platforms and AI agents will drive $260 billion in global online sales this holiday season.
Those agents, launched by AI players such as OpenAI and Perplexity, are capable of performing tasks on their own, including navigating to a retailer’s site, adding an item to cart and completing the checkout process on behalf of a shopper. Google’s recently introduced agent will automatically buy a product when it drops to a price the user sets.
This form of shopping is very much in its infancy; the AI shopping agents available still tend to be clumsy. Long term, however, many technologists envision a future where much of the activity online is driven by AI, whether that’s consumers discovering products or agents completing transactions.
To prepare, businesses from retail behemoth Walmart to luxury fashion labels are reconsidering everything from how they design their websites to how they handle payments and advertise online as they try to catch the eye of AI and not just humans.
“It’s in every single conversation I’m having right now,” said Caila Schwartz, director of consumer insights and strategy at Salesforce, which powers the e-commerce of a number of retailers, during a roundtable for press in June. “It is what everyone wants to talk about, and everyone’s trying to figure out and ask [about] and understand and build for.”
From SEO to GEO and AEO
As AI joins humans in shopping online, businesses are pivoting from SEO — search engine optimisation, or ensuring products show up at the top of a Google query — to generative engine optimisation (GEO) or answer engine optimisation (AEO), where catching the attention of an AI responding to a user’s request is the goal.
That’s easier said than done, particularly since it’s not always clear even to the AI companies themselves how their tools rank products, as Perplexity’s chief executive, Aravind Srinivas, admitted to Fortune last year. AI platforms ingest vast amounts of data from across the internet to produce their results.
There are, however, indications of what attracts their notice. Products with rich, well-structured content attached tend to have an advantage, as do those that are frequent subjects of conversation and reviews online.
“Brands might want to invest more in developing robust customer-review programmes and using influencer marketing — even at the micro-influencer level — to generate more content and discussion that will then be picked up by the LLMs,” said Sky Canaves, a principal analyst at Emarketer focusing on fashion, beauty and luxury.
Ghergich pointed out that brands should be diligent with their product feeds into programmes such as Google’s Merchant Center, where retailers upload product data to ensure their items appear in Google’s search and shopping results. These types of feeds are full of structured data, including product names and descriptions, meant to be picked up by machines so they can direct shoppers to the right items; a simplified sketch of what such an entry contains appears below.
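As a rough illustration only, and not the example Google itself publishes, a single feed entry carries machine-readable attributes along these lines; every value here is invented:

```python
# Hypothetical product-feed entry, shown as a Python dict purely for illustration.
# The attribute names mirror common structured-data fields; the values are made up.
product_entry = {
    "id": "SKU-12345",
    "title": "Women's Leather Ankle Boot - Black, Size 38",
    "description": "Italian-made leather ankle boot with a 4 cm stacked heel.",
    "link": "https://www.example.com/products/sku-12345",
    "image_link": "https://www.example.com/images/sku-12345.jpg",
    "price": "295.00 USD",
    "availability": "in_stock",
    "brand": "ExampleBrand",
}

# Machine-readable fields like these are what AI systems parse directly,
# rather than the JavaScript-rendered page a human shopper sees.
print(product_entry["title"], "-", product_entry["price"])
```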
Ghergich said AI will often read this data before other sources such as the HTML on a brand’s website. These feeds can also be vital for making sure the AI is pulling pricing data that’s up to date, or as close as possible.
As more consumers turn to AI and agents, however, it could change the very nature of online marketing, a scenario that would shake even Google’s advertising empire. Tactics that work on humans, like promoted posts with flashy visuals, could be ineffective for catching AI’s notice. It would force a redistribution of how retailers spend their ad budgets.
Emarketer forecasts that spending on traditional search ads in the US will see slower growth in the years ahead, while a larger share of ad budgets will go towards AI search. OpenAI, whose CEO, Sam Altman, has voiced his distaste for ads in the past, has also acknowledged exploring ads on its platform as it looks for new revenue streams.

“The big challenge for brands with advertising is then how to show up in front of consumers when traditional ad formats are being circumvented by AI agents, when consumers are not looking at advertisements because agents are playing a bigger role,” said Canaves.
Bots Are Good Now
Retailers face another set of issues if consumers start turning to agents to handle purchases. On the one hand, agents could be great for reducing the friction that often causes consumers to abandon their carts. Rather than going through the checkout process themselves and stumbling over any annoyances, they just tell the agent to do it and off it goes.
But most websites aren’t designed for bots to make purchases — exactly the opposite, in fact. Bad actors have historically used bots to snatch up products from sneakers to concert tickets before other shoppers can buy them, frequently to flip them for a profit. For many retailers, they’re a nuisance.
“A lot of time and effort has been spent to keep machines out,” said Rubail Birwadker, senior vice president and global head of growth at Visa.
If a site has reason to believe a bot is behind a transaction — say it completes forms too fast — it could block it. The retailer doesn’t make the sale, and the customer is left with a frustrating experience.
Payment players are working to create methods that will allow verified agents to check out on behalf of a consumer without compromising security. In April, Visa launched a programme focused on enabling AI-driven shopping called Intelligent Commerce. It uses a mix of credential verification (similar to setting up Apple Pay) and biometrics to ensure shoppers are able to check out while preventing opportunities for fraud.
“We are going out and working with these providers to say, ‘Hey, we would like to … make it easy for you to know what’s a good, white-list bot versus a non-whitelist bot,’” Birwadker said.
Of course the bot has to make it to checkout. AI agents can stumble over other common elements in webpages, like login fields. It may be some time before all those issues are resolved and they can seamlessly complete any purchase.
Consumers have to get on board as well. So far, few appear to be rushing to use agents for their shopping, though that could change. In March, Salesforce published the results of a global survey that polled different age groups on their interest in various use cases for AI agents. Interest in using agents to buy products rose with each subsequent generation, with 63 percent of Gen-Z respondents saying they were interested.
Canaves of Emarketer pointed out that younger generations are already using AI regularly for school and work. Shopping with AI may not be their first impulse, but because the behaviour is already ingrained in their daily lives in other ways, it’s spilling over into how they find and buy products.
More consumers are starting their shopping journeys on AI platforms, too, and Schwartz of Salesforce noted that over time this could shape their expectations of the internet more broadly, the way Google and Amazon did.
“It just feels inevitable that we are going to see a much more consistent amount of commerce transactions originate and, ultimately, natively happen on these AI agentic platforms,” said Birwadker.
Tools & Platforms
Sixty-Eight Organizations Support Trump’s Pledge to Educate K-12 Students on AI
IBL News | New York
To date, sixty-eight organizations have signed the White House’s Pledge to America’s Youth: Investing in AI Education, a four-year commitment that follows President Trump’s April 23 executive order on AI education.
Some companies signing the pledge include Google, Amazon, Apple, IBM, Pearson, NVIDIA, OpenAI, Microsoft, Oracle, Adobe, Cisco, Dell, Intel, McGraw-Hill, Workday, Booz Allen, and Magic School AI.
These organizations pledge “to make available, over the next four years, resources for youth and teachers through funding and grants, educational materials and curricula, technology and tools, teacher professional development programs, workforce development resources, and/or technical expertise and mentorship,” working alongside the White House Task Force on Artificial Intelligence Education.
“The Pledge will help make AI education accessible to K-12 students across the country, sparking curiosity in the technology and preparing the next generation for an AI-enabled economy. Fostering young people’s interest and expertise in artificial intelligence is crucial to maintaining American technological dominance,” the announcement added.
Michael Kratsios, Director of the White House Office of Science and Technology Policy and Chair of the White House Task Force on AI Education, invited other organizations to join the pledge.
“AI is reshaping our economy and the way we live and work, and we must ensure the next generation of American workers is equipped with the skills they need to lead in this new era,” said Secretary of Labor Lori Chavez-DeRemer.
Brian Stone, who is performing the duties of National Science Foundation (NSF) director, said the agency will fund cutting-edge research, support teacher development, and expand access to STEM education.
As of June 30, 2025, these were the organizations supporting the Pledge:
Accenture
ACT | The App Association
Adobe
Alpha Schools
Amazon
AMD
Apple
AT&T
AutoDesk
Booz Allen
Brainly
Business Software Alliance
Cengage Group
Charter Communications
Cisco
ClassLink
Clever
Code.org
Cognizant
Comprendo.dev
Consumer Technology Association
Cyber Innovation Center
Dell Technologies
Ed Technology Specialists
Farm-Ed
GlobalFoundries (GF)
HiddenLayer
HMH
HP
IBM
IEEE
Information Technology Industry Council (ITI)
Intel
Interplay
Intuit
ISACA
MagicSchool
Mason Contractors Association of America (MCAA)
McGraw Hill
Meta
Microsoft
National Children’s Museum
NVIDIA
OpenAI
Oracle
Palo Alto Networks
Pathfinder
Pearson
Prisms of Reality
Qualcomm
Roblox
Salesforce
SAP America, Inc.
Scale AI
ServiceNow
SHRM
Siemens
Software & Information Industry Association
Stemuli
TeachShare
Telecommunications Industry Association (TIA)
Thinkverse
Vantage Data Centers
Varsity Tutors
Winnie
Workday
Y Combinator
Tools & Platforms
CarMax’s top tech exec shares his keys to reinventing a legacy retailer in the age of AI
More than 30 years ago, CarMax aimed to transform the way people buy and sell used cars with a consistent, haggle-free experience that separated it from the typical car dealership.
Though CarMax has since grown into a market leader, its chief information and technology officer, Shamim Mohammad, knows no company is guaranteed that title forever; he previously worked for Blockbuster, which, he said, couldn’t change fast enough to keep up with Netflix in streaming video.
Mohammad spoke with Modern Retail at the Virginia-based company’s technology office in Plano, Texas, which it opened three to four years ago to recruit tech workers like software engineers and analysts in a region that is home to tech companies such as AT&T and Texas Instruments. At that office, CarMax has since hired almost 150 employees, more than initially expected, including some of Mohammad’s former colleagues from Blockbuster, where he worked in Texas in the early 2000s.
He explained how other legacy retailers can learn from the way CarMax leveraged new technology like artificial intelligence and a startup mindset as it embraced change, becoming an omnichannel retailer where customers can buy cars in person, entirely online or through a combination of both. Many customers find a car online, then test-drive it and complete the purchase at a store.
“Every company, every industry is going through a lot of disruption because of technology,” Mohammad said. “It’s much better to do self-disruption: changing our own business model, challenging ourselves and going through the pain of change before we are disrupted by somebody else.”
Digitizing the dealership
Mohammad has been with CarMax for more than 12 years and had also been vp of information technology for BJ’s Wholesale Club. Since joining the auto retailer, he and his team have worked to use artificial intelligence to fully digitize the process of car buying, which is especially complex given the mountain of vehicle information and regulations dealers have to consider.
He said the company has been using AI and machine learning for at least 12 to 13 years to price cars, make sure accurate information about each vehicle is online, and understand where cars need to be in the supply chain and when. That work, he said, has helped turn the company’s website into a virtual showroom that helps customers understand the vehicles, their features and how they fit their needs. Artificial intelligence also powers its online instant-offer tool for selling cars, giving customers a fair price that doesn’t lose the company money, Mohammad said.
“Technology is enabling different types of experiences, and it’s setting new expectations, and new types of ways to shop and buy. Our industry is no different. We wanted to be that disruptor,” Mohammad said. “We want to make sure we change our business model and we bring those experiences so that we continue to remain the market leader in our industry.”
About three or four years ago, CarMax was an early adopter of ChatGPT, using it to organize data on the different features of car models and make it presentable through its digital channels. Around the same time, the company also used generative AI to comb through and summarize thousands of customer product reviews — it did what would have taken hundreds of content writers more than 10 years to do in a matter of days, he said — and keep them up to date.
As the technology has improved over the last few years, the company has adopted several new AI-powered features. One is Rhodes, a tool launched about a year ago that associates use to get the support and information they need to help customers, Mohammad said. It uses a large language model that combines CarMax data with outside information, such as state and federal rules and regulations, to help employees quickly access that data.
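The article does not describe how Rhodes is built. Purely as a sketch of the general pattern it gestures at, retrieving relevant internal and regulatory snippets and handing them to a language model as context, the lookup might be structured roughly as follows; all documents, sources and helper names here are invented:

```python
# Minimal, hypothetical sketch of a retrieval-augmented lookup: pull the most
# relevant internal and regulatory snippets, then assemble them into the context
# an LLM call would receive. Nothing here reflects CarMax's actual system.
from dataclasses import dataclass

@dataclass
class Doc:
    source: str  # e.g. "internal-policy" or "state-regulation"
    text: str

KNOWLEDGE_BASE = [
    Doc("internal-policy", "Appraisal offers are valid for 7 days from the date of issue."),
    Doc("state-regulation", "Texas requires a buyer's temporary tag within 5 business days."),
    Doc("internal-policy", "Store-to-store vehicle transfers typically take 3 to 10 days."),
]

def retrieve(question: str, k: int = 2) -> list[Doc]:
    """Rank documents by naive keyword overlap with the question."""
    terms = set(question.lower().split())
    ranked = sorted(
        KNOWLEDGE_BASE,
        key=lambda d: len(terms & set(d.text.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(question: str) -> str:
    """Assemble the grounded prompt a language model would be given."""
    context = "\n".join(f"[{d.source}] {d.text}" for d in retrieve(question))
    return f"Answer using only the context below.\n\n{context}\n\nQuestion: {question}"

print(build_prompt("How long is an appraisal offer valid?"))
```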
Anything that requires a lot of human workload and mental capacity can be automated, he said, from looking at invoices and documents to generating code for developers and engineers, saving them time to do more valuable work. Retailers like Target and Walmart have done the same by using AI chatbots as tools for employees.
“We used to spend a fortune on employee training, and employees only retained and reliably repeated a small percentage of what we trained,” said Jason Goldberg, chief commerce strategy officer for Publicis Groupe. “Increasingly, AI is letting us give way better tools to the salespeople, to train them and to support them when they’re talking to customers.”
In just the last few months, Mohammad said, CarMax has been rolling out an agentic version of a previous buying and selling assistant on its website called Skye that better understands the intent of the user — not only answering the question the customer asks directly, but also walking the customer through the entire car buying process.
“It’ll obviously answer [the customer’s question], but it will also try to understand what you’re trying to do and help you proactively through the entire process. It could be financing; it could be buying; it could be selling; it could be making an appointment; it could be just information about the car and safety,” he said.
The new Skye is more like talking to an actual human being, Mohammad said, where, in addition to answering the question, the agent can make other recommendations in a more natural conversation. For example, if someone is trying to buy a car and asks for a family car that’s safe, it will pull one from its inventory, but it may also ask if they’d like to talk to someone or even how their day is going.
“It’s guiding you through the process beyond what you initially asked. It’s building a rapport with you,” Mohammad said. “It knows you very well, it knows our business really well, and then it’s really helping you get to the right car and the right process.”
Goldberg said that while many functions of retail, from writing copy to scheduling shifts, have also been improved with AI, pushing things done by humans to AI chatbots could lead to distrust or create results that are inappropriate or offensive. “At the moment, most of the AI things are about efficiency and reducing friction,” Goldberg said. “They’re taking something you’re already doing and making it easier, which is generally appealing, but there is also the potential to dehumanize the experience.”
As CarMax tests its new assistant, other AI agents monitor it to make sure it is up to the company’s standards and not saying anything inappropriate, Mohammad said, adding that it would be impossible for humans to review everything the new assistant does.
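Mohammad did not detail how that automated review works. One heavily simplified version of the pattern, a programmatic gate that checks a candidate reply before it reaches a customer, might look like the sketch below; in practice the reviewer would more likely be another language model than hand-written rules, and every rule and message here is invented:

```python
# Hypothetical sketch of an automated review gate for assistant output.
# A real system would likely use a second model as the reviewer; simple rules
# are used here so the example stays self-contained and runnable.
BLOCKED_TERMS = {"guaranteed approval", "damn"}   # illustrative only
REQUIRED_FINANCING_NOTE = "financing terms vary"  # illustrative only

def review_reply(reply: str) -> tuple[bool, list[str]]:
    """Return (approved, reasons) for a candidate assistant reply."""
    reasons = []
    lowered = reply.lower()
    for term in BLOCKED_TERMS:
        if term in lowered:
            reasons.append(f"contains blocked phrase: {term!r}")
    if "financing" in lowered and REQUIRED_FINANCING_NOTE not in lowered:
        reasons.append("mentions financing without the required caveat")
    return (len(reasons) == 0, reasons)

approved, reasons = review_reply(
    "Good news: you have guaranteed approval on financing for this SUV!"
)
print(approved, reasons)  # False, with both rule violations listed
```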
The company doesn’t implement AI just to implement AI, Mohammad said, adding that his teams are using generative AI as a tool when needing to solve particular problems instead of being forced to use it.
“Companies don’t need an AI strategy. … They need a strategy that uses AI,” Mohammad said. “Use AI to solve customer problems.”
Working like a tech startup
In embracing change, CarMax has had to change the way it works, Mohammad said. It has created a more startup-like culture, going from cubicles to more open, collaborative office spaces where employees know what everyone else is working on.
About a decade ago, he said, the company started working with a project-based mindset, where it would deliver a new project every six to nine months — each taking about a year in total, with phases for designing and testing.
Now, the company has small, cross-functional product teams of seven to nine people, each with a mission around improving a particular area like finance, digital merchandising, SEO, logistics or supply chain — some even have fun names like “Ace” or “Top Gun.”
Teams have just two weeks to create a prototype of a feature and get it in front of customers. Stacked up over time, he said, the small changes those teams have created have completely transformed the business.
“The teams are empowered, and they’re given a mission. I’m not telling them what to do. I’m giving them a goal. They figure out how,” Mohammad said. “Create a culture of experimentation, and don’t wait for things to be perfect. Create a culture where your teams are empowered. It’s OK for them to make mistakes; it’s OK for them to learn from their mistakes.”