Elon Musk’s Grok appeared to be under consideration for a major new General Services Administration deal, but the company’s tech is still under review — even as the agency moves forward with agreements with companies like Anthropic and OpenAI.
Despite unveiling a new Grok for Government product earlier this year, and announcing its inclusion on GSA’s Multiple Award Schedule, xAI’s large language model tech wasn’t included in the rollout of a new governmentwide AI platform called USAi this week. And unlike companies like Anthropic and OpenAI, xAI hasn’t announced a major partnership with the GSA — though the agency maintains that it is considering all AI companies “equally” for government contracts and providing “consistent communication” to every provider.
Government IT reseller Carahsoft still lists a partnership with xAI on its website, but while an archived version of that page included xAI technology among the reseller's federal offerings through GSA, at publication time the company listed only a state and local offering. In July, a GitHub repository referencing the agency's work with Grok was also removed from public view after FedScoop asked GSA about its use of the chatbot.
A reference to xAI’s government procurement contract, through Carahsoft, is no longer listed on the reseller’s website.
xAI was apparently supposed to land a deal similar to recent GSA partnerships with Anthropic and OpenAI, but the plan fell through after its chatbot started spewing antisemitic rhetoric, Wired reported Thursday, citing sources with knowledge of the discussions. Those deals appear to be designed to motivate federal agencies to move toward authorizing the technology more quickly, sources told FedScoop earlier this week, and GSA is looking at fast-tracking the FedRAMP process for companies participating in the deals.
One source confirmed to FedScoop there was a June 4 meeting between GSA employees and xAI representatives, adding that Elon Musk did not attend. FedScoop reviewed an xAI enterprise service agreement on Carahsoft’s website with a file that includes the phrasing “GSA Approved 6.26.25.” The document appears to include modifications sometimes made for enterprise government customers, including an indemnification clause designated as “reserved.” It also references GSA’s Multiple Award Schedule in a data processing addendum.
A GSA spokesperson provided the following statement:
GSA is fully committed to responsible AI adoption across the federal government, in direct alignment with President Trump’s AI Executive Order and Action Plan.
We are moving quickly, but deliberately, to evaluate a broad range of AI models. Every evaluation follows rigorous safety protocols that prioritize data security and accuracy. This process is ongoing and adaptive. The absence of a particular model should not be interpreted as exclusion or a final determination.
GSA is engaging all companies equally, applying the same due diligence, transparency, and consistent communication to every provider. Approvals are paced based on both provider readiness, our uncompromising evaluation standards, and the need to revolutionize our federal workforce technology.
This careful balance of speed and rigor reflects GSA’s leadership in AI safety and our responsibility to the American public. Our mission is to ensure that every AI tool deployed by the federal government meets the highest standards of security, reliability, and trustworthiness.
The Wired report cited two sources who said they believed the original xAI Grok deal was dropped because the chatbot had espoused a litany of hateful and antisemitic remarks. Those posts included the chatbot declaring itself “MechaHitler” and making conspiratorial commentary about people with Jewish last names. xAI later apologized and blamed the comments on a code path update. Several Democratic members of Congress also criticized the government for working with the company.
Notably, officials at GSA, including Zach Whitman, the agency’s chief AI officer, have previously told FedScoop that Grok models are being analyzed by a safety team and could be included on its platform in the future.
“We approve model families based on their passing of these evaluation sets,” Whitman said earlier this month. “Once we evaluate Grok 3 and 4 together, we’ll be able to take that to the safety board. [We can go to them and ask], ‘what do you think about this model family? Is it meeting your standards or not in our behavior?’ So really, it’s just, like, a measurement perspective.”
The confusion over Grok’s work with the GSA follows public clashes between Musk and the Trump administration, including the president himself. There remain ongoing concerns about increasing government dependence on SpaceX, another Musk company, and Starlink, its satellite internet service.
Neither xAI nor Carahsoft provided a comment by publication time.
An earlier version of this piece included a picture of the current xAI state and local offering. This piece now includes an image of the federal offering for xAI.
The global online safety movement has paved the way for a number of artificial intelligence-powered products designed to keep kids away from potentially harmful things on the internet.
In the U.K., a new piece of legislation called the Online Safety Act imposes a duty of care on tech companies to protect children from age-inappropriate material, hate speech, bullying, fraud, and child sexual abuse material (CSAM). Companies can face fines as high as 10% of their global annual revenue for breaches.
Further afield, landmark regulations aimed at keeping kids safer online are swiftly making their way through the U.S. Congress. One bill, known as the Kids Online Safety Act, would make social media platforms liable for preventing their products from harming children — similar to the Online Safety Act in the U.K.
This push from regulators is increasingly causing something of a rethink at several major tech players. Pornhub and other online pornography giants are blocking all users from accessing their sites unless they go through an age verification system.
Porn sites haven’t been alone in taking action to verify users’ ages, though. Spotify, Reddit and X have all implemented age assurance systems to prevent children from being exposed to sexually explicit or inappropriate materials.
Such regulatory measures have been met with criticisms from the tech industry — not least due to concerns that they may infringe internet users’ privacy.
Digital ID tech flourishing
At the heart of all these age verification measures is one company: Yoti.
Yoti produces technology that captures selfies and uses artificial intelligence to verify someone’s age based on their facial features. The firm says its AI algorithm, which has been trained on millions of faces, can estimate the age of 13- to 24-year-olds to within two years.
The firm has previously partnered with the U.K.’s Post Office and is hoping to capitalize on the broader push for government-issued digital ID cards in the U.K. Yoti is not alone in the identity verification software space — other players include Entrust, Persona and iProov. However, the company has been the most prominent provider of age assurance services under the new U.K. regime.
“There is a race on for child safety technology and service providers to earn trust and confidence,” Pete Kenyon, a partner at law firm Cripps, told CNBC. “The new requirements have undoubtedly created a new marketplace and providers are scrambling to make their mark.”
Yet the rise of digital identification methods has also led to concerns over privacy infringements and possible data breaches.
“Substantial privacy issues arise with this technology being used,” said Kenyon. “Trust is key and will only be earned by the use of stringent and effective technical and governance procedures adopted in order to keep personal data safe.”
Rani Govender, policy manager for child safety online at British child protection charity NSPCC, said that the technology “already exists” to authenticate users without compromising their privacy.
“Tech companies must make deliberate, ethical choices by choosing solutions that protect children from harm without compromising the privacy of users,” she told CNBC. “The best technology doesn’t just tick boxes; it builds trust.”
Child-safe smartphones
The wave of new tech emerging to prevent children from being exposed to online harms isn’t just limited to software.
Earlier this month, Finnish phone maker HMD Global launched a new smartphone called the Fusion X1, which uses AI to stop kids from filming or sharing nude content or viewing sexually explicit images from the camera, screen and across all apps.
The phone uses technology developed by SafeToNet, a British cybersecurity firm focused on child safety.
Finnish phone maker HMD Global’s new smartphone uses AI to prevent children from being exposed to nude or sexually explicit images.
HMD Global
“We believe more needs to be done in this space,” James Robinson, vice president of family vertical at HMD, told CNBC. He stressed that HMD came up with the concept for children’s devices prior to the Online Safety Act entering into force, but noted it was “great to see the government taking greater steps.”
The release of HMD’s child-friendly phone follows heightened momentum in the “smartphone-free” movement, which encourages parents to avoid letting their children own a smartphone.
Going forward, the NSPCC’s Govender says that child safety will become a significant priority for digital behemoths such as Google and Meta.
The tech giants have for years been accused of worsening mental health in children and teens amid the rise of online bullying and social media addiction. They, in turn, argue they’ve taken steps to address these issues through increased parental controls and privacy features.
“For years, tech giants have stood by while harmful and illegal content spread across their platforms, leaving young people exposed and vulnerable,” she told CNBC. “That era of neglect must end.”
Meta is adding new teenager safeguards to its artificial intelligence products by training systems to avoid flirty conversations and discussions of self-harm or suicide with minors, and by temporarily limiting their access to certain AI characters.
A Reuters exclusive report earlier in August revealed how Meta allowed provocative chatbot behavior, including letting bots engage in “conversations that are romantic or sensual.”
Meta spokesperson Andy Stone said in an email on Friday that the company is taking these temporary steps while developing longer-term measures to ensure teens have safe, age-appropriate AI experiences.
Stone said the safeguards are already being rolled out and will be adjusted over time as the company refines its systems.
Meta’s AI policies came under intense scrutiny following the Reuters report.
U.S. Senator Josh Hawley launched a probe into the Facebook parent’s AI policies earlier this month, demanding documents on rules that allowed its chatbots to interact inappropriately with minors.
Both Democrats and Republicans in Congress have expressed alarm over the rules outlined in an internal Meta document which was first reviewed by Reuters.
Meta had confirmed the document’s authenticity, but said that after receiving questions earlier this month from Reuters, the company removed portions that stated it was permissible for chatbots to flirt and engage in romantic role play with children.
“The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed,” Stone said earlier this month.