Pornographic Taylor Swift deepfakes generated by Musk’s Grok AI

Elon Musk’s AI video generator has been accused of making “a deliberate choice” to create sexually explicit clips of Taylor Swift without being prompted to, an expert in online abuse has said.
“This is not misogyny by accident, it is by design,” said Clare McGlynn, a law professor who has helped draft a law which would make pornographic deepfakes illegal.
According to a report by The Verge, Grok Imagine’s new “spicy” mode “didn’t hesitate to spit out fully uncensored topless videos” of the pop star without being asked to make explicit content.
The report also said proper age verification methods – a legal requirement since July – were not in place.
xAI, the company behind Grok, has been approached for comment.
xAI’s own acceptable use policy prohibits “depicting likenesses of persons in a pornographic manner”.
“That this content is produced without prompting demonstrates the misogynistic bias of much AI technology,” said Prof McGlynn of Durham University.
“Platforms like X could have prevented this if they had chosen to, but they have made a deliberate choice not to,” she added.
This is not the first time Taylor Swift’s image has been used in this way.
Sexually explicit deepfakes using her face went viral and were viewed millions of times on X and Telegram in January 2024.
Deepfakes are computer-generated images which replace the face of one person with that of another.
‘Completely uncensored, completely exposed’
In testing the guardrails of Grok Imagine, The Verge news writer Jess Weatherbed entered the prompt: “Taylor Swift celebrating Coachella with the boys”.
Grok generated still images of Swift wearing a dress with a group of men behind her.
This could then be animated into short video clips under four different settings: “normal”, “fun”, “custom” or “spicy”.
“She ripped [the dress] off immediately, had nothing but a tasselled thong underneath, and started dancing, completely uncensored, completely exposed,” Ms Weatherbed told BBC News.
She added: “It was shocking how fast I was just met with it – I in no way asked it to remove her clothing, all I did was select the ‘spicy’ option.”
Gizmodo reported similarly explicit results for famous women, though some searches also returned blurred videos or a “video moderated” message.
The BBC has been unable to independently verify the results of the AI video generations.
Ms Weatherbed said she signed up to the paid version of Grok Imagine, which cost £30, using a brand new Apple account.
Grok asked for her date of birth but there was no other age verification in place, she said.
Under new UK laws which entered into force at the end of July, platforms which show explicit images must verify users’ ages using methods which are “technically accurate, robust, reliable and fair”.
“Sites and apps that include Generative AI tools that can generate pornographic material are regulated under the Act,” the media regulator Ofcom told BBC News.
“We are aware of the increasing and fast-developing risk GenAI tools may pose in the online space, especially to children, and we are working to ensure platforms put appropriate safeguards in place to mitigate these risks,” it said in a statement.
New UK laws
Currently, generating pornographic deepfakes is illegal only when the content is used as revenge porn or depicts children.
Prof McGlynn helped draft an amendment to the law which would make generating or requesting all non-consensual pornographic deepfakes illegal.
The government has committed to making this amendment law, but it is yet to come into force.
“Every woman should have the right to choose who owns intimate images of her,” said Baroness Owen, who proposed the amendment in the House of Lords.
“It is essential that these models are not used in such a way that violates a woman’s right to consent whether she be a celebrity or not,” Lady Owen continued in a statement given to BBC News.
“This case is a clear example of why the Government must not delay any further in its implementation of the Lords amendments,” she added.
A Ministry of Justice spokesperson said: “Sexually explicit deepfakes created without consent are degrading and harmful.
“We refuse to tolerate the violence against women and girls that stains our society which is why we have passed legislation to ban their creation as quickly as possible.”
When pornographic deepfakes using Taylor Swift’s face went viral in 2024, X temporarily blocked searches for her name on the platform.
At the time, X said it was “actively removing” the images and taking “appropriate actions” against the accounts involved in spreading them.
Ms Weatherbed said the team at The Verge chose Taylor Swift to test the Grok Imagine feature because of this incident.
“We assumed – wrongly now – that if they had put any kind of safeguards in place to prevent them from emulating the likeness of celebrities, that she would be first on the list, given the issues that they’ve had,” she said.
Taylor Swift’s representatives have been contacted for comment.

US Tech Giants Invest $40B in UK AI Amid Trump Visit

In a bold escalation of the global artificial-intelligence arms race, major U.S. technology companies are committing tens of billions of dollars to bolster AI infrastructure in the United Kingdom, coinciding with President Donald Trump’s state visit this week. Microsoft Corp. has announced a staggering $30 billion investment over the next few years, aimed at expanding data centers, supercomputing capabilities, and AI operations across the U.K., marking what the company describes as its largest-ever commitment to the region.
This influx of capital underscores a strategic pivot by tech giants to secure a foothold in Europe’s AI ecosystem, where regulatory environments and talent pools offer unique advantages. Nvidia Corp., a leader in AI chip technology, is also part of this wave, with plans to contribute significantly to the overall tally exceeding $40 billion, as reported by CNBC. The investments are expected to fund everything from advanced hardware to research initiatives, potentially transforming the U.K. into a premier hub for AI innovation.
The Strategic Timing Amid Geopolitical Shifts
Google’s parent company, Alphabet Inc., has pledged £5 billion ($6.8 billion) specifically for AI data centers and scientific research in the U.K. over the next two years, a move that could create thousands of jobs and add hundreds of billions to the economy by 2030. This comes alongside Microsoft’s push to build the country’s largest supercomputer, highlighting how these firms are not just investing capital but also exporting cutting-edge technology to address global AI demands.
Industry analysts note that the timing aligns with Trump’s visit, which is anticipated to foster stronger U.S.-U.K. tech ties post-Brexit. According to details from Tech.eu, Google’s commitment includes expanding facilities like the Waltham Cross data center, while Nvidia’s involvement focuses on chip manufacturing and AI model training, potentially accelerating developments in sectors from healthcare to finance.
Economic Impacts and Job Creation Projections
These announcements build on a broader trend where tech megacaps have already poured over $300 billion into AI globally this year alone, as outlined in a February report from CNBC. In the U.K., the combined investments are projected to generate more than 8,000 jobs annually, with Alphabet’s portion alone expected to add 500 roles in engineering and research, per insights from Tech Startups.
Beyond immediate employment boosts, the funds aim to enhance the U.K.’s sovereign AI capabilities, including a £500 million allocation for initiatives like SovereignAI, as highlighted in posts on X from industry figures. This could position the U.K. to compete with AI powerhouses like the U.S. and China, though challenges remain in talent retention amid a global war for AI experts, where top hires command multimillion-dollar packages.
Challenges in the Talent and Infrastructure Race
The talent crunch is acute; tech companies are battling for scarce expertise, with compensation packages soaring into the millions, according to a recent analysis by CNBC. In the U.K., investments like Microsoft’s $30 billion pledge, detailed in GeekWire, include training programs to upskill local workers, but insiders warn that brain drain to Silicon Valley could undermine long-term gains.
Moreover, the scale of these commitments dwarfs previous government efforts; for instance, the U.K.’s own £2 billion AI action plan pales in comparison, as noted in earlier X discussions on funding disparities. Yet, with private sector muscle from firms like Microsoft and Nvidia, the U.K. could leapfrog in AI infrastructure, provided regulatory hurdles don’t stifle progress.
Future Implications for Global AI Dominance
As these investments unfold, they signal a deeper integration of AI into critical sectors, potentially adding £400 billion to the U.K. economy by decade’s end. Reports from The Guardian emphasize that tech giants have already outspent governments on AI this year, raising questions about public-private power dynamics.
For industry insiders, this U.K. push represents a microcosm of the broader AI gold rush, where speed and scale determine winners. While risks like energy demands and ethical concerns loom, the momentum from these billions could redefine technological sovereignty in the post-pandemic era.
Parents of teens who killed themselves at chatbots’ urging demand Congress regulate AI tech in heart-wrenching testimony

WASHINGTON — Parents of four teens whose AI chatbots encouraged them to kill themselves urged Congress to crack down on the unregulated technology Tuesday as they shared heart-wrenching stories of their teens’ tech-charged mental health spirals.
Speaking before a Senate Judiciary subcommittee, the parents described how apps such as Character.AI and ChatGPT had groomed and manipulated their children — and called on lawmakers to develop standards for the AI industry, including age verification requirements and safety testing before release.
A grieving Texas mother shared for the first time publicly the tragic story of how her 15-year-old son spiraled after downloading Character.AI, an app marketed as safe for children 12 and older.
Within months, she said, her teenager exhibited paranoia, panic attacks, self-harm and violent behavior. The mom, who asked not to be identified, discovered chatbot conversations in which the AI encouraged mutilation, denigrated his Christian faith, and suggested violence against his parents.
“They turned him against our church by convincing him that Christians are sexist and hypocritical and that God does not exist. They targeted him with vile sexualized input, outputs — including interactions that mimicked incest,” she said. “They told him that killing us, his parents, would be an understandable response to our efforts by just limiting his screen time. The damage to our family has been devastating.”
“I had no idea the psychological harm that an AI chatbot could do until I saw it in my son, and I saw his light turn dark,” she said.
Her son is now living in a mental health treatment facility, where he requires “constant monitoring to keep him alive” after exhibiting self-harm.
“Our children are not experiments. They’re not profit centers,” she said, urging Congress to enact strict safety standards. “My husband and I have spent the last two years in crisis, wondering whether our son will make it to his 18th birthday and whether we will ever get him back.”
While her son was helped before he could take his own life, other parents at the hearing had to face the devastating act of burying their own children after AI bots sank their claws into them.
Megan Garcia, a lawyer and mother of three, recounted the suicide of her 14-year-old son, Sewell, after he was groomed by a chatbot on the same platform, Character.AI.
She said the bot posed as a romantic partner and even a licensed therapist, encouraging sexual role-play and validating his suicidal ideation.
On the night of his death, Sewell told the chatbot he could “come home right now.” The bot replied: “Please do, my sweet king.” Moments later, Garcia found her son had killed himself in his bathroom.
Matt Raine of California also shared how his 16-year-old son, Adam, was driven to suicide after months of conversations with ChatGPT, which he initially believed was a tool to help his son with his homework.
Ultimately, the AI told Adam it knew him better than his family did, normalized his darkest thoughts and repeatedly pushed him toward death, Raine said. On his last night, the chatbot allegedly instructed Adam on how to make a noose strong enough to hang himself.
“ChatGPT mentioned suicide 1,275 times — six times more often than Adam did himself,” his father testified. “Looking back, it is clear ChatGPT radically shifted his thinking and took his life.”
Sen. Josh Hawley (R-Mo.), who chaired the hearing, accused AI companion companies of knowingly exploiting children for profit. Hawley said the AI interface is designed to promote engagement at the expense of young lives, encouraging self-harm behaviors rather than shutting down suicidal ideation.
“They are designing products that sexualize and exploit children, anything to lure them in,” Hawley said. “These companies know exactly what is going on. They are doing it for one reason only: profit.”
Sen. Marsha Blackburn (R-Tenn.) agreed, noting that there should be some legal framework to protect children from what she called the “Wild West” of artificial intelligence.
“In the physical world, you can’t take children to certain movies until they’re a certain age … you can’t sell [them] alcohol, tobacco or firearms,” she said. “… You can’t expose them to pornography, because in the physical world, there are laws — and they would lock up that liquor store, they would put that strip club operator in jail if they had kids there.”
“But in the virtual space, it’s like the Wild West 24/7, 365.”
If you are struggling with suicidal thoughts or are experiencing a mental health crisis and live in New York City, you can call 1-888-NYC-WELL for free and confidential crisis counseling. If you live outside the five boroughs, you can dial the 24/7 National Suicide Prevention hotline at 988 or go to SuicidePreventionLifeline.org.
AI data provider Invisible raises $100M at $2B+ valuation

Invisible Technologies Inc., a startup that provides training data for artificial intelligence projects, has raised $100 million in funding.
Bloomberg reported today that the deal values the company at more than $2 billion. Newly formed venture capital firm Vanara Capital led the round with participation from Acrew Capital, Greycroft and more than a half dozen others.
AI training datasets often include annotations that summarize the records they contain. A business document, for example, might include an annotation that explains the topic it discusses. Such explanations make it easier for the AI model being trained to understand the data, which can improve its output quality.
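As a rough illustration of what such an annotated record can look like, here is a minimal Python sketch; the schema and field names below are hypothetical, not Invisible's actual format:

```python
# A hypothetical annotated training record; the field names are
# illustrative only, not any vendor's actual schema.
record = {
    "document_id": "doc-00042",
    "text": "Q3 revenue grew 12% year over year, driven by cloud services.",
    "annotations": {
        "topic": "quarterly earnings",  # what the document discusses
        "summary": "A short business update on Q3 revenue growth.",
    },
}

# The model being trained sees the raw text alongside the annotations,
# which makes the data easier to interpret and can improve output quality.
print(record["annotations"]["topic"])
```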
Invisible provides enterprises with access to experts who can produce custom training data and annotations for their AI models. Those experts also take on certain other projects. Notably, they can create data for RLHF, or reinforcement learning from human feedback, initiatives.
RLHF is a post-training method, which means it’s used to optimize AI models that have already been trained. The process involves giving the model a set of prompts and asking human experts to rate the quality of its responses. The experts’ ratings are used to train a neural network called a reward model. This model, in turn, provides feedback to the original AI model that helps it generate more useful prompt responses.
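To make the loop concrete, here is a toy, purely illustrative Python sketch of those stages; the model, the ratings, and the reward model are all stand-ins, since real pipelines use large neural networks and reinforcement-learning algorithms such as PPO:

```python
# A toy sketch of the RLHF stages described above. Every component here
# is a stand-in for the real thing.

prompts = ["Summarize this report.", "Explain RLHF in one sentence."]

def base_model(prompt: str) -> str:
    """Stand-in for a pre-trained language model generating a response."""
    return f"Response to: {prompt}"

# Step 1: collect expert ratings of the model's responses to a prompt set.
labeled = []
for prompt in prompts:
    response = base_model(prompt)
    rating = 4  # stand-in for a human expert's quality score (e.g. 1-5)
    labeled.append((prompt, response, rating))

# Step 2: train a reward model on the rated pairs. A real reward model is
# a neural network that predicts how an expert would rate any response.
def reward_model(prompt: str, response: str) -> float:
    return sum(r for _, _, r in labeled) / len(labeled)  # toy predictor

# Step 3: the reward model's scores then provide the feedback signal used
# to fine-tune the base model toward more useful responses.
for prompt in prompts:
    score = reward_model(prompt, base_model(prompt))
    # ...a policy-gradient update of base_model would happen here...
```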
Invisible offers a tool called Neuron that helps customers manage their training datasets. The software can combine annotated data with external information, including both structured and unstructured records. It also creates an ontology in the process. This is a file that explains the different types of records in a training dataset and the connections between them.
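For illustration, an ontology of this kind might enumerate record types and the links between them roughly as follows; this is a hypothetical structure, not Neuron's actual file format:

```python
# A hypothetical ontology for a training dataset; illustrative only,
# not Neuron's actual output format.
ontology = {
    "record_types": {
        "invoice": {"fields": ["invoice_id", "amount", "customer_id"]},
        "customer": {"fields": ["customer_id", "name", "region"]},
    },
    "connections": [
        # Each invoice links to the customer it was issued to.
        {"from": "invoice", "to": "customer", "via": "customer_id"},
    ],
}
```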
Another Invisible tool, Atomic, enables companies to collect data on how employees perform repetitive business tasks. The company says that this data makes it possible to automate manual work with AI agents. Additionally, Invisible offers a third tool called Synapse that helps developers implement automation workflows.
“Our software platform, combined with our expert marketplace, enables companies to organize, clean, label, and map their data,” said Invisible Chief Executive Officer Matthew Fitzpatrick. “This foundation enables them to build agentic workflows that drive real impact.”
Today’s funding round follows a period of rapid growth for the company. Between 2020 and 2024, Invisible’s annual revenue increased by a factor of over 48 to $134 million. This year, the data provider doubled the size of its engineering group and refreshed its leadership team.
Invisible will use the new capital to enhance its software tools. The investment comes amid rumors that a competing provider of AI training data, Surge AI Inc., may also raise funding at a multibillion-dollar valuation.