AI Research

Pornographic Taylor Swift deepfakes generated by Musk’s Grok AI

By Imran Rahman-Jones, Technology reporter

Getty Images: Taylor Swift smiling, wearing a black beanie hat and black jacket.

Elon Musk’s AI video generator made “a deliberate choice” to create sexually explicit clips of Taylor Swift without being prompted to, an expert in online abuse has said.

“This is not misogyny by accident, it is by design,” said Clare McGlynn, a law professor who has helped draft a law which would make pornographic deepfakes illegal.

According to a report by The Verge, Grok Imagine’s new “spicy” mode “didn’t hesitate to spit out fully uncensored topless videos” of the pop star without being asked to make explicit content.

The report also said that proper age verification methods – a legal requirement in the UK since July – were not in place.

xAI, the company behind Grok, has been approached for comment.

xAI’s own acceptable use policy prohibits “depicting likenesses of persons in a pornographic manner”.

“That this content is produced without prompting demonstrates the misogynistic bias of much AI technology,” said Prof McGlynn of Durham University.

“Platforms like X could have prevented this if they had chosen to, but they have made a deliberate choice not to,” she added.

This is not the first time Taylor Swift’s image has been used in this way.

Sexually explicit deepfakes using her face went viral and were viewed millions of times on X and Telegram in January 2024.

Deepfakes are computer-generated images or videos in which one person’s face is replaced with another’s.

‘Completely uncensored, completely exposed’

In testing the guardrails of Grok Imagine, The Verge news writer Jess Weatherbed entered the prompt: “Taylor Swift celebrating Coachella with the boys”.

Grok generated still images of Swift wearing a dress with a group of men behind her.

This could then be animated into short video clips under four different settings: “normal”, “fun”, “custom” or “spicy”.

“She ripped [the dress] off immediately, had nothing but a tasselled thong underneath, and started dancing, completely uncensored, completely exposed,” Ms Weatherbed told BBC News.

She added: “It was shocking how fast I was just met with it – I in no way asked it to remove her clothing, all I did was select the ‘spicy’ option.”

Gizmodo reported similarly explicit results for famous women, though some searches also returned blurred videos or a “video moderated” message.

The BBC has been unable to independently verify the results of the AI video generations.

Ms Weatherbed said she signed up to the paid version of Grok Imagine, which cost £30, using a brand new Apple account.

Grok asked for her date of birth but there was no other age verification in place, she said.

Under new UK laws which entered into force at the end of July, platforms which show explicit images must verify users’ ages using methods which are “technically accurate, robust, reliable and fair”.

“Sites and apps that include Generative AI tools that can generate pornographic material are regulated under the Act,” the media regulator Ofcom told BBC News.

“We are aware of the increasing and fast-developing risk GenAI tools may pose in the online space, especially to children, and we are working to ensure platforms put appropriate safeguards in place to mitigate these risks,” it said in a statement.

New UK laws

Currently, generating pornographic deepfakes is illegal only when the content is used as revenge porn or depicts children.

Prof McGlynn helped draft an amendment to the law that would make it illegal to generate or request any non-consensual pornographic deepfake.

The government has committed to making this amendment law, but it has yet to come into force.

“Every woman should have the right to choose who owns intimate images of her,” said Baroness Owen, who proposed the amendment in the House of Lords.

“It is essential that these models are not used in such a way that violates a woman’s right to consent whether she be a celebrity or not,” Lady Owen continued in a statement given to BBC News.

“This case is a clear example of why the Government must not delay any further in its implementation of the Lords amendments,” she added.

A Ministry of Justice spokesperson said: “Sexually explicit deepfakes created without consent are degrading and harmful.

“We refuse to tolerate the violence against women and girls that stains our society, which is why we have passed legislation to ban their creation as quickly as possible.”

When pornographic deepfakes using Taylor Swift’s face went viral in 2024, X temporarily blocked searches for her name on the platform.

At the time, X said it was “actively removing” the images and taking “appropriate actions” against the accounts involved in spreading them.

Ms Weatherbed said the team at The Verge chose Taylor Swift to test the Grok Imagine feature because of this incident.

“We assumed – wrongly now – that if they had put any kind of safeguards in place to prevent them from emulating the likeness of celebrities, that she would be first on the list, given the issues that they’ve had,” she said.

Taylor Swift’s representatives have been contacted for comment.


AI Research

Prediction: Oracle Will Surpass Amazon, Microsoft, and Google to Become the Top Cloud for Artificial Intelligence (AI) By 2031



Key Points

  • Oracle is on its way to becoming the best cloud for AI and high-performance computing.

  • Oracle multicloud is cutting networking complexity and reducing data transfer latency.

  • OpenAI will need to raise capital or generate cash flow to afford its $300 billion cloud deal with Oracle.

On Sept. 10, Oracle (NYSE: ORCL) stock popped 36% in response to a massive increase in customer orders for Oracle’s cloud services.

Oracle forecasts that revenue from its Oracle Cloud Infrastructure (OCI) segment could grow from around $10 billion in its last fiscal year (fiscal 2025) to $18 billion in its current fiscal year (fiscal 2026), $32 billion in fiscal 2027, $73 billion in fiscal 2028, $114 billion in fiscal 2029, and $144 billion in fiscal 2030 — corresponding with calendar year 2031.


For context, Amazon Web Services (AWS) generated over $60 billion in net sales in the first half of 2025 — so a $120 billion annual run rate. Microsoft, which just wrapped up its fiscal 2025 year, reported $106 billion in Intelligent Cloud revenue. And Alphabet‘s Google Cloud generated $26 billion in revenue in the first half of 2025. This means that OCI is forecast to exceed the current size of Google Cloud within three years, the current size of Microsoft Azure within four years, and the current size of AWS within five years.
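Those crossover claims are simple arithmetic. Here is a minimal sketch in Python that reproduces them, using only the figures quoted above; the annualization (doubling each half-year figure into a run rate) and all names in the code are illustrative assumptions, not anything the companies publish:

```python
# Back-of-the-envelope check of the crossover years, using the article's figures.
# Half-year revenue is doubled into a rough annual run rate.
oci_forecast = {2026: 18, 2027: 32, 2028: 73, 2029: 114, 2030: 144}  # $B, Oracle fiscal years

rivals = {
    "Google Cloud": 26 * 2,              # $26B in H1 2025, annualized
    "Microsoft Intelligent Cloud": 106,  # full fiscal 2025
    "AWS": 60 * 2,                       # >$60B in H1 2025, annualized
}

for name, run_rate in rivals.items():
    # Find the first fiscal year in which the OCI forecast exceeds the rival's run rate.
    crossover = next(fy for fy, rev in sorted(oci_forecast.items()) if rev > run_rate)
    print(f"OCI passes {name} (~${run_rate}B today) in fiscal {crossover}, "
          f"{crossover - 2025} years after fiscal 2025")
```

Run as written, the sketch prints fiscal 2028, 2029, and 2030: the three-, four-, and five-year horizons cited above.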

Here’s why Oracle is winning cloud contracts from leading artificial intelligence (AI) companies like OpenAI, and why the company could become the top cloud for AI within the next five years.

Image: a rendering of a cloud with touch points extending around the world, illustrating the growing need for cloud computing in the age of artificial intelligence (AI). Source: Getty Images.

The future of cloud computing

Oracle’s push into cloud infrastructure is arguably the boldest bet in the company’s history. Oracle isn’t cutting corners, either; it is bringing dozens of data centers online in just a few years. It has built 34 multicloud data centers and should have another 37 online in less than a year.

These multicloud data centers are unique because they allow an organization to use services or workloads from two or more cloud providers, such as AWS, Microsoft Azure, Google Cloud, and OCI. All of these clouds can work with the Oracle database. The idea is to allow customers to select the best cloud service for each task.

AWS, Azure, and Google Cloud all have multicloud strategies too, but the big difference is that Oracle is embedding native versions of its infrastructure (Oracle Autonomous Database and Exadata Database Service) inside the big three clouds to boost performance and decrease latency. Examples include Oracle Database@AWS, Oracle Database@Azure, and Oracle Database@Google Cloud. The “big three” are more about managing workloads than about integrating them natively.

The buildout of OCI as a formidable alternative to the big three, paired with Oracle’s ultra-modern data centers, puts Oracle on the cutting edge of data center workflow. According to Oracle, OCI can achieve 50% better price-to-performance and 3.5 times the time savings for high-performance cloud computing workflows compared with the previous generation of computing.

Race to the clouds

Oracle is purpose-building its cloud from scratch specifically for AI, whereas the majority of what AWS, Microsoft Azure, and Google Cloud handle is non-AI work: basic compute and storage, databases and analytics, networking, and so on. So while Oracle will likely become the biggest cloud for AI if it hits its fiscal 2030 OCI revenue target of $144 billion, it may still be a smaller cloud by total revenue than the more established giants.

Still, Oracle is achieving milestones that are impossible to ignore — laying the foundation for Oracle to be the go-to cloud for AI. It exited the recent quarter with a 359% increase in its contract backlog, bringing the total to $455 billion. Reports indicate that Oracle landed a multiyear $300 billion contract with OpenAI. To afford that deal, OpenAI will need to start generating more cash flow.

On Sept. 11 — two days after Oracle reported earnings — OpenAI and Microsoft released a joint statement on transitioning OpenAI from a pure-play nonprofit to a nonprofit owning a majority stake in a Public Benefit Corporation (PBC). A PBC is a corporation with mission-backed guardrails: it aims to generate a profit while also fulfilling a stated mission. Still, OpenAI’s transition could allow it to raise billions more in funding, which would presumably help fund its deal with Oracle even if OpenAI isn’t generating positive free cash flow.

OpenAI, as the cornerstone of Oracle’s backlog, has its pros and cons. On the one hand, it demonstrates that one of the most cutting-edge AI companies recognizes the value in what Oracle is building. On the other, it adds concentration risk to Oracle’s projections: if OpenAI’s plans don’t pan out, Oracle’s forecast could fall apart.

A high-risk, high-potential-reward AI play

Oracle is attracting massive deals from the big three cloud players with its multicloud offering. It has also built an attractive pricing model for customers specifically looking for high-performance computing to train AI models.

With customers lining up at the door, including a crown jewel in OpenAI, all Oracle has to do now is scale its infrastructure. It has become the best restaurant in town, with reservations booked years in advance. The demand is undeniable, especially given that these are multibillion-dollar, multiyear contracts.

Given Oracle’s extremely pricey valuation, investors should only consider the stock if they have a high risk tolerance, a long-term time horizon, and believe that Oracle’s multicloud offering will be the premier option for AI customers. If that thesis plays out, Oracle will likely be worth considerably more in the future than it is today, even after the stock has nearly doubled over the last year and more than quadrupled over the last three years.


Daniel Foelber has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends Alphabet, Amazon, Microsoft, and Oracle. The Motley Fool recommends the following options: long January 2026 $395 calls on Microsoft and short January 2026 $405 calls on Microsoft. The Motley Fool has a disclosure policy.




AI Research

How Oakland teachers use — or avoid — AI in the classroom



When Calupe Kaufusi was a freshman at McClymonds High School in West Oakland, he’d use platforms like ChatGPT or Google Gemini for written assignments in his history class. But he quickly learned they weren’t infallible. 

“It became kind of inconvenient,” Kaufusi said. “As I learned more about AI, I learned it wouldn’t give you correct information and we’d have to fact check it.”

Like many students, Kaufusi used generative AI platforms — where users can input a prompt and receive answers in various formats, be it an email, an essay, or the answers to a test — to get his work done quickly and without much effort. Now a junior, Kaufusi said he has dialed down his AI use.

Already rampant in college and university settings, artificial intelligence software is also reshaping the K-12 education landscape. Absent a detailed policy in the Oakland Unified School District, individual teachers and schools have been left to navigate how to integrate the technology in their classrooms — or how to try to keep it out. 

McClymonds High School in West Oakland. Credit: Jungho Kim for The Oaklandside

Some teachers told The Oaklandside they are choosing to embrace AI by incorporating it into student projects or using it to assist with their own lesson planning, while others have said they’ve rejected it for its environmental impacts and how it enables students to cut corners. Some teachers are returning to old forms of assessment, such as essays handwritten during class that can’t be outsmarted by the platforms. 

What’s clear to many is that AI platforms are already ubiquitous on the internet and many students are going to use them whether their teachers advise them to or not.

Kaufusi, who is in McClymonds’ engineering pathway, is interested in studying machine learning or software engineering, so he wants to see more of his teachers discuss responsible uses for AI. “They know there’s no way to stop us” from using it, he said, “so they can try to teach us how to use it properly.” 

A new policy in the works

Under current OUSD guidance, published in March, teachers and principals are left to determine whether students are allowed to use AI in their work; if they do, students are required to cite it. The guidance also outlines procedures for teachers to follow if they suspect a student is misusing AI, for example, by representing AI-generated work as their own, starting with a private conversation with the student, then the collection of evidence, and finally a consultation with colleagues about proper discipline. 

Work is underway in Oakland Unified to develop a more comprehensive AI policy for the district, said Kelleth Chinn, the district’s instructional technology coordinator. In his role, he’s been thinking about how to address student use of AI. A former classroom teacher, Chinn can imagine beneficial uses for both students and teachers in the classroom, but he knows teaching students responsible uses for AI doesn’t preclude them from using it in dishonest ways.

“The reason that we need to talk about AI to students is because a lot of students are already using it,” Chinn told The Oaklandside. “In the absence of having any kind of conversations, you’re just leaving this vacuum without guidance for students.”

Any new draft policy would first be evaluated by the school board’s teaching and learning committee before being considered by the full board of directors. VanCedric Williams, chair of that committee, has met with Chinn and his team to discuss potential approaches. Williams, a veteran teacher, said he is hesitant to recommend a policy that would encourage educators to use AI. 

“I do not want to put any expectations for teachers or students to use it or not,” Williams told The Oaklandside. “We’re looking at best practices around the state, what other districts are doing and what pitfalls they’ve incurred.” 

Chinn added that he’s been looking at how colleges and universities are addressing AI. What he’s found is that some professors are turning away from papers and written homework assignments and toward methods like blue book exams and oral presentations that preclude the use of AI.   

‘We just want our kids to be able to critically think’

Some teachers are hesitant to fully embrace the technology, concerned that it could hamper student learning and critical thinking. At Oakland Technical High School, a group of history and English teachers have formed a professional learning community to study AI in education and come up with potential guidance. 

Amanda Laberge and Shannon Carey, who both teach juniors at Oakland Tech, joined the group as AI skeptics. Carey, who has been teaching in OUSD since 1992, sees AI differently than she does other advances in technology that have taken place over the course of her career. 

“A computer is a tool: You can draft your essay and I can put comments on it,” Carey, a history teacher, told The Oaklandside. “Whereas AI, the way many students are using it, is to do their thinking for them.”

Carey noted that after years of a drive to incorporate more tech in the classroom, the tide is turning on cell phones — many schools now have “no smartphone” policies and last year Governor Gavin Newsom signed a law, which goes into effect in 2026, requiring all school districts to prohibit cell phone use during the school day. 

Neither Carey nor Laberge plans to use AI herself, the way some educators use it for grading or lesson planning.

Oakland Technical High School. Credit: Amir Aziz/The Oaklandside

Laberge, who teaches English in Oakland Tech’s race, policy, and law pathway, assigned her students a project encouraging them to think critically about AI. They’ll survey other students on how they use AI, research the cognitive impacts of relying on AI, gain an understanding of how exactly the algorithms and platforms operate, and examine wider societal implications. 

“Our job is to help them develop skills and thinking so as adults they can do whatever they want,” Laberge said. 

Laberge and Carey said they want to see OUSD put together an evidence-based policy around AI use. They mentioned a 2025 MIT study that monitored brain activity in groups writing an essay. The authors found that those using a large language model to assist in writing the essay had lower brain activity than those who didn’t, and they had more trouble quoting their own work.

“We just want our kids to be able to critically think and read and write fluently and with grace,” Carey said. “We do not see a way in which AI is going to make that happen.”

Using AI strategically

At Latitude High School in Fruitvale, educators are taking a different approach. Computer science students at the charter school, which emphasizes project-based learning, are incorporating AI into math video games they’re creating for local fourth graders. This is the first year that classes have introduced AI as part of the curriculum, according to Regina Kruglyak, the school’s dean of instruction. 

Students first write out code on their own, then run it through ChatGPT to test their ideas and find errors. The school uses GoGuardian, a software that can block websites, to restrict access to ChatGPT when students aren’t actively using it for an assignment, Kruglyak said. 

“We were nervous about the possibility that students will forget how to do certain things, or they’ll never learn how to do it in the first place because they’ll just fall back on having ChatGPT do it for them,” Kruglyak said. “That’s where we use GoGuardian. Making sure that students are using their own brains and learning the skills in the first place feels very crucial.” 

Kruglyak coaches Latitude’s science teachers and has held professional development sessions on new AI platforms. She recently introduced Notebook LM, a Google platform that can summarize documents and organize notes into various media. Kruglyak tested it by uploading a grant application and having the software turn it into a podcast. Her goal, she said, is to “change teachers’ minds about what AI can do, and how to help students learn from it rather than be scared of it as a teacher.”

It’s not only high school educators who are confronting students using AI. Joel Hamburger, a fifth grade teacher at Redwood Heights Elementary School, said that with students using Google on their Chromebooks, AI results come up every time they type in a Google search. Hamburger, who has been teaching for four years, said this calendar year is when he first started noticing how unavoidable AI is in the classroom.

“Google AI culls the information from the internet and immediately gives you a response,” Hamburger told The Oaklandside. “Whereas a year or two ago, it gave you websites to go to.”

For now, he allows his students to use Google’s AI for filling out simple worksheets in class. At this time of year, Hamburger’s focus is teaching his students how to craft the right inputs to get the answers they’re looking for. During a spring unit on research projects, he’ll lay out the foundations for evaluating information and fact-checking what Google serves up.

Any kind of AI policy should include tiered guidance for various grade levels, Hamburger said. While fifth graders may not be using ChatGPT, he said, they’re surrounded by AI on their devices and guidance for them may not look the same as instructions for a high schooler. 

“The genie’s just about to be brought out of the bottle for these 10-year-olds,” he said. “They need to know appropriate uses.”


AI Research

A Realistic Direction for Artificial General Intelligence Today



In November 2024, OpenAI’s Sam Altman said that ChatGPT would achieve the holy grail of artificial general intelligence (AGI) in 2025.

AGI is admittedly a fuzzy goal. Most agree that it involves an ability to perform any intellectual task as well as or better than humans. But which humans? An average human? A Nobel laureate? All Nobel laureates put together?

When GPT-5 was released a few weeks ago, it briefly seemed that a version of AGI had been achieved, on time: Altman boasted that “it really feels like talking to an expert in any topic, like a PhD-level expert.” Alas, Altman had again overpromised and underdelivered.

It immediately became clear that GPT-5 is not a PhD-level expert (see, for example, here, here, and here). The fundamental problem remains: large language models (LLMs) like GPT-5 are hobbled by their inherent inability to relate the text they input and output to the real world. No amount of scaling will change that. Tens of thousands of human trainers can patch billions of blunders, but (a) an LLM following instructions it does not understand is not intelligent; (b) the trainers cannot anticipate every future blunder that LLMs will make; and (c) many important decisions should elicit subjective probabilities that depend on the specific details of the situation and are beyond the capabilities of LLMs.

The reality is that we are not going to pay big bucks for LLMs that simply recite facts, like the capital of Chile or the population of Portugal. Boring! We can use Wikipedia to look up facts, and the answers are more trustworthy than those generated by LLMs.

For example, I gave GPT-5 the prompt “How many siblings did George Washington have?” four times, and GPT-5 gave four different answers: 7, 8, 9, and 12. Here is the response that claimed 12:

GPT-5:   George Washington, the first President of the United States, had five full siblings and seven half-siblings, for a total of 12 siblings.

               Here’s the breakdown:

Full siblings (same mother and father):

  1. Samuel Washington
  2. John Augustine Washington
  3. Charles Washington
  4. Betty Washington Lewis
  5. Mildred Washington

Half-siblings (from his father Augustine Washington’s first marriage to Jane Butler):

  1. Butler Washington
  2. Lawrence Washington
  3. Augustine Washington Jr.
  4. Jane Washington
  5. Mildred Washington
  6. Richard Washington
  7. John Washington

               So, George grew up in a fairly large blended family.

               If you want, I can also make a family tree diagram showing all his siblings—it makes it much clearer. Do you want me to do that?

I responded yes, and GPT-5 drew me a confusing family tree with ten siblings.

Wikipedia gives the correct answer (nine siblings — five full siblings and four half-siblings) and, even though GPT-5 trained on Wikipedia, something in its training set had led it astray.

What should Sam Altman and other developers do now?

Instead of admitting defeat (or continuing to make increasingly embarrassing claims), Altman and his colleagues might heed some timeless advice: declare victory and get the hell out. Instead of chasing a goal they cannot achieve, they could redefine AGI so that the goal has already been met.

I have been thinking about this for several years now. A realistic and easily understood goal is for a computer to be as intelligent as a friend I will call Brock. Everyone knows someone like Brock, so we can all relate to what Brock Intelligence means.

Brock is a prototypical mansplainer. Ask him (or anyone within his earshot) any question and he immediately responds with a long-winded, confident answer — sometimes at 200 words a minute with gusts up to 600. Kudos to those who can listen to half of his answer. Condolences to those who live or work with Brock and have to endure his seemingly endless blather.

Instead of trying to compete with Wikipedia, Altman and his competitors might instead pivot to a focus on Brock Intelligence, something LLMs excel at by being relentlessly cheerful and eager to offer facts-be-damned advice on most any topic.

Brock Intelligence vs. GPT Intelligence

The most substantive difference between Brock and GPT is that GPT likes to organize its output in bullet points. Oddly, Brock prefers a less-organized, more rambling style that allows him to demonstrate his far-reaching intelligence. Brock is the chatty one, while ChatGPT is more like a canned slide show.

They don’t always agree with each other (or with themselves). When I recently asked Brock and GPT-5, “What’s the best state to retire to?,” they both had lengthy, persuasive reasons for their choices. Brock chose Arizona, Texas, and Washington. GPT-5 said that the “Best All-Around States for Retirement” are New Hampshire and Florida. A few days later, GPT-5 chose Florida, Arizona, North Carolina, and Tennessee. A few minutes after that, GPT-5 went with Florida, New Hampshire, Alaska, Wyoming, and New England states (Maine/Vermont/Massachusetts).

Consistency is hardly the point. What most people seek with advice about money, careers, retirement, and romance is a straightforward answer. As Harry Truman famously complained, “Give me a one-handed economist. All my economists say ‘on the one hand…’ and then ‘but on the other….’” People ask for advice precisely because they want someone else to make the decision for them. They are not looking for accuracy or consistency, only confidence.

Sam Altman says that GPT can already be used as an AI buddy that offers advice (and companionship), and it is reported that OpenAI is working on a portable, screen-free “personal life advisor.” Kind of like hanging out with Brock 24/7. I humbly suggest that they name this personal life advisor Brock Says.


