
AI Insights

Pornhub to introduce ‘government approved’ UK age checks

Chris Vallance & Liv McMahon

Technology reporters


Pornhub and a number of other major adult websites have confirmed they will introduce enhanced age checks for users from next month.

Parent company Aylo says it is bringing in “government approved age assurance methods” but has not yet revealed how it will require users to prove they are over 18.

Regulator Ofcom has previously said simply clicking a button, which is all the adult site currently requires, is not enough.

Ofcom said the changes would “bring pornography into line with how we treat adult services in the real world.”

The Online Safety Act requires adult sites to introduce “robust” age checking techniques by this summer.

Approved measures include demanding photo ID or running credit card checks before users can view sexually explicit material.

“Society has long protected youngsters from products that aren’t suitable for them, from alcohol to smoking or gambling,” said Oliver Griffiths, Ofcom’s group director of online safety, in a statement.

“For too long children have been only a click away from harmful pornography online.”

Mr Griffiths said assurances from Aylo and several other porn providers, including Stripchat and Streamate, regarding the introduction of new age checks showed “change is happening”.

The regulator said its recent research indicated 8% of children aged 8-14 in the UK had visited an online porn site or app over a 28-day period.

This included about 3% of eight to nine year olds, its survey suggests.

Derek Ray-Hill, interim chief executive at the Internet Watch Foundation, warned that children’s exposure to online porn at a very young age, or to violent sexual material, could normalise harmful behaviour offline.

“We welcome platforms doing all they can to comply with the Online Safety Act and prevent children accessing pornography,” he said.

“We know that highly effective age assurance can play a vital role in protecting young users from accessing harmful and inappropriate material on social media and other platforms,” said Rani Govender, policy manager for child safety online at the NSPCC.

“It is time tech companies take responsibility for ensuring children have safe, age-appropriate experiences online, and we welcome the progress that Ofcom are making in this space.”

Scrutiny over child safety


“Click away” age gates on porn sites including Pornhub have been criticised by regulators.

Pornhub is the most visited porn site in the UK and around the world, according to data from Similarweb.

It has been under scrutiny by regulators worldwide over its measures to prevent children accessing adult content.

The European Commission announced an investigation into Pornhub, along with two other adult platforms, at the end of May.

In the UK, Ofcom is probing several adult sites it believes may be failing to abide by its child safety rules.

Aylo’s vice president of brand and community, Alex Kekesi, said Ofcom had presented a variety of flexible age assurance methods that were less intrusive than those the company had seen in other jurisdictions.

“Ofcom recognises the scale of the challenge ahead and is approaching it with thorough consideration,” she said.

The regulator’s model is “the most robust in terms of actual and meaningful protection we’ve seen to date,” she added.

“When governments and regulators engage with industry in good faith, the outcome is not just better compliance, it’s smarter, more effective solutions”.

Aylo said it would introduce the new methods to check user ages on its sites by 25 July, but so far has not spelt out what techniques it will use to verify age.

It says it will detail the measures closer to the July enforcement date, but users will be offered a range of options.

Under the Online Safety Act, providers of platforms where children could encounter porn and harmful content must have measures in place to stop them accessing it.

The Act requires this to take place chiefly through the use of technology that is “highly effective” in determining whether a user is 18.

Ofcom said in January this could include solutions such as photo ID matching, digital identity services or facial age estimation.
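None of the providers has said which scheme it will adopt, but a common pattern behind such age checks is to delegate the intrusive step (ID matching, facial age estimation) to a third-party provider, which hands back a signed "over-18" token the site can verify without ever seeing identity documents. As a purely illustrative sketch, with every name and the HMAC-based scheme being assumptions rather than anything Aylo or Ofcom has described:

```python
import hashlib
import hmac

# Hypothetical shared secret between the adult site and an external
# age-assurance provider. In a real deployment this would more likely be
# an asymmetric signature; HMAC keeps the sketch self-contained.
PROVIDER_SECRET = b"shared-secret-with-provider"

def issue_token(session_id: str) -> str:
    """What the provider might return after a successful age check:
    the session id plus an HMAC signature over it."""
    sig = hmac.new(PROVIDER_SECRET, session_id.encode(), hashlib.sha256).hexdigest()
    return f"{session_id}:{sig}"

def is_over_18(token: str) -> bool:
    """Site-side check: accept only tokens the provider actually signed.
    The site learns nothing about the user beyond the pass/fail result."""
    try:
        session_id, sig = token.rsplit(":", 1)
    except ValueError:
        return False
    expected = hmac.new(PROVIDER_SECRET, session_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

print(is_over_18(issue_token("abc123")))  # a valid token passes
print(is_over_18("abc123:forged"))        # a forged token is rejected
```

The privacy argument for this design is that the identity check and the browsing site never share a database: the site only ever sees a yes/no attestation.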

Porn providers that fail to meet the Act’s requirements could face enforcement action such as huge fines.

‘Greater danger’

Some experts and digital rights groups, as well as porn providers themselves, are concerned that regulators’ efforts to compel porn sites to verify the age of users could have privacy and security implications.

Civil liberties organisation the Open Rights Group is concerned the Online Safety Act age check requirements for platforms may introduce “a wide range of new cybersecurity risks for users,” or even push children towards more dangerous sites.

“Speaking as a parent I’m worried that children and young people who attempt to bypass age checks may inadvertently expose themselves to greater online harms,” said James Baker, its platform power and free expression program manager.

“These include stumbling across unregulated or underground sites, installing malware, or being targeted by exploitative actors in less moderated spaces,” he told the BBC.

“In trying to ‘protect’ them, we risk pushing them toward greater danger.”

The group said it also doubts the UK’s data protection regime or regulatory enforcement is “robust enough” to ensure user safety around possible collection of sensitive identity data or linking it to browsing habits.




Do AI systems socially interact the same way as living beings?

Key takeaways

  • A new study that compares biological brains with artificial intelligence systems analyzed the neural network patterns that emerged during social and non-social tasks in mice and programmed artificial intelligence agents.
  • UCLA researchers identified high-dimensional “shared” and “unique” neural subspaces both when mice interacted socially and when AI agents engaged in social behaviors.
  • Findings could help advance understanding of human social disorders and develop AI that can understand and engage in social interactions.

As AI systems are increasingly integrated into everyday life, from virtual assistants and customer service agents to counseling and AI companions, an understanding of social neural dynamics is essential for both scientific and technological progress. A new study from UCLA researchers shows biological brains and AI systems develop remarkably similar neural patterns during social interaction.

The study, recently published in the journal Nature, reveals that when mice interact socially, specific brain cell types synchronize in “shared neural spaces,” and artificial intelligence agents develop analogous patterns when engaging in social behaviors.

The new research represents a striking convergence of neuroscience and artificial intelligence, two of today’s most rapidly advancing fields. By directly comparing how biological brains and AI systems process social information, scientists can now better understand fundamental principles that govern social cognition across different types of intelligent systems. The findings could advance understanding of social disorders like autism while simultaneously informing the development of more sophisticated, socially aware AI systems.

This work was supported in part by , the National Science Foundation, the Packard Foundation, Vallee Foundation, Mallinckrodt Foundation and the Brain and Behavior Research Foundation.

Examining AI agents’ social behavior

A multidisciplinary team from UCLA’s departments of neurobiology, biological chemistry, bioengineering, electrical and computer engineering, and computer science across the David Geffen School of Medicine and UCLA Samueli School of Engineering used advanced brain imaging techniques to record activity from molecularly defined neurons in the dorsomedial prefrontal cortex of mice during social interactions. The researchers developed a novel computational framework to identify high-dimensional “shared” and “unique” neural subspaces across interacting individuals. The team then trained artificial intelligence agents to interact socially and applied the same analytical framework to examine neural network patterns in AI systems that emerged during social versus non-social tasks.

The research revealed striking parallels between biological and artificial systems during social interaction. In both mice and AI systems, neural activity could be partitioned into two distinct components: a “shared neural subspace” containing synchronized patterns between interacting entities, and a “unique neural subspace” containing activity specific to each individual.
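The paper's full analytical framework is not reproduced here, but the idea of splitting one individual's neural activity into a shared component (co-varying with a partner's activity) and a unique residual can be sketched on toy data. The synthetic data, dimensions, and the SVD-of-cross-covariance method below are illustrative assumptions, not the authors' pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: activity from two interacting agents (neurons x time bins).
# A low-dimensional latent signal is shared across both, plus private noise.
T = 500
shared_latent = rng.standard_normal((2, T))  # 2-D signal common to both
X = rng.standard_normal((30, 2)) @ shared_latent + 0.5 * rng.standard_normal((30, T))
Y = rng.standard_normal((40, 2)) @ shared_latent + 0.5 * rng.standard_normal((40, T))

def shared_unique_split(X, Y, k):
    """Split X's activity into a 'shared' subspace (directions that
    co-vary with Y) and a 'unique' residual, via an SVD of the
    cross-covariance between the two populations."""
    Xc = X - X.mean(axis=1, keepdims=True)
    Yc = Y - Y.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(Xc @ Yc.T / Xc.shape[1])
    B = U[:, :k]             # top-k cross-covarying directions in X's space
    shared = B @ (B.T @ Xc)  # orthogonal projection onto that subspace
    unique = Xc - shared     # residual activity private to this agent
    return shared, unique

shared, unique = shared_unique_split(X, Y, k=2)
energy = lambda M: float((M**2).sum())
frac = energy(shared) / (energy(shared) + energy(unique))
print(f"shared fraction of variance: {frac:.2f}")
```

Because the projection is orthogonal, the two components partition the total variance, which is what lets one ask, as the study does, how much of each cell type's activity lives in the shared versus the unique subspace.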

Remarkably, GABAergic neurons — inhibitory brain cells that regulate neural activity — showed significantly larger shared neural spaces compared with glutamatergic neurons, which are the brain’s primary excitatory cells. This represents the first investigation of inter-brain neural dynamics in molecularly defined cell types, revealing previously unknown differences in how specific neuron types contribute to social synchronization.

When the same analytical framework was applied to AI agents, shared neural dynamics emerged as the artificial systems developed social interaction capabilities. Most importantly, when researchers selectively disrupted these shared neural components in artificial systems, social behaviors were substantially reduced, providing direct evidence that synchronized neural patterns causally drive social interactions.

The study also revealed that shared neural dynamics don’t simply reflect coordinated behaviors between individuals, but emerge from representations of each other’s unique behavioral actions during social interaction.

“This discovery fundamentally changes how we think about social behavior across all intelligent systems,” said Weizhe Hong, professor of neurobiology, biological chemistry and bioengineering at UCLA and lead author of the new work. “We’ve shown for the first time that the neural mechanisms driving social interaction are remarkably similar between biological brains and artificial intelligence systems. This suggests we’ve identified a fundamental principle of how any intelligent system — whether biological or artificial — processes social information. The implications are significant for both understanding human social disorders and developing AI that can truly understand and engage in social interactions.”

Continuing research for treating social disorders and training AI

The research team plans to further investigate shared neural dynamics in different and potentially more complex social interactions. They also aim to explore how disruptions in shared neural space might contribute to social disorders and whether therapeutic interventions could restore healthy patterns of inter-brain synchronization. The artificial intelligence framework may serve as a platform for testing hypotheses about social neural mechanisms that are difficult to examine directly in biological systems. They also aim to develop methods to train socially intelligent AI.

The study was led by UCLA’s Hong and Jonathan Kao, associate professor of electrical and computer engineering. Co-first authors Xingjian Zhang and Nguyen Phi, along with collaborators Qin Li, Ryan Gorzek, Niklas Zwingenberger, Shan Huang, John Zhou, Lyle Kingsbury, Tara Raam, Ye Emily Wu and Don Wei contributed to the research.




I tried recreating memories with Veo 3 and it went better than I thought, with one big exception


If someone offers to make an AI video recreation of your wedding, just say no. This is the tough lesson I learned when I started trying to recreate memories with Google’s Gemini Veo model. What started off as a fun exercise ended in disgust.

I grew up in the era before digital capture. We took photos and videos, but most were squirreled away in boxes that we only dragged out for special occasions. Things like the birth of my children and their earliest years were caught on film and 8mm videotape.




That’s Our Show

Published on July 07, 2025

This is the last episode of the most meaningful project we’ve ever been part of.

The Amys couldn’t imagine signing off without telling you why the podcast is ending, reminiscing with founding producer Amanda Kersey, and fitting in two final Ask the Amys questions. HBR’s Maureen Hoch is here too, to tell the origin story of the show—because it was her idea, and a good one, right?

Saying goodbye to all the women who’ve listened since 2018 is gut-wrenching. If the podcast made a difference in your life, please bring us to tears/make us smile with an email: womenatwork@hbr.org.

If and when you do that, you’ll receive an auto reply that includes a list of episodes organized by topic. Hopefully that will direct you to perspectives and advice that’ll help you make sense of your experiences, aim high, go after what you need, get through tough times, and take care of yourself. That’s the sort of insight and support we’ve spent the past eight years aiming to give this audience, and you all have in turn given so much back—to the Women at Work team and to one another.


