From Cyberbullying to AI-Generated Content – McAfee’s Research Reveals the Shocking Risks

The landscape of online threats targeting children has evolved into a complex web of dangers that extend far beyond simple scams. New research from McAfee reveals that parents now rank cyberbullying as their single highest concern, with nearly one in four families (22%) reporting their child has already been targeted by some form of online threat. The risks spike dramatically during the middle school years and peak around age 13, precisely when children gain digital independence but may lack the knowledge and tools to protect themselves.
The findings paint a troubling picture of digital childhood, where traditional dangers like cyberbullying persist alongside emerging threats like AI-generated deepfakes, “nudify” technology, and sophisticated manipulation tactics that can devastate young people’s mental health and safety.
Cyberbullying is Parents’ Top Concern
Cyberbullying and harassment remain among the most damaging threats to young people’s digital lives. The research shows that 43% of children who have encountered online threats experienced cyberbullying, making it the most common threat families face. The impact disproportionately affects girls, with more than half of targeted girls (51%) experiencing cyberbullying compared to 39% of boys.
The peak vulnerability occurs during early adolescence, with 62% of targeted girls and 52% of targeted boys aged 13-15 facing harassment online. For parents of teen daughters aged 13-15, cyberbullying ranks as the top concern for 17% of families, reflecting the real-world impact these digital attacks have on young people’s well-being.
AI-Generated Content Creates New Dangers
The emergence of AI-powered manipulation tools has introduced unprecedented risks to children’s online safety. Nearly one in five targeted kids (19%) have faced deepfake and “nudify” app misuse, with rates doubling to 38% among girls aged 13-15. These statistics become even more alarming when considering that 18% of parents overall list AI-generated deepfakes and nudify technology among their top three concerns, rising to one in three parents (33%) under age 35.
The broader landscape of AI-generated content exposure is widespread, with significant implications for how children understand truth and authenticity online. The research underscores the challenge parents face in preparing their children to navigate an environment where sophisticated forgeries can be created and distributed with relative ease.
“Today’s online threats aren’t abstract risks — families are facing them every day,” said Abhishek Karnik, head of threat research for McAfee. “Parents’ top concerns are the toll harmful content, particularly cyberbullying and AI-generated deepfakes, takes on their children’s mental health, self-image, and safety. That’s why it’s critical to pair AI-powered online protection with open, ongoing conversations about what kids encounter online. When children know how to recognize risks and misinformation and feel safe talking about these issues with loved ones, they’re better prepared to navigate the digital world with confidence.”
The Growing Confidence Gap
As digital threats become more sophisticated, parents find themselves increasingly outpaced by both technology and their children’s technical abilities. The research reveals that nearly half of parents (48%) admit their child knows more about technology than they do, while 42% say it’s challenging to keep up with the pace of evolving risks.
This knowledge disparity creates real vulnerabilities in family digital safety strategies. Only 34% of parents feel very confident their child can distinguish between real and fake content online, particularly when it comes to AI-generated material or misinformation. The confidence crisis deepens as children age and gain more independence online, precisely when threats become most complex and potentially harmful.
The monitoring habits of families reflect these growing challenges. While parents identify late at night (56%) and after school (41%) as the times when children face the greatest online risks, monitoring practices don’t align with these danger windows. Only about a third of parents (33%) check devices daily, and 41% review them weekly, creating significant gaps in oversight during high-risk periods.
Age-Related Patterns Reveal Critical Vulnerabilities
The research uncovers troubling patterns in how online safety behaviors change as children mature. While 95% of parents report discussing online safety with their children, the frequency and effectiveness of these conversations decline as kids enter their teen years. Regular safety discussions drop from 63% with younger children to just 54% with teenagers, even as threats become more severe and complex.
Daily device monitoring shows even sharper declines, plummeting to just 20% for boys aged 16-18 and dropping as low as 6-9% for girls aged 17-18. This reduction in oversight occurs precisely when older teens face heightened risks of blackmail, “scamtortion,” and other sophisticated threats. The research shows that more than half of targeted boys aged 16-18 (53%) have experienced threats to release fake or real content, representing one of the most psychologically damaging forms of online exploitation.
Gaming and Financial Exploitation
Online gaming platforms have become significant vectors for exploitation, particularly targeting boys. The research shows that 30% of children who have been targeted experienced online gaming scams or manipulation, with the rate climbing to 43% among targeted boys aged 13-15. These platforms often combine social interaction with financial incentives, creating opportunities for bad actors to manipulate young users through false friendships, fake rewards, and pressure tactics.
Real-World Consequences Extend Beyond Screens
The emotional and social impact of online threats creates lasting effects that extend well into children’s offline lives. Among families whose children have been targeted, the consequences reach far beyond momentary embarrassment or frustration. The research shows that 42% of affected families report their children experienced anxiety, felt unsafe, or were embarrassed after online incidents.
The social ramifications prove equally significant, with 37% of families dealing with issues that spilled over into school performance or friendships. Perhaps most concerning, 31% of affected children withdrew from technology altogether after negative experiences, potentially limiting their ability to develop healthy digital literacy skills and participate fully in an increasingly connected world.
The severity of these impacts has driven many families to seek professional support, with 26% requiring therapy or counseling to help their children cope with online harms. This statistic underscores that digital threats can create trauma requiring the same level of professional intervention as offline dangers.
Building Trust Through Technology Agreements
Creating a foundation for open dialogue about digital safety starts with establishing clear expectations and boundaries. McAfee’s Family Tech Pledge provides parents with a structured framework to initiate these crucial conversations with their children about responsible device use. Currently, few families have implemented formal agreements about technology use, representing a significant opportunity for improving digital safety through collaborative rule-setting.
A technology pledge serves as more than just a set of rules; it becomes a collaborative tool that helps parents and children discuss the reasoning behind safe online practices. By involving children in the creation of these agreements, families can address age-appropriate concerns while building trust and understanding. The process naturally opens doors to conversations about the threats identified in the research, from predators and cyberbullying to AI-generated content and manipulation attempts.
These agreements work best when they evolve alongside children’s digital maturity. What starts as basic screen time limits for younger children can expand to include discussions about social media interactions, sharing personal information, and recognizing suspicious content as they enter their teen years. The key is making the technology pledge a living document that adapts to new platforms, emerging threats, and changing family circumstances.
Advanced Protection Through AI-Powered Detection
While conversations and agreements form the foundation of digital safety, today’s threat landscape requires technological solutions that can keep pace with rapidly evolving risks. McAfee’s Scam Detector represents a crucial additional layer of defense, using artificial intelligence to identify and flag suspicious links, manipulated content, and potential threats before they can cause harm.
The tool’s AI-powered approach is particularly valuable given the research findings about manipulated media and deepfake content. With AI-generated content increasingly weaponized against children, especially teenage girls, automated detection becomes essential for catching threats that might bypass both parental oversight and children’s developing digital literacy skills.
For parents who feel overwhelmed by the pace of technological change (42% report struggling to keep up with the risk landscape), Scam Detector provides professional-grade protection without requiring extensive technical knowledge. It offers families a way to maintain security while fostering the trust and communication that the research shows are essential for long-term digital safety.
The technology is especially crucial during the high-risk periods identified in the research. Since 56% of parents recognize that late-night hours present the greatest danger, and monitoring naturally decreases during these times, automated protection tools can provide continuous vigilance when human oversight is most difficult to maintain.
A Path Forward for Families
The research reveals that addressing online threats requires a comprehensive approach combining technology, communication, and ongoing education. Parents need practical tools and strategies that can evolve with both the threat landscape and their children’s developing digital independence.
Effective protection starts with pairing parental controls with regular, judgment-free conversations about harmful content, coercion, and bullying, ensuring children know they can seek help without fear of punishment or restrictions. Teaching children to “trust but verify” by checking sources and asking for help when something feels suspicious becomes especially important as AI-generated content makes deception increasingly sophisticated.
Keeping devices secure with updated security settings and AI-powered protection tools like McAfee’s Scam Detector helps create multiple layers of defense against evolving threats. These technological safeguards work best when combined with family agreements that establish clear expectations for online behavior and regular check-ins that maintain open communication as children mature.
Research Methodology
This comprehensive analysis is based on an online survey conducted in August 2025 of approximately 4,300 parents or guardians of children under 18 across Australia, France, Germany, India, Japan, the United Kingdom, and the United States. The research provides crucial insights into the current state of children’s online safety and the challenges families face in protecting their digital natives from increasingly sophisticated threats.
The data reveals that today’s parents are navigating unprecedented challenges in protecting their children online, with peak vulnerability occurring during the middle school years when digital independence collides with developing judgment and incomplete knowledge of online risks. While the threats may be evolving and complex, the research shows that informed, proactive families who combine technology tools with open communication are better positioned to help their children develop the skills needed to safely navigate the digital world.
Measuring Machine Intelligence Using Turing Test 2.0

In 1950, British mathematician Alan Turing (1912–1954) proposed a simple way to test artificial intelligence. His idea, known as the Turing Test, was to see if a computer could carry on a text-based conversation so well that a human judge could not reliably tell it apart from another human. If the computer could “fool” the judge, Turing argued, it should be considered intelligent.
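Turing’s setup is easy to state as a protocol. As a rough, hypothetical sketch (the function names are illustrative stand-ins, not anything from Turing’s paper or this article), the imitation game reduces to a loop in which a judge questions two hidden respondents and then guesses which is the machine:

```python
import random

def human_reply(prompt: str) -> str:
    # Stand-in for a human confederate typing at a terminal.
    return input(f"[to the human] {prompt}\n> ")

def machine_reply(prompt: str) -> str:
    # Stand-in for a chatbot; a real test would call a language model here.
    return "Let me think about that for a moment..."

def imitation_game(questions: list[str]) -> bool:
    """One session of the imitation game. Returns True if the
    machine 'passes', i.e. the judge guesses the wrong seat."""
    machine_seat = random.choice(["A", "B"])  # hide which seat is the machine
    respondents = {
        "A": machine_reply if machine_seat == "A" else human_reply,
        "B": human_reply if machine_seat == "A" else machine_reply,
    }
    for q in questions:
        for seat in ("A", "B"):
            print(f"{seat}: {respondents[seat](q)}")
    guess = input("Judge: which seat is the machine, A or B? ").strip().upper()
    return guess != machine_seat

if __name__ == "__main__":
    passed = imitation_game(["What is your favorite childhood memory?"])
    print("Machine passed this round." if passed else "Judge spotted the machine.")
```

Note what the protocol measures: only the judge’s inability to tell the respondents apart, never anything about how the machine produced its answers. That gap is precisely what the critics discussed below seize on.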
For decades, Turing’s test shaped public understanding of AI. Yet as technology has advanced, many researchers have asked whether imitating human conversation really proves intelligence — or whether it only shows that machines can mimic certain human behaviors. Large language models like ChatGPT can already hold convincing conversations. But does that mean they understand what they are saying?
In a Mind Matters podcast interview, Dr. Georgios Mappouras tells host Robert J. Marks that the answer is no. In a recent paper, The General Intelligence Threshold, Mappouras introduces what he calls Turing Test 2.0. This updated approach sets a higher bar for intelligence than simply chatting like a human. It asks whether machines can go beyond imitation to produce new knowledge.
From information to knowledge
At the heart of Mappouras’s proposal is a distinction between two kinds of information, non-functional and functional:
- Non-functional information is raw data or observations that don’t lead to new insights by themselves. One example would be noticing that an apple falls from a tree.
- Functional information is knowledge that can be applied to achieve something new. When Isaac Newton connected the falling apple to the force of gravity, he transformed ordinary observation into scientific law.
True intelligence, Mappouras argues, is the ability to transform non-functional information into functional knowledge. This creative leap is what allows humans to build skyscrapers, develop medicine, and travel to the moon. A machine that merely rearranges words or retrieves facts cannot be said to have reached the same level.
The General Intelligence Threshold
Mappouras calls this standard the General Intelligence Threshold. His threshold sets a simple challenge: given existing knowledge and raw information, can the system generate new insights that were not directly programmed into it?
This threshold does not require constant displays of brilliance. Even one undeniable breakthrough — a “flash of genius” — would be enough to demonstrate that a machine possesses general intelligence. Just as a person who excels in math but not physics still counts as intelligent, a machine would only need to show creativity once to prove its potential.
Creativity and open problems
One way to apply the new test is through unsolved problems in mathematics. Throughout history, breakthroughs such as Andrew Wiles’s proof of Fermat’s Last Theorem or Grigori Perelman’s solution to the Poincaré Conjecture marked milestones of human creativity. If AI could solve open problems like the Riemann Hypothesis or the Collatz Conjecture — problems that no one has ever solved before — it would be strong evidence that the system had crossed the threshold into true intelligence.
Large language models already solve equations and perform advanced calculations, but solving a centuries-old unsolved problem would show something far deeper: the ability to create knowledge that has never existed before.
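To see why such problems make a demanding benchmark, consider the Collatz Conjecture. Its rule can be stated in a few lines of code; the sketch below (an illustration, not drawn from Mappouras’s paper) counts the iterations until a number reaches 1. The conjecture claims this loop terminates for every positive integer, and despite the rule’s simplicity, no proof exists:

```python
def collatz_steps(n: int) -> int:
    """Count iterations of the Collatz rule (halve if even,
    3n + 1 if odd) until n reaches 1."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

# The rule has been verified by brute force for enormous ranges of n,
# but a proof covering *every* positive integer remains out of reach.
print(collatz_steps(27))  # 27 famously takes 111 steps to reach 1
```

Checking billions of cases this way is trivial for a computer; producing a proof that covers all cases is exactly the kind of new functional knowledge Mappouras’s threshold demands.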
Beyond symbol manipulation
Mappouras also draws on philosopher John Searle’s famous “Chinese Room” thought experiment. In the scenario, a person who does not understand Chinese sits in a room with a rulebook for manipulating Chinese characters. By following instructions, the person produces outputs that convince outsiders he understands the language, even though he does not.
This scenario, Searle argued, shows that a computer might appear intelligent without real understanding. Mappouras agrees but goes further. For him, real intelligence is proven not just by producing outputs, but by acting on new knowledge. If the instructions in the Chinese Room included a way to escape, the person could only succeed if he truly understood what the words meant. In the same way, AI must demonstrate it can act meaningfully on information, not just shuffle symbols.
Can AI pass the new test?
So far, Mappouras does not think modern AI has passed the General Intelligence Threshold. Systems like ChatGPT may look impressive, but their apparent creativity usually comes from patterns in the massive data sets on which they were trained. They have not shown the ability to produce new, independent knowledge disconnected from prior inputs.
That said, Mappouras emphasizes that success would not require constant novelty. One true act of creativity — an undeniable demonstration of new knowledge — would be enough. Until that happens, he remains cautious about claims that today’s AI is truly intelligent.
A shift in the debate
The debate over artificial intelligence is shifting. The original Turing Test asked whether machines could fool us into thinking they were human. Turing Test 2.0 asks a harder question: can they discover something new?
Mappouras believes this is the real measure of intelligence. Intelligence is not imitation — it is innovation. Whether machines will ever cross that line remains uncertain. But if they do, the world will not just be talking with computers. We will be learning from them.
Final thoughts: Today’s systems, tomorrow’s threshold
Models like ChatGPT and Grok are remarkable at conversation, summarization, and problem-solving within known domains, but their strengths still reflect pattern learning from vast training data. By Mappouras’s standard, they will cross the General Intelligence Threshold only when they produce a verifiable breakthrough — an insight not traceable to prior text or human scaffolding, such as an original solution to a major open problem. Until then, they remain powerful imitators and accelerators of human work — impressive, useful, and transformative, but not yet creators of genuinely new knowledge.