
Tools & Platforms

AI or Real? How you can spot real content versus AI-manipulated fakes



This investigative report contains AI-generated images and videos created to show you the best ways to identify real versus AI.

This is the first part of InvestigateTV’s mAnIpulated series, examining the ways in which AI is impacting our everyday lives. Visit the series homepage to follow each national release as well as reports from your local Gray Media stations.

(InvestigateTV) — Content generated and manipulated with the assistance of Artificial Intelligence (AI) is becoming indistinguishable from reality.

Could you spot an AI-manipulated fake?

Take a look at these two images of InvestigateTV’s Kristin Crowley.

Do you see the difference?

While Kristin was actually standing in Los Angeles when this footage was captured, the original image on the right was manipulated using an AI tool to create the new image on the left. This AI image manipulation allowed Kristin to look as if she were reporting from directly in front of the well-known Hollywood sign.

Famous places and people are becoming easier to fake as AI technology advances each year.

Misinformation can spread from something seemingly small, such as a fake story or photo of a major news event, perhaps paired with a fabricated image of a famous person.

Bunnies jumping on a trampoline? Celebrities helping during the Texas floods? Are these videos harmless or not?

This online deception can seem harmless at first, like with AI-generated videos of bunnies jumping on trampolines, but what if the posts don’t stop at that point? The result can be scams, community discord or chaos spread via false information.

Some content creators are using their powers for good. Instead of deception tactics, they are turning the tables on the misinformers to educate people on how easy it can be to get duped.

Our investigators traveled to California to meet with two creators who are equipping people with the skills to spot fakes by creating their own AI-powered content on popular social media apps and video streaming sites.

AI or Real?

When our investigators met Madeline Salazar, she was on a grocery run in San Diego.

Her shopping list was short, and the items were not being purchased for dinner.

Meet Madeline Salazar, a content creator making videos that show off the power of AI

Salazar creates videos showing off the power of AI to her social media followers.

In a popular example, she is holding what looks like two purses. Then, in the reveal, the purse on the left is actually a potato, and the purse on the right is the real thing.

“The potato was just a very easy thing to hold and to motion track, and people thought it was really funny,” Salazar said with a laugh. “So, I’ve leaned into it, and now I pretty much have a potato with me wherever I go.”

The potato idea seems to be working for her videos.

When asked how successful the AI or Real series has been, she said views reach into the millions, with some videos drawing as many as 15 million.

Salazar believes the high view counts show her viewers have a strong appetite to learn more about AI.

Salazar walked our team through programs she uses to create her videos with the help of AI, including Adobe Firefly and Photoshop.

“I take a still from my video and bring that into Photoshop, where I will generate whatever AI image I would like to show,” she described. “I can lay that graphic on top of my footage, and you won’t be able to tell the difference.”

While she thinks an average, inexperienced person would find creating videos like hers challenging, Salazar knows the intelligence behind the technology is only going to advance.

“The average person probably can’t create an incredible, indecipherable AI-generated video and post it, but it is progressing every day.”

Salazar hears concerns about her content almost daily.

“A lot of people are very upset about the fact that I’m teaching other people how to do this,” she said. “But in my opinion, I am showing you how to be aware of it.”

She believes much of the pushback stems from creators who exploit the technology, since AI-manipulated content can rack up huge view counts in short periods.

“They know they can go viral by making a generative AI video of a natural disaster in someone’s city, and that’s terrifying.”

“It’s Not Coming. It’s Here.”

Travis Bible has a background in film as a director and an editor. He created a public service announcement video for his parents to show them exactly what AI could do.

That video generated more than 2 million likes on Instagram.

“I’m trying to create AI awareness,” Bible told InvestigateTV. “I don’t want people to be caught off guard, kind of like the way I was, with how far this technology has come. It’s not coming, it’s here.”

Meet Content Creator Travis Bible, who believes AI is only going to get more powerful as technology gets better

In some ways, Bible says AI has a long way to go.

“I did a video comparing all-time great human acting performances with AI attempts to replicate them, and it was a train wreck. I did it to remind people how important these actors and humans are on a creative level.”

Bible walked our team through some common AI flubs people can look out for to help spot otherwise convincing AI.

“We could go through some things, how it looks too professional, like there’s some blurring around the edges. If you look at the background, weird things are happening, but a year from now, that might not be true anymore. That’s how fast it’s advancing.”

Is the person in the video that Travis Bible creates really our reporter, Kristin Crowley, or is it just AI? (Photojournalist: Scotty Smith)

Better Tech, Bigger Risk

The Department of Homeland Security released a report warning that deepfakes pose threats to national security, personal finances, and more.


Resemble AI, a voice technology company, issued a report this year detailing that AI-powered deepfakes caused more than $200 million in financial losses in just the first quarter of 2025.

The report analyzed over 160 documented deepfake incidents occurring between January and April 2025.

The report’s key findings included:

  • Women, children and educational institutions face growing deepfake threats
  • Criminal exploitation has evolved beyond scams and now includes targeted harassment and blackmail schemes
  • Incidents are impacting both developed and developing nations across the world


Common Sense and Critical Thinking

Travis Bible says critical thinking and common sense can be much more trustworthy than our eyes and ears.

“Do your best to use common sense on if this would actually be happening. Because the line between what’s AI and what’s real and how you can tell, it’s getting very blurred.”

In some cases, even deepfakes labeled as AI have still tricked people. It recently happened to television host Chris Cuomo.


He reposted a deepfake video of Congresswoman Alexandria Ocasio-Cortez, even though the video was already labeled as an AI creation.

Ocasio-Cortez responded on X, reiterating that the video was a deepfake.

Cuomo acknowledged his error and deleted the post.

Bible believes these AI fakes, from those that seem harmless to those disseminating misinformation, can create lasting repercussions.

“You’re starting to erode what’s true and what’s fake, and I think just little things like that can kind of, those innocuous things can kind of lead to more and more of this stuff and people not knowing what to believe.”

That’s why Travis Bible and Madeline Salazar continue creating their examples, showing the “real side” of artificial reality.

Put Your Skills to the Test

After the lessons Madeline Salazar and Travis Bible shared in this story, can you spot the AI-manipulated images in our new interactive digital game below? You can play this game and more at our mAnIpulated series homepage.

This is the first story of InvestigateTV’s mAnIpulated: A mIsinformAtIon nAtIon series. Check the series homepage for updates with the latest reporting on AI-related topics from our local Gray Media stations throughout August and September.

Watch & Read More from InvestigateTV

Sign up for the InvestigateTV Newsletter, subscribe to our YouTube channel and much more with the toolbox below featuring our go-to links.






Tools & Platforms

Global movement to protect kids online fuels a wave of AI safety tech



Spotify, Reddit and X have all implemented age assurance systems to prevent children from being exposed to inappropriate content.

STR | Nurphoto via Getty Images

The global online safety movement has paved the way for a number of artificial intelligence-powered products designed to keep kids away from potentially harmful things on the internet.

In the U.K., a new piece of legislation called the Online Safety Act imposes a duty of care on tech companies to protect children from age-inappropriate material, hate speech, bullying, fraud, and child sexual abuse material (CSAM). Companies can face fines as high as 10% of their global annual revenue for breaches.

Further afield, landmark regulations aimed at keeping kids safer online are swiftly making their way through the U.S. Congress. One bill, known as the Kids Online Safety Act, would make social media platforms responsible for preventing their products from harming children, similar to the Online Safety Act in the U.K.

This push from regulators is increasingly causing something of a rethink at several major tech players. Pornhub and other online pornography giants are blocking all users from accessing their sites unless they go through an age verification system.

Porn sites haven’t been alone in taking action to verify users’ ages, though. Spotify, Reddit and X have all implemented age assurance systems to prevent children from being exposed to sexually explicit or inappropriate materials.

Such regulatory measures have been met with criticisms from the tech industry — not least due to concerns that they may infringe internet users’ privacy.

Digital ID tech flourishing

At the heart of all these age verification measures is one company: Yoti.

Yoti produces technology that captures selfies and uses artificial intelligence to verify someone’s age based on their facial features. The firm says its AI algorithm, which has been trained on millions of faces, can estimate the age of 13- to 24-year-olds to within two years.

The firm has previously partnered with the U.K.’s Post Office and is hoping to capitalize on the broader push for government-issued digital ID cards in the U.K. Yoti is not alone in the identity verification software space — other players include Entrust, Persona and iProov. However, the company has been the most prominent provider of age assurance services under the new U.K. regime.

“There is a race on for child safety technology and service providers to earn trust and confidence,” Pete Kenyon, a partner at law firm Cripps, told CNBC. “The new requirements have undoubtedly created a new marketplace and providers are scrambling to make their mark.”

Yet the rise of digital identification methods has also led to concerns over privacy infringements and possible data breaches.

“Substantial privacy issues arise with this technology being used,” said Kenyon. “Trust is key and will only be earned by the use of stringent and effective technical and governance procedures adopted in order to keep personal data safe.”

Rani Govender, policy manager for child safety online at British child protection charity NSPCC, said that the technology “already exists” to authenticate users without compromising their privacy.

“Tech companies must make deliberate, ethical choices by choosing solutions that protect children from harm without compromising the privacy of users,” she told CNBC. “The best technology doesn’t just tick boxes; it builds trust.”

Child-safe smartphones

The wave of new tech emerging to prevent children from being exposed to online harms isn’t just limited to software.

Earlier this month, Finnish phone maker HMD Global launched a new smartphone called the Fusion X1, which uses AI to stop kids from filming or sharing nude content or viewing sexually explicit images from the camera, screen and across all apps.

The phone uses technology developed by SafeToNet, a British cybersecurity firm focused on child safety.

Finnish phone maker HMD Global’s new smartphone uses AI to prevent children from being exposed to nude or sexually explicit images.

HMD Global

“We believe more needs to be done in this space,” James Robinson, vice president of family vertical at HMD, told CNBC. He stressed that HMD came up with the concept for children’s devices prior to the Online Safety Act entering into force, but noted it was “great to see the government taking greater steps.”

The release of HMD’s child-friendly phone follows heightened momentum in the “smartphone-free” movement, which encourages parents to avoid letting their children own a smartphone.

Going forward, the NSPCC’s Govender says that child safety will become a significant priority for digital behemoths such as Google and Meta.

The tech giants have for years been accused of worsening mental health in children and teens through the rise of online bullying and social media addiction. In response, they argue they’ve taken steps to address these issues through increased parental controls and privacy features.

“For years, tech giants have stood by while harmful and illegal content spread across their platforms, leaving young people exposed and vulnerable,” she told CNBC. “That era of neglect must end.”





Tools & Platforms

Meta to add new AI safeguards after report raises teen safety concerns



FILE PHOTO: Meta is adding new teenager safeguards to its AI products by training systems to avoid flirty conversations and discussions of self-harm or suicide with minors.
| Photo Credit: Reuters

Meta is adding new teenager safeguards to its artificial intelligence products by training systems to avoid flirty conversations and discussions of self-harm or suicide with minors, and by temporarily limiting their access to certain AI characters.

A Reuters exclusive report earlier in August revealed how Meta allowed provocative chatbot behavior, including letting bots engage in “conversations that are romantic or sensual.”

Meta spokesperson Andy Stone said in an email on Friday that the company is taking these temporary steps while developing longer-term measures to ensure teens have safe, age-appropriate AI experiences.

Stone said the safeguards are already being rolled out and will be adjusted over time as the company refines its systems.

Meta’s AI policies came under intense scrutiny and backlash after the Reuters report.

U.S. Senator Josh Hawley launched a probe into the Facebook parent’s AI policies earlier this month, demanding documents on rules that allowed its chatbots to interact inappropriately with minors.

Both Democrats and Republicans in Congress have expressed alarm over the rules outlined in an internal Meta document which was first reviewed by Reuters.

Meta had confirmed the document’s authenticity, but said that after receiving questions earlier this month from Reuters, the company removed portions that stated it was permissible for chatbots to flirt and engage in romantic role play with children.

“The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed,” Stone said earlier this month.





Tools & Platforms

The Dawn of Human–AI Synergy




In every era of human civilization, science and technology have acted as the fuel of progress. From the invention of the wheel to the discovery of electricity, and from the first printing press to the age of the internet, technology has always pushed society forward. Yet, in the 21st century, we find ourselves at the edge of something even more profound—a future where human intelligence and artificial intelligence converge to reshape how we live, work, and even think.

This is not a story of distant centuries or futuristic fantasy. It is unfolding now, in real time, around us. Artificial Intelligence (AI), biotechnology, robotics, space exploration, and quantum computing are no longer dreams on paper; they are living realities with the potential to redefine what it means to be human.

AI: From Tools to Partners

Only a few decades ago, computers were seen as sophisticated calculators. Today, AI systems are generating music, diagnosing diseases, writing novels, and even driving cars. What makes this revolutionary is not just the speed of computation, but the ability of machines to *learn* and *adapt*.

Consider healthcare: AI-powered systems are now able to detect cancers in their earliest stages with accuracy that surpasses human doctors. In agriculture, AI drones are analyzing soil and weather patterns to guide farmers in planting crops more efficiently. In creative industries, algorithms are designing clothes, painting art, and even composing film scores.

The line between man and machine is slowly fading. Instead of replacing humans, the most successful innovations are those where AI works *with* us, not against us. This partnership opens the door to a future where tasks once thought impossible become routine.

Biotechnology: Editing Life Itself

Perhaps the most striking frontier of science today is biotechnology. With CRISPR gene-editing technology, scientists are rewriting the code of life. Genetic disorders that once doomed generations—like sickle-cell anemia or Huntington’s disease—may one day vanish from humanity’s story.

But beyond curing illness, biotechnology raises deeper ethical and philosophical questions. If we can design stronger, smarter, or more resilient humans, should we? Where is the line between medicine and enhancement?

At the same time, biotechnology is revolutionizing food production. Lab-grown meat and genetically engineered crops promise to feed billions sustainably, without exhausting our planet’s resources. The same tools that can design cures for rare diseases might also prevent global hunger.

Space Exploration: Humanity Beyond Earth

For centuries, the night sky has been a canvas for human imagination. Today, it is becoming our next great frontier. Private companies like SpaceX and Blue Origin are competing with national space agencies to make space travel more affordable and routine. Mars is no longer just a dream in science fiction novels; it is a target for colonization within the next few decades.

Space exploration is not merely about adventure. It is about survival. With climate change, overpopulation, and natural resource depletion threatening our planet, looking beyond Earth may one day be essential. Mining asteroids, building lunar bases, and developing interplanetary habitats could secure the future of our species.

And yet, the universe is not only a resource but a mystery. The search for extraterrestrial life, the study of black holes, and the pursuit of understanding dark matter remind us that science is not just about solving problems—it is about expanding our horizons.

Quantum Computing: The New Revolution

If AI is about intelligence and biotechnology about life, then quantum computing is about the very fabric of reality. Unlike traditional computers that process information in bits (0 or 1), quantum computers use *qubits* that can exist in multiple states simultaneously.

This gives quantum computers the potential to solve problems that would take classical supercomputers millions of years. From modeling new medicines to simulating climate systems and cracking complex codes, quantum technology could transform every industry.
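The bit-versus-qubit distinction above can be sketched in a few lines of ordinary Python. This is only an illustrative toy, not real quantum hardware or a quantum SDK: it tracks the two amplitudes of a single qubit and applies a Hadamard gate, the standard operation that turns a definite 0 into an equal superposition of 0 and 1.

```python
import math

def apply_gate(gate, state):
    """Multiply a 2x2 gate matrix by a 2-element state vector."""
    return [
        gate[0][0] * state[0] + gate[0][1] * state[1],
        gate[1][0] * state[0] + gate[1][1] * state[1],
    ]

def probabilities(state):
    """Born rule: a measurement outcome's probability is its amplitude squared."""
    return [abs(a) ** 2 for a in state]

# Hadamard gate: sends a definite bit into an equal superposition.
H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

ket0 = [1.0, 0.0]                 # a classical 0, written as the qubit state |0>
superposed = apply_gate(H, ket0)  # amplitudes ~[0.707, 0.707]
print(probabilities(superposed))  # measuring gives 0 or 1 with equal probability
```

A classical bit can only ever be the list `[1, 0]` or `[0, 1]`; the qubit after the Hadamard gate is genuinely in both states at once until measured, which is what lets quantum machines explore many possibilities in parallel.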

Still in its infancy, quantum computing is like electricity in the 19th century—full of promise, waiting for its Edison or Tesla moment.

Challenges and Responsibilities

With every leap in technology comes responsibility. AI raises questions about privacy, job displacement, and bias. Biotechnology forces us to confront moral dilemmas about altering human life. Space exploration challenges us to unite globally for missions larger than any one nation. Quantum computing raises security risks that could upend global cybersecurity.

The danger is not the technology itself, but how humanity chooses to use it. Fire can warm a home or burn it down. Nuclear fission can power cities or destroy them. Likewise, the tools of the future will test our wisdom as much as our creativity.

Conclusion: A Shared Future

Science and technology are no longer separate subjects confined to laboratories. They are becoming the foundation of everyday life and the blueprint of tomorrow. What we build today—our machines, our medicines, our codes, and our ethics—will echo for generations.

The future will not be defined by whether humans or machines are smarter, but by how we choose to collaborate. The dawn of human–AI synergy is here. It is not about replacing humanity but about enhancing it, pushing us toward possibilities our ancestors could only dream of.

In this new age, the most important invention will not be a machine, a rocket, or a genome. It will be wisdom—the wisdom to use our tools not just to survive, but to thrive, to explore, and to create a future worthy of the human spirit.




