Tools & Platforms
People left ‘bamboozled’ after video of woman leaves them questioning reality

In a world where artificial intelligence is becoming an increasingly integral part of everyday life, the question of whether we can tell the difference between real life and AI seems more important than ever.
If one tech expert’s videos are anything to go by, however, most of us definitely can’t.
AI expert and content creator Madeline Salazar regularly leaves people feeling like their ‘brain is broken’ with her ‘Real or AI’ clips on TikTok and YouTube.
Her latest TikTok video shows Madeline cartwheeling over a wall that doesn’t exist, before turning a stunning lime green purse into a potato. It truly is mind-blowing, and a little bit terrifying at the same time.
Reckon you’d be able to tell the difference? You can have a go here…
Madeline, who posts under the handle @MadSal on TikTok, points to various items and asks her followers whether they think they’re real or AI. The majority of the time, they’re AI.
One TikTok user commented: “We’re so cooked,” while another wrote: “I am definitely getting scammed when I’m older.”
Meanwhile, others were quick to comment saying they were starting to be able to spot the difference after watching a number of Madeline’s other videos.
“This is the first video I’ve nailed, you’re training us,” one commented, while another added: “I’m getting better and then the next video I’m not.”
Madeline asked her followers which door was real (@MadSal/TikTok)
One follower raised their own concerns about the impact of AI-generated videos and what lawmakers should be doing to protect people from fake content.
“Without clear labelling in every AI generated video, deception and disinformation are at risk,” they wrote.
“A legal requirement for visible AI labelling is urgently needed. Transparency protects the truth in the digital world.”
While there is currently no legal requirement to declare when a video has been AI generated in the UK, platforms such as YouTube do require users to accurately label content when it contains realistic altered media, like deepfakes, voice cloning, or digitally generated scenes that appear lifelike.

Her videos leave people questioning reality (@MadSal/TikTok)
Consumer law also bans misleading practices, meaning AI-generated content cannot be used to trick viewers into believing that what they’re seeing is real.
On her YouTube channel, Madeline teaches her followers how to create their own AI-generated content and incorporate it into videos, as well as how she creates clones and deepfake videos of herself.
In the meantime, you can train yourself on what is and isn’t real by watching her TikToks.
Tools & Platforms
Bridging the AI Regulatory Gap Through Product Liability

Scholar proposes applying product liability principles to strengthen AI regulation.
In a world where artificial intelligence (AI) is evolving at an exponential pace, its presence steadily reshapes relationships and risks. Although some actors can abuse AI technology to harm others, other AI systems can cause harm without any malicious human intent. Individuals have reported forming deep emotional attachments to AI chatbots, sometimes perceiving them as real-life partners. Other chatbots have deviated from their intended purpose in harmful ways, such as a mental health chatbot that, rather than providing emotional support, inadvertently dispensed diet advice.
Despite growing public concern over the safety of AI systems, there is still no global consensus on the best approach to regulate AI.
In a recent article, Catherine M. Sharkey of the New York University School of Law argues that AI regulation should be informed by the government’s own experiences with AI technologies. She explores how lessons from the approach of the Food and Drug Administration (FDA) to approving high-risk medical products, such as AI-driven medical devices that interpret medical scans or diagnose conditions, can help shape AI regulation as a whole.
Traditionally, FDA requires that manufacturers demonstrate the safety and effectiveness of their products before they can enter the market. But as Sharkey explains, this model has proven difficult to apply to adaptive AI technologies that can evolve after deployment—since, under traditional frameworks, each modification would otherwise require a separate marketing submission, an approach ill-suited to systems that continuously learn and change. To ease regulatory hurdles for developers, particularly those whose products update frequently, FDA is moving toward a more flexible framework that relies on post-market surveillance. Sharkey highlights the role of product liability law, a framework traditionally applied to defective physical goods, in addressing accountability where static regulations fail to manage the risks that emerge once AI systems are in use.
FDA has been at the vanguard of efforts to revise its regulatory framework to fit adaptive AI technologies. Sharkey highlights that FDA shifted from a model emphasizing pre-market approval, where products must meet safety and effectiveness standards before entering the market, to one centered on post-market surveillance, which monitors performance and risks after AI medical products are deployed. As this approach evolves, she explains that product liability serves as a crucial deterrent against negligence and harm, particularly during the transition period before a new regulatory framework is established.
Critics argue that regulating AI requires a distinct approach, as no prior technological shift has been as disruptive. Sharkey contends that these critics overlook the strength of existing liability frameworks and their ability to adapt to AI’s evolving nature.
Sharkey argues that crafting pre-market regulations for new technologies can be particularly difficult due to uncertainties about risks.
Further, she notes that regulating emerging technology too early could stifle innovation. Sharkey argues that product liability offers a dynamic alternative because, instead of requiring regulators to predict and prevent every possible AI risk in advance, it allows agencies to identify failures as they occur and adjust regulatory strategies accordingly.
Sharkey emphasizes that FDA’s experience with AI-enabled medical devices serves as a meaningful starting point for developing a product liability framework for AI. In developing such a framework, she draws parallels to the pharmaceutical drug approval process. When a new drug is introduced to the market, its full risks and benefits remain uncertain. She explains that both manufacturers and FDA gather extensive real-world data after a product is deployed. In light of that process, she proposes that the regulatory framework should be adjusted to ensure that manufacturers either return to FDA with updated information, or that tort lawsuits serve as a corrective mechanism. In this way, product liability has an “information-forcing” function, ensuring that manufacturers remain accountable for risks that surface post-approval.
As Sharkey explains, the U.S. Supreme Court’s decision in Riegel v. Medtronic set an important precedent for the intersection of regulation and product liability. The Court ruled that most product liability claims related to high-risk medical devices approved through FDA’s pre-market approval process—a rigorous review that assesses the device’s safety and effectiveness—are preempted. This means that manufacturers are shielded from state-law liability if their devices meet FDA’s safety and effectiveness standards. In contrast, Sharkey explains that under Riegel, devices cleared under FDA’s pre-market notification process do not receive the same immunity, because that pathway does not involve a full safety and effectiveness review but instead allows devices to enter the market if they are deemed “substantially equivalent” to existing ones.
Building on Riegel, Sharkey proposes a model in which courts assess whether a product liability claim raises new risk information that was not considered by FDA in its original risk-benefit analysis at the time of approval. Under this framework, if the claim introduces evidence of risks beyond those previously weighed by the agency, the product liability lawsuit should be allowed to proceed.
Sharkey concludes that the rapid evolution of AI technologies and the difficulty of predicting their risks make crafting comprehensive regulations at the pre-market stage particularly challenging. In this context, she asserts that product liability law becomes essential, serving both as a deterrent and an information-forcing tool. Sharkey’s model holds promise for addressing AI harms in a way that accommodates the adaptive nature of machine learning systems, as illustrated by FDA’s experience with AI-enabled technologies. Instead of creating rules in a vacuum, she argues that regulators could benefit from the feedback loop between tort liability and regulation, which allows for some experimentation with standards before the regulator commits to a formal pre-market rule.
Tools & Platforms
Energy firm deploys groundbreaking AI-powered tech for offshore wind farms: ‘A defining moment’

Groundbreaking innovation is blowing through the UK’s green energy sector. Renewable Energy Magazine reports on an unlikely alignment between AI and wind power, made possible by a new partnership between two offshore wind developers.
BlueFloat Energy and Nadara’s partnership marks the first time an offshore wind developer has deployed a new AI-powered safety and assurance platform, known as WindSafe. Digital technology company Fennex created WindSafe to help ease the challenges of managing offshore wind environments.
Unfortunately, supply chain bottlenecks, approval process delays, and even lawsuits have presented logistical challenges in the industry, according to The Conversation.
With this cloud system, teams can transition from reactive processes to proactive risk management, leveraging real-time intelligence and data.
Nassima Brown, Director at Fennex, told Renewable Energy Magazine, “This collaboration marks a defining moment, not only for Fennex, but for how the offshore wind sector approaches safety in the digital era.”
The rapid development of AI has been controversial, in part because of its strain on the electrical grid and its heavy water consumption, as reported by MIT. However, its role in WindSafe may offset some of that impact by making offshore wind energy growth more efficient.
After all, wind, also known as eolic energy, is a renewable and inexhaustible resource that creates electricity as wind turbines move. There are no dirty fuels or resulting pollution involved.
Another UK partnership to create the world’s largest wind farm will have 277 turbines powering over six million homes. Meanwhile, Taiwan’s Hai Long Offshore Wind Project has already begun delivering electricity to its grid.
Offshore wind investment saves land space, thus preserving native vegetation and animal habitats. With no dirty fuel expenses and more advanced technology like WindSafe, wind power can continue to reduce energy costs.
Wind farms also help local economies with more jobs. According to the U.S. Department of Energy, the wind industry has created over 100,000 American jobs to date. Having more localized energy sources creates more stability and independence, especially for rural communities and small towns.
In addition to wind, offshore innovators are harnessing the power of ocean waves as solar continues to expand. For those interested in slashing energy bills and pollution at home, free resources like EnergySage are helping them receive vetted quotes and save thousands on solar installation costs.
All in all, the continued growth of major energy farm projects and other clean technologies reduces and may one day eliminate the need for climate-heating dirty fuels. Without constant polluting exhaust and its associated respiratory and cardiac health problems, a safer and cleaner future is possible.
Tools & Platforms
How AI will change the CIO role

“What you signed up for five years ago is not what the business expects of you today,” Jonathan Rickard told the NZ CIO Summit in Auckland.
Rickard, chief technology officer for Microsoft CX at Fusion5, says AI has pushed CIOs from back-office tech management into front-line strategic leadership. Their job is no longer about implementation alone but about steering digital transformation across the business.
Today’s CIOs are more involved in business areas such as innovation and revenue-generating initiatives. He says: “It’s no longer a matter of just keeping the lights on.”
CIO is an evolving role
More change is on the way with the CIO role becoming a people-focused, innovation-driven position. There is a strong emphasis on culture and measurable business outcomes.
Rickard quoted research to support this view. Following this year’s Sydney CIO Summit, attendees were asked about their roles. Nearly half, 47 percent, said they focus on innovation and strategy. That’s double the proportion (23 percent) who said the same five years ago.
The survey shows a majority of CIOs (85 percent) are involved in new revenue opportunities and a similar number (84 percent) say they have greater influence on business decisions.
For Rickard, AI is a general-purpose technology that changes everything. He says some skepticism is understandable; only recently CIOs were told they would be leading their businesses into the metaverse by now.
Instead, he compares AI with the steam engine, the internet, and smartphones. Each of these began with hype, which led to a negative reaction before the technologies were accepted and broadly adopted.
Real gains from intensity of use
What made the difference in each case was the intensity of use. Companies that merely swapped out old tools for new ones saw modest gains. Those that embedded the technology deeply into their processes and business models reaped outsized rewards. Rickard says the same will apply to AI: the real benefits will go to organisations that use it imaginatively and pervasively, not just at the margins.
Troy Gerber, CTO of conversational AI and Copilot at Fusion5, says: “In the next two years, 30 percent of our workforce will be digital agents. They’re not going to be replacing people. They’ll be working alongside people.”
Gerber says CIOs will be responsible for integrating these digital agents into the workforce and ensuring they work alongside human employees.
Pressure as expectations increase
This will bring pressure as businesses will expect their AI investments to increase productivity. “The target is to gain two hours per employee every week. It will fall on CIOs to ensure the AI tools are not just implemented, but that they realise the expected gains.”
In addition to dealing with digital employees, he says CIOs will also be asked to help build a talent culture within organisations, one that is ready and able to leverage AI technologies as they are rolled out. Responsibility that once resided in the HR department will shift to the CIO.
CIOs are widely expected to take ownership of innovation within a business. Gerber says the way this works will change.
In the past, CIOs rolled out tools and applied guardrails in an orderly process. Now, innovation bubbles up from staff. In many cases, they might adopt consumer-style AI tools (such as ChatGPT) first before looking for support.
Responsive to employee demands
Gerber says the CIO role here will be to respond to demand from employees and shape secure, scalable platforms around it.
This happened in the past with mobile phones. At first, they were telco or phone manufacturer-controlled. Then the smartphone arrived, and we shifted to the app-store-driven model. AI is going through the same change.
These changes are not abstract. Gerber says Fusion5 is going through the process in its own business. “We think of ourselves as a frontier firm: We live that every day, and we take it to our customers.
“Everyone in our organisation has to be AI literate. It’s mandatory. Staff have to go to our monthly AI training. We had a staff meeting last week where we showed a slide featuring our new joiners, and then we had a slide showing the new AI agents that had joined our organisation.”
He says there were eight of them, and they were featured because they are integral to the business. “We don’t bolt AI onto our solutions; it is part of our strategy.”
Showing staff the newcomers helps them see where the agents can be used in the company’s workflow.
Leading teams that mix humans and AI agents requires new leadership styles. That means listening, asking questions, and encouraging participation, not traditional command-and-control.
Gerber likes to quote Nvidia CEO Jensen Huang: “Success in the AI era belongs to those who can match strategic vision with executional discipline.”