
Tools & Platforms

AI Clones Are No Longer Science Fiction — They’re Real


Opinions expressed by Entrepreneur contributors are their own.

In a quiet conference room, a startup founder’s digital doppelgänger delivers a pitch to investors, answering questions with the founder’s voice and expertise, even as the real founder is elsewhere.

This scenario is no longer science fiction. A wave of AI personas, “digital twins” and self-replicating agents is emerging, allowing individuals to outsource aspects of themselves to AI. From celebrity coaches to tech icons, these AI-powered avatars promise to scale human presence and productivity in unprecedented ways. Yet they also raise profound questions about identity, authenticity and the very nature of work in a post-human era.

The rise of personal AI personas and digital twins

The concept of a “digital twin” originated in industry, a virtual replica of a physical system for simulation and optimization. Now, it’s being reimagined on a personal level. AI digital twins are dynamic, evolving AI models that replicate a person’s knowledge and grow it over time. Instead of static data, these twins use custom AI models to mirror an individual’s unique perspectives, expertise and even communication style. The goal is a “living, breathing representation” of one’s thought processes.

Crucially, personal AI personas aren’t just parroting facts. They aim to capture how you think and speak. A true digital twin can replicate an individual’s unique perspectives, experiences and knowledge base, assisting with recall, generating insights and even communicating in your own voice.

Related: Digital Twins Are the Future — Here Are 5 Ways to Keep Them Secure While Manufacturing Innovation

Tools enabling “self-replication”

A growing ecosystem of platforms and tools is making AI self-cloning accessible:

  • Personal.ai: Offers a personal language model trained on your content, effectively becoming your memory and voice in digital form. It emphasizes privacy and user control, positioning the AI twin as a secure asset that continuously learns and updates with you.
  • Lindy: A no-code AI agent builder that acts like a personal or business assistant. Lindy allows users to create custom AI “assistants” that integrate with email, calendars, CRM and more.
  • OpenAI’s Custom GPTs: OpenAI’s ChatGPT now lets users build custom GPTs, essentially personal chatbots tuned to a specific persona or knowledge base. With a ChatGPT Plus account, you can create a bespoke AI and share it in a GPT marketplace.
  • ElevenLabs and Synthesia: Provide ultra-realistic voice and video cloning, enabling AI personas to speak and appear as their human counterparts. Reid Hoffman used these tools to create a deepfake avatar of himself for an AI interview experiment.

Early adopters: From gurus to CEOs

This once-futuristic concept is now a reality embraced by high-profile leaders:

  • Tony Robbins launched “Tony’s AI Twin,” an interactive coach built by Steno.ai using ElevenLabs voice cloning. It delivers advice drawn from his decades of work and is accessible 24/7.
  • Deepak Chopra unveiled DigitalDeepak.ai, an AI trained on his teachings to offer guidance on spirituality and well-being.
  • Reid Hoffman created “Reid AI,” a custom GPT trained on 20 years of his thinking, and used a digital avatar to appear in interviews and explore the ethical limits of this tech.
  • Fan-made projects like “Ask Naval” offer an AI version of Naval Ravikant, trained unofficially on his tweets, interviews and writings.

The allure: Outsourcing and scaling the self

Why are leaders drawn to AI personas? The allure is clear. AI twins offer the promise of infinite reach, an ability to engage thousands simultaneously, attend multiple meetings or provide mentorship across time zones. They create an entirely new monetization model, where personal knowledge and brand become a scalable product. Robbins’ team, for instance, notes that his AI twin has opened a new revenue stream with no additional time investment. Productivity gains are significant, as digital twins take over routine tasks, freeing founders to focus on creative or high-value work. Additionally, trained AI twins can serve as cognitive memory tools, surfacing forgotten insights, maintaining brand consistency and supporting rapid decision-making.

Zoom CEO Eric Yuan has even suggested that AI digital twins could eventually be so effective that they reduce the workweek to three days. For visionary leaders, AI personas are not just tools; they’re multipliers of influence, knowledge and time.

Risks and ethical questions

As a lawyer, I always ask: What are the risks, and what are the ethics behind the product? This frontier is not without peril:

  • Authenticity: Audiences may struggle to trust whether communication comes from the person or their AI. Transparency and fidelity are key.
  • Misinformation: AI personas must be tightly governed to avoid reputational or legal risk.
  • Privacy: Ownership of one’s digital likeness is a complex, emerging legal issue.
  • Human skill erosion: Over-reliance on AI might dull the very cognitive and interpersonal skills that define great leaders.

Related: Why Every Entrepreneur Must Prioritize Ethical AI — Now

The post-human edge

Founders are no longer just building products; they’re becoming platforms. The real edge lies in knowing what to scale and what to keep human. An AI persona might extend your influence, but it’s your irreplaceable presence, empathy and judgment that remain your ultimate value.

In a world where anyone can clone their voice and replicate their insights, the differentiator is not your scalability, but your discernment. The future belongs to those who know when to outsource — and when to show up.

Founders and businesses are entering a post-human business era. Let’s build it wisely.




Global movement to protect kids online fuels a wave of AI safety tech


Spotify, Reddit and X have all implemented age assurance systems to prevent children from being exposed to inappropriate content.


The global online safety movement has paved the way for a number of artificial intelligence-powered products designed to keep kids away from potentially harmful things on the internet.

In the U.K., a new piece of legislation called the Online Safety Act imposes a duty of care on tech companies to protect children from age-inappropriate material, hate speech, bullying, fraud and child sexual abuse material (CSAM). Companies can face fines as high as 10% of their global annual revenue for breaches.

Further afield, landmark regulations aimed at keeping kids safer online are swiftly making their way through the U.S. Congress. One bill, known as the Kids Online Safety Act, would make social media platforms liable for preventing their products from harming children — similar to the Online Safety Act in the U.K.

This regulatory push is prompting a rethink at several major tech players. Pornhub and other online pornography giants are blocking all users from accessing their sites unless they go through an age verification system.

Porn sites haven’t been alone in taking action to verify users’ ages, though. Spotify, Reddit and X have all implemented age assurance systems to prevent children from being exposed to sexually explicit or inappropriate materials.

Such regulatory measures have been met with criticisms from the tech industry — not least due to concerns that they may infringe internet users’ privacy.

Digital ID tech flourishing

At the heart of all these age verification measures is one company: Yoti.

Yoti produces technology that captures selfies and uses artificial intelligence to verify someone’s age based on their facial features. The firm says its AI algorithm, which has been trained on millions of faces, can estimate the age of 13- to 24-year-olds to within two years.

The firm has previously partnered with the U.K.’s Post Office and is hoping to capitalize on the broader push for government-issued digital ID cards in the U.K. Yoti is not alone in the identity verification software space — other players include Entrust, Persona and iProov. However, the company has been the most prominent provider of age assurance services under the new U.K. regime.

“There is a race on for child safety technology and service providers to earn trust and confidence,” Pete Kenyon, a partner at law firm Cripps, told CNBC. “The new requirements have undoubtedly created a new marketplace and providers are scrambling to make their mark.”

Yet the rise of digital identification methods has also led to concerns over privacy infringements and possible data breaches.

“Substantial privacy issues arise with this technology being used,” said Kenyon. “Trust is key and will only be earned by the use of stringent and effective technical and governance procedures adopted in order to keep personal data safe.”

Rani Govender, policy manager for child safety online at British child protection charity NSPCC, said that the technology “already exists” to authenticate users without compromising their privacy.

“Tech companies must make deliberate, ethical choices by choosing solutions that protect children from harm without compromising the privacy of users,” she told CNBC. “The best technology doesn’t just tick boxes; it builds trust.”

Child-safe smartphones

The wave of new tech emerging to prevent children from being exposed to online harms isn’t just limited to software.

Earlier this month, Finnish phone maker HMD Global launched a new smartphone called the Fusion X1, which uses AI to stop kids from filming or sharing nude content or viewing sexually explicit images from the camera, screen and across all apps.

The phone uses technology developed by SafeToNet, a British cybersecurity firm focused on child safety.

Finnish phone maker HMD Global’s new smartphone uses AI to prevent children from being exposed to nude or sexually explicit images.

HMD Global

“We believe more needs to be done in this space,” James Robinson, vice president of family vertical at HMD, told CNBC. He stressed that HMD came up with the concept for children’s devices prior to the Online Safety Act entering into force, but noted it was “great to see the government taking greater steps.”

The release of HMD’s child-friendly phone follows heightened momentum in the “smartphone-free” movement, which encourages parents to avoid letting their children own a smartphone.

Going forward, the NSPCC’s Govender says that child safety will become a significant priority for digital behemoths such as Google and Meta.

The tech giants have for years been accused of worsening mental health in children and teens due to the rise of online bullying and social media addiction. They in turn argue that they’ve taken steps to address these issues through increased parental controls and privacy features.

“For years, tech giants have stood by while harmful and illegal content spread across their platforms, leaving young people exposed and vulnerable,” she told CNBC. “That era of neglect must end.”


Meta to add new AI safeguards after report raises teen safety concerns



Meta is adding new teenager safeguards to its artificial intelligence products by training systems to avoid flirty conversations and discussions of self-harm or suicide with minors, and by temporarily limiting their access to certain AI characters.

A Reuters exclusive report earlier in August revealed how Meta allowed provocative chatbot behavior, including letting bots engage in “conversations that are romantic or sensual.”

Meta spokesperson Andy Stone said in an email on Friday that the company is taking these temporary steps while developing longer-term measures to ensure teens have safe, age-appropriate AI experiences.

Stone said the safeguards are already being rolled out and will be adjusted over time as the company refines its systems.

Meta’s AI policies came under intense scrutiny and backlash after the Reuters report.

U.S. Senator Josh Hawley launched a probe into the Facebook parent’s AI policies earlier this month, demanding documents on rules that allowed its chatbots to interact inappropriately with minors.

Both Democrats and Republicans in Congress have expressed alarm over the rules outlined in an internal Meta document which was first reviewed by Reuters.

Meta had confirmed the document’s authenticity, but said that after receiving questions earlier this month from Reuters, the company removed portions that stated it was permissible for chatbots to flirt and engage in romantic role play with children.

“The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed,” Stone said earlier this month.
