AI Insights

Meta reportedly allowed unauthorized celebrity AI chatbots on its services

Meta hosted several AI chatbots bearing the names and likenesses of celebrities without their permission, according to Reuters. The unauthorized chatbots Reuters discovered during its investigation were based on Taylor Swift, Selena Gomez, Anne Hathaway and Scarlett Johansson, and were available on Facebook, Instagram and WhatsApp. At least one was based on an underage celebrity and allowed the tester to generate a lifelike shirtless image of the real person. In conversation, the chatbots also apparently kept insisting that they were the real people they were based on. While several of the chatbots were made by third-party users with Meta’s tools, Reuters unearthed at least three that were made by a product lead in the company’s generative AI division.

Some of the chatbots created by the product lead were based on Taylor Swift; they responded to Reuters’ tester in a very flirty manner, even inviting them to the real Swift’s home in Nashville. “Do you like blonde girls, Jeff?,” the chatbot reportedly asked when told that the tester was single. “Maybe I’m suggesting that we write a love story… about you and a certain blonde singer. Want that?” Meta told Reuters that it prohibits “direct impersonation” of celebrities, but that such bots are acceptable as long as they’re labeled as parodies. The news organization said some of the celebrity chatbots it found weren’t labeled as such. Meta reportedly deleted around a dozen celebrity bots, both labeled and unlabeled as “parody,” before the story was published.

The company told Reuters that the product lead created the celebrity bots only for testing, but the news organization found that they were widely available: users interacted with them more than 10 million times. Meta spokesperson Andy Stone told Reuters that Meta’s tools shouldn’t have been able to create sensitive images of celebrities and attributed the lapse to the company’s failure to enforce its own policies.

This isn’t the first issue that’s popped up concerning Meta’s AI chatbots. Both Reuters and the Wall Street Journal previously reported that they were able to engage in sexual conversations with minors. The US Attorneys General of 44 jurisdictions recently warned AI companies in a letter that they “will be held accountable” for child safety failures, singling out Meta and using its issues to “provide an instructive opportunity.”




No, AI Is Not Better Than a Good Doctor


Search the internet and you will find countless testimonials of individuals using AI to get diagnoses their doctors missed. And while it is important for individuals to take ownership of their healthcare and use all available resources, it is just as important to understand the process behind an AI diagnosis.

If you ask an AI to figure out what ails you from a list of symptoms, it uses mathematical probability to calculate, word by word, the sequence most likely to follow your specific prompt. The AI has no intrinsic or learned understanding of what “body,” “illness,” “pain,” or “disease” mean. Concepts that are practically meaningful to humans are, to the bot, just letters that frequently appeared alongside other letters in its training set.
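That word-by-word probability calculation can be sketched in miniature. The tiny score table below is an invented stand-in for a trained network’s output, chosen only to illustrate the mechanism; a real model computes such scores from billions of parameters, and none of the words here carry any meaning for the program:

```python
import math

# Toy "model": a hand-built table of scores (logits) for possible next
# words after a given context. The contexts, words and numbers are
# invented for illustration -- they are not from any real system.
LOGITS_AFTER = {
    ("symptom:", "headache"): {"migraine": 2.1, "tension": 1.7, "tumor": -0.5},
}

def softmax(scores):
    """Turn raw scores into a probability distribution over words."""
    m = max(scores.values())  # subtract the max for numerical stability
    exp = {word: math.exp(s - m) for word, s in scores.items()}
    total = sum(exp.values())
    return {word: v / total for word, v in exp.items()}

def next_token(context):
    """Pick the single most probable continuation -- pure arithmetic,
    with no notion of what any of the words refer to."""
    probs = softmax(LOGITS_AFTER[context])
    return max(probs, key=probs.get)

print(next_token(("symptom:", "headache")))  # prints "migraine"
```

The point of the sketch is that the “diagnosis” falls out of comparing numbers attached to strings; swap the scores and the same code confidently outputs a different word.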

New research on AI’s lack of medical reasoning

Recently, a team of researchers set out to investigate whether AIs that achieved near-perfect accuracy on medical benchmarks like MedQA actually reasoned through medical problems or simply exploited statistical patterns in their training data. If doctors and patients more widely rely on AI tools for diagnosis, it becomes critical to understand the capability of AI when faced with novel clinical scenarios.

The researchers took 100 questions from MedQA, a standard dataset of multiple-choice medical questions collected from professional medical board exams, and replaced the original correct answer choice with “None of the other answers.” If the AI was simply pattern-matching to its training data, the change should prove devastating to its accuracy. On the other hand, if there was reasoning behind its answers the negative effect should be minimal.
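The manipulation can be sketched as follows. The record fields and the toy question are illustrative assumptions, not the actual MedQA schema, and the “model” here is a deliberately naive pattern-matcher that only recalls answer text it has seen before:

```python
def perturb(question):
    # Replace the text of the correct choice with the catch-all option,
    # mirroring the study's manipulation. Field names ("answer",
    # "choices") are assumed for this sketch, not the real MedQA format.
    q = {**question, "choices": dict(question["choices"])}
    q["choices"][q["answer"]] = "None of the other answers"
    return q

def pattern_matcher(question, memorized):
    # Stand-in for a model that matches familiar answer wording from its
    # training data, with no medical reasoning at all.
    for label, text in question["choices"].items():
        if text in memorized:
            return label
    return "A"  # no familiar wording found: fall back to a blind guess

def accuracy(model, questions):
    hits = sum(1 for q in questions if model(q) == q["answer"])
    return hits / len(questions)

# One toy question with an invented answer key.
original = [{"question": "...", "answer": "B",
             "choices": {"A": "tension headache", "B": "migraine",
                         "C": "cluster headache", "D": "sinusitis"}}]
memorized = {"migraine"}
model = lambda q: pattern_matcher(q, memorized)

print(accuracy(model, original))                        # 1.0: familiar wording present
print(accuracy(model, [perturb(q) for q in original]))  # 0.0: the memorized text is gone
```

A pattern-matcher scores perfectly while the memorized wording is present and collapses the moment it is removed, which is exactly the signature the researchers were probing for.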

Sure enough, they found that when an AI faced questions that deviated from the familiar answer patterns it was trained on, its accuracy declined substantially, from 80% to 42%. This is because today’s AIs are still just probability calculators, not artful thinkers.

Artful medical practitioners see, hear, feel, and recognize medical conditions in ways they are often not consciously aware of. While an AI would be thrown off by an unfamiliar description of symptoms, good doctors listen to the specific word choices of patients and try to understand. They appreciate how societal factors can impact health, trusting both their own intuitions and those of the patient. They pay close attention to all the presenting symptoms in an open-minded manner, as opposed to algorithmically placing the patient in a generic diagnostic box.

Healing is more than a single task

And yet, algorithmic supremacists are as confident as ever in their belief that human healthcare providers will be replaced by machines. In 2016, at the Machine Learning and Market for Intelligence Conference in my hometown of Toronto, Geoffrey Hinton took the mic to confidently assert: “If you work as a radiologist, you are like Wile E. Coyote in the cartoon. You’re already over the edge of the cliff, but you haven’t yet looked down … People should stop training radiologists now. It’s just completely obvious that in five years deep learning is going to do better than radiologists.”

Seven years later, well past the five-year deadline, Kevin Fischer, CEO of Open Souls, attacked Hinton’s erroneous AI prediction, explaining how tech boosters home in on a single behavior against some task and then extrapolate broader implications based on that single task alone. The reality is that reducing any job, especially a wildly complex job that requires a decade of training, to a handful of tasks is absurd.

As Fischer explains, radiologists have a 3D world model of the brain and its physical dynamics in their head, which they use when interpreting the results of a scan. An AI tasked with analysis is simply performing 2D pattern recognition. Furthermore, radiologists have a host of grounded models they use to make determinations, and, when they think artfully, one of the most important is whether something “feels” off. A large part of their job is communicating their findings with fellow human physicians. Further, human radiologists need to see only a single example of a rare and obscure condition to both remember it and identify it in the future, unlike algorithms, which struggle with what to do with statistical outliers.

So, by all means, use whatever tools you can access to help your wellness. But be mindful of the difference between a medical calculator and an artful thinker.






Escape from Tarkov is finally coming to Steam ‘soon,’ developer says


Following news that Escape from Tarkov is escaping its perpetual beta, the pioneering extraction shooter is also about to make its debut on Steam. Nikita Buyanov, head of the Battlestate Games studio that developed Escape from Tarkov, confirmed on X that the game’s Steam page “will be available soon,” only teasing that the full details will come later.

Buyanov’s confirmation comes less than a day after the developer posted a GIF on X of a man spraying steam from an iron. Earlier this month, Buyanov revealed on X that the looter shooter will get its 1.0 release on November 15, 2025, more than eight years after the beta opened up to players in July 2017, and that the studio has plans to port it to consoles. The Steam page for Escape from Tarkov isn’t live yet, and with only vague details to go off of, longtime fans already have burning questions. Most importantly, existing players are eager to know if they will have to buy the game again on Steam and how this change will affect the ongoing cheating problem.

While we don’t have any answers yet, Battlestate Games recently went into damage control mode when it revealed the Unheard Edition of the game that costs $250 and includes a new PvE mode. This move irked longstanding players who previously purchased another premium edition of the game, called the Edge of Darkness, which promised access to all future DLCs. The controversy boiled down to owners of the Edge of Darkness edition claiming they should have access to the new content, but the studio argued that it isn’t classified as DLC. In the end, Buyanov apologized for the debacle and promised the PvE mode would be available for anyone who purchased the Edge of Darkness package.




Soft skills to survival skills: How to prepare for the ‘job apocalypse’ due to AI


The rise of artificial intelligence is already reshaping the global workforce, with experts warning that the ability to build skills such as judgment, empathy, adaptability and digital literacy will be essential to avoid being left behind.

As the technology evolves in waves, from automation to generative AI to agentic systems and eventually artificial general intelligence, millions risk losing not only their income but also their sense of purpose and identity.

Maha Hosain Aziz, professor at New York University and a member of the World Economic Forum’s Global Foresight Network, warned that the world rarely considers the broader social consequences of this disruption.

“We rarely connect the dots to what happens next – when millions lose not just income, but the anchor that work provides,” she wrote on the World Economic Forum’s platform.

“What happens when our education or years of work experience don’t matter as much any more? Many may face a grim choice: scramble to ‘learn AI’ to stay relevant – or drift into a new class, uncertain where they can fit in the AI economy.”

Ms Aziz outlined four waves of disruption, including traditional automation replacing routine jobs and generative AI transforming content creation and knowledge work.

Agentic AI is taking on multi-step tasks in areas such as HR, market research and IT, with the potential to replace midlevel managers.

By 2030, the world could see the rise of artificial general intelligence capable of most cognitive tasks.

“Each wave will displace another segment of the global working population,” Ms Aziz said.

“The challenge isn’t just how to re-employ people, but how to help them adapt to a future where their previous skills or identities may no longer be relevant. In a way, we’ve seen this before.”