Business Insider Retracts 40 AI-Generated Essays in Fabrication Scandal

In the fast-evolving world of digital media, where content is king and authenticity is increasingly under siege, Business Insider’s recent decision to pull 40 personal essays has sent ripples through the publishing industry. The move, detailed in a report by The Washington Post, stems from suspicions that these pieces were penned under fabricated bylines, potentially part of a coordinated scheme to infiltrate reputable outlets with bogus narratives. The essays, which covered a range of personal anecdotes from career setbacks to life lessons, were removed after internal reviews flagged inconsistencies in authorship and content quality.

Investigators and media watchdogs have linked these retractions to a broader pattern of deception, including the notorious case of “Margaux Blanchard,” a fictitious writer whose AI-generated articles appeared in outlets like Wired and Business Insider. According to The Guardian, at least six publications retracted Blanchard’s work last month, highlighting how generative AI tools can be weaponized to produce convincing but ultimately fraudulent content. This incident underscores the vulnerabilities in editorial processes, where freelance submissions often bypass rigorous vetting due to resource constraints.

The Web of Deception Unraveled

Delving deeper, the connections between these suspect bylines suggest more than isolated fraud. The Washington Post uncovered financial ties between Blanchard and another pseudonymous contributor, pointing to a possible network peddling these stories for profit or influence. Business Insider’s spokesperson confirmed the removals in a statement, emphasizing that the essays failed to meet their standards for originality and veracity, though they stopped short of confirming AI involvement in every case.

Industry insiders note that this scandal arrives amid a surge in AI-assisted writing, with tools like ChatGPT enabling rapid content creation. A separate analysis by The Daily Beast revealed that at least 34 of the yanked pieces bore hallmarks of fabrication, such as generic phrasing and implausible personal details that didn’t align with real-world experiences. Editors at Business Insider, owned by Axel Springer, are now reevaluating their contributor guidelines, potentially implementing AI detection software and enhanced background checks.

Implications for Media Trust

The fallout extends beyond Business Insider, raising alarms about the erosion of trust in online journalism. Publications like MSN, which republished the controversial story, have amplified the discussion, prompting calls for industry-wide standards on AI use. Experts argue that without robust safeguards, such schemes could proliferate, undermining the credibility that readers expect from established brands.

For freelancers and aspiring writers, this episode serves as a cautionary tale. Legitimate contributors may face heightened scrutiny, while platforms experiment with blockchain-based verification or human-AI hybrid editing models. As one media executive told Talking Biz News, the real challenge lies in balancing innovation with integrity, ensuring that technology enhances rather than erodes the human element in storytelling.

Looking Ahead: Reforms and Challenges

Reforms are already underway, with some outlets mandating disclosure of AI assistance in submissions. Yet, the sophistication of these deceptions—evident in the Blanchard saga—suggests that detection alone may not suffice. Broader collaboration among publishers, as advocated in reports from The Washington Post, could lead to shared databases of suspect bylines, fortifying defenses against future infiltrations.

Ultimately, this controversy highlights the precarious balance media companies must strike in an era of abundant, low-cost content. As AI evolves, so too must the gatekeepers, lest the line between fact and fabrication blur irreparably. Business Insider’s purge, while a setback, may catalyze the accountability needed to preserve journalistic standards in the digital age.



AI’s Real Danger Is It Doesn’t Care If We Live or Die, Researcher Says

AI researcher Eliezer Yudkowsky doesn’t lose sleep over whether AI models sound “woke” or “reactionary.”

Yudkowsky, the founder of the Machine Intelligence Research Institute, sees the real threat as what happens when engineers create a system that’s vastly more powerful than humans and completely indifferent to our survival.

“If you have something that is very, very powerful and indifferent to you, it tends to wipe you out on purpose or as a side effect,” he said in an episode of The New York Times podcast “Hard Fork” released last Saturday.

Yudkowsky, coauthor of the new book If Anyone Builds It, Everyone Dies, has spent two decades warning that superintelligence poses an existential risk to humanity.

His central claim is that humanity doesn’t have the technology to align such systems with human values.

He described grim scenarios in which a superintelligence might deliberately eliminate humanity to prevent rivals from building competing systems or wipe us out as collateral damage while pursuing its goals.

Yudkowsky pointed to physical limits like Earth’s ability to radiate heat. If AI-driven fusion plants and computing centers expanded unchecked, “the humans get cooked in a very literal sense,” he said.

He dismissed debates over whether chatbots sound as though they are “woke” or have certain political affiliations, calling them distractions: “There’s a core difference between getting things to talk to you a certain way and getting them to act a certain way once they are smarter than you.”

Yudkowsky also brushed off the idea of training advanced systems to behave like mothers, a theory suggested by Geoffrey Hinton, who is often called the “godfather of AI.” He argued that such schemes are unrealistic at best and wouldn’t make the technology safer.

“We just don’t have the technology to make it be nice,” he said, adding that even if someone devised a “clever scheme” to make a superintelligence love or protect us, hitting “that narrow target will not work on the first try” — and if it fails, “everybody will be dead and we won’t get to try again.”

Critics argue that Yudkowsky’s perspective is overly gloomy, but he pointed to cases of chatbots encouraging users toward self-harm, saying that’s evidence of a system-wide design flaw.

“If a particular AI model ever talks anybody into going insane or committing suicide, all the copies of that model are the same AI,” he said.

Other leaders are sounding alarms, too

Yudkowsky is not the only AI researcher or tech leader to warn that advanced systems could one day annihilate humanity.

In February, Elon Musk told Joe Rogan that he sees “only a 20% chance of annihilation” from AI, a figure he framed as optimistic.

In April, Hinton said in a CBS interview that there was a “10 to 20% chance” that AI could seize control.

A March 2024 report commissioned by the US State Department warned that the rise of artificial general intelligence could bring catastrophic risks, up to and including human extinction, pointing to scenarios ranging from bioweapons and cyberattacks to swarms of autonomous agents.

In June 2024, AI safety researcher Roman Yampolskiy estimated a 99.9% chance of extinction within the next century, arguing that no AI model has ever been fully secure.

Across Silicon Valley, some researchers and entrepreneurs have responded by reshaping their lives — stockpiling food, building bunkers, or spending down retirement savings — in preparation for what they see as a looming AI apocalypse.





Canadian AI company Cohere opens Paris hub to expand EMEA operations – eeNews Europe


OpenAI Foresees Millions of AI Agents Running on the Cloud

OpenAI is betting the future of software engineering on AI agents.

On the “OpenAI Podcast,” which aired on Monday, cofounder and president Greg Brockman and Codex engineering lead Thibault Sottiaux outlined a vision of vast networks of autonomous AI agents supervised by humans but capable of working continuously in the cloud as full-fledged collaborators.

“We have strong conviction that the way that this is headed is large populations of agents somewhere in the cloud that we as humanity, as people, teams, organizations supervise and steer in order to produce great economical value,” Sottiaux said.

“So if we’re going a couple of years from now, this is what it’s going to look like,” Sottiaux added. “It’s millions of agents working in our and companies’ data centers in order to do useful work.”

OpenAI launched GPT-5 Codex on Monday. Unlike earlier iterations, OpenAI said that GPT-5 Codex can run for hours at a time on complex software projects, such as massive code refactorings, while integrating directly with developers’ workflows in cloud environments.

OpenAI CPO Kevin Weil said on tech entrepreneur Azeem Azhar’s podcast “Exponential View” that internal tools like Codex-based code review systems have increased efficiency for OpenAI’s engineers.

This doesn’t mean human coders will be rendered obsolete. Despite successful examples of “vibe coding,” engineers and computer science professors previously told Business Insider that it is obvious when a person using an AI agent doesn’t know how to code.

Brockman said that oversight will still be critical as AI agents take on more ambitious roles. OpenAI has been strategizing since 2017 on how humans or even less sophisticated AIs can supervise more powerful AIs, he said, in order to maintain oversight and “be in the driver’s seat.”

“Figuring out this entire system and then making it multi-agent and steerable by individuals, teams, organizations, and aligning that with the whole intent of organizations, this is where it’s headed for me,” said Sottiaux. “It’s a bit nebulous, but it’s also very exciting.”




