Business
Why is the media paying millions to Trump? – podcast

Paramount Global, the parent company of CBS News, settled a lawsuit filed against it by Donald Trump for $16m last week. It came after Disney and Meta settled lawsuits with the president in similar ways. Jonathan Freedland speaks to the Guardian US columnist Margaret Sullivan about why these companies are caving to Trump’s demands, and whether critics are right to be worried about what this means for the future of a free press.
Archive: CBS News, PBS, NBC News, WHAS11, CNN, Fox 5 New York
Business
CrowdStrike and Salesforce Partner to Secure the Future of AI-Powered Business

Integration of Falcon Shield with Salesforce Security Center and Charlotte AI with Agentforce delivers enhanced protection, visibility, and faster response for mission-critical AI agents, applications, and workflows
AUSTIN, Texas, September 16, 2025–(BUSINESS WIRE)–Fal.Con 2025, Las Vegas – CrowdStrike (NASDAQ: CRWD) and Salesforce, the world’s #1 AI CRM, today announced a new strategic partnership to enhance the security of AI agents and applications built on Agentforce and the Salesforce Platform. Through integrations between CrowdStrike Falcon® Shield and Salesforce Security Center, Salesforce admins and security professionals will gain enhanced visibility, compliance support, and protection for mission-critical workflows – simplifying operations and uniting business and security teams on a shared foundation of trust in the agentic era.
The partnership also enables customers to access CrowdStrike’s agentic security analyst, Charlotte AI, through Agentforce for Security and use it to work directly alongside teammates in Slack, flagging potential threats and recommending actions in a conversational manner as any other employee would.
As agents join the workforce, security teams must understand what they are doing, trace them back to their human creators, and prevent them from becoming overprivileged or compromised. CrowdStrike and Salesforce are meeting this challenge by delivering the visibility and control needed to secure the future of AI-powered business.
“Adversaries are already targeting AI agents and applications with identity-based attacks. Together with Salesforce, we’re extending the power of the Falcon platform to protect mission-critical workflows and secure the next generation of AI-powered business,” said Daniel Bernard, chief business officer at CrowdStrike. “By integrating Falcon Shield into Salesforce Security Center and bringing Charlotte AI into Agentforce, business and security teams gain a unified view of risk and response – protecting today’s operations while enabling tomorrow’s AI-driven enterprise.”
“A key to unlocking the full potential of agentic AI lies in the ability to secure it,” said Brian Landsman, CEO of AppExchange and Global Partnerships at Salesforce. “Our partnership with CrowdStrike ensures that our customers can build their agentic enterprises on Salesforce while maintaining the highest standards of security and compliance.”
Through the integration of Falcon Shield, which provides visibility and automated response to threats targeting SaaS applications, with Salesforce Security Center, which provides one comprehensive view of permissions and controls across a company’s Salesforce environment, customers gain enhanced visibility, compliance support, and protection for mission-critical workflows.
Business
AI’s Real Danger Is It Doesn’t Care If We Live or Die, Researcher Says

AI researcher Eliezer Yudkowsky doesn’t lose sleep over whether AI models sound “woke” or “reactionary.”
Yudkowsky, the founder of the Machine Intelligence Research Institute, sees the real threat as what happens when engineers create a system that’s vastly more powerful than humans and completely indifferent to our survival.
“If you have something that is very, very powerful and indifferent to you, it tends to wipe you out on purpose or as a side effect,” he said in an episode of The New York Times podcast “Hard Fork” released last Saturday.
Yudkowsky, coauthor of the new book If Anyone Builds It, Everyone Dies, has spent two decades warning that superintelligence poses an existential risk to humanity.
His central claim is that humanity doesn’t have the technology to align such systems with human values.
He described grim scenarios in which a superintelligence might deliberately eliminate humanity to prevent rivals from building competing systems or wipe us out as collateral damage while pursuing its goals.
Yudkowsky pointed to physical limits like Earth’s ability to radiate heat. If AI-driven fusion plants and computing centers expanded unchecked, “the humans get cooked in a very literal sense,” he said.
He dismissed debates over whether chatbots sound as though they are “woke” or have certain political affiliations, calling them distractions: “There’s a core difference between getting things to talk to you a certain way and getting them to act a certain way once they are smarter than you.”
Yudkowsky also brushed off the idea of training advanced systems to behave like mothers — a theory suggested by Geoffrey Hinton, often called the “godfather of AI” — arguing that such schemes are unrealistic at best and would not make the technology safer.
“We just don’t have the technology to make it be nice,” he said, adding that even if someone devised a “clever scheme” to make a superintelligence love or protect us, hitting “that narrow target will not work on the first try” — and if it fails, “everybody will be dead and we won’t get to try again.”
Critics argue that Yudkowsky’s perspective is overly gloomy, but he pointed to cases of chatbots encouraging users toward self-harm, saying that’s evidence of a system-wide design flaw.
“If a particular AI model ever talks anybody into going insane or committing suicide, all the copies of that model are the same AI,” he said.
Other leaders are sounding alarms, too
Yudkowsky is not the only AI researcher or tech leader to warn that advanced systems could one day annihilate humanity.
In February, Elon Musk told Joe Rogan that he sees “only a 20% chance of annihilation” from AI — a figure he framed as optimistic.
In April, Hinton said in a CBS interview that there was a “10 to 20% chance” that AI could seize control.
A March 2024 report commissioned by the US State Department warned that the rise of artificial general intelligence could bring catastrophic risks up to and including human extinction, pointing to scenarios ranging from bioweapons and cyberattacks to swarms of autonomous agents.
In June 2024, AI safety researcher Roman Yampolskiy estimated a 99.9% chance of extinction within the next century, arguing that no AI model has ever been fully secure.
Across Silicon Valley, some researchers and entrepreneurs have responded by reshaping their lives — stockpiling food, building bunkers, or spending down retirement savings — in preparation for what they see as a looming AI apocalypse.
Business
Canadian AI company Cohere opens Paris hub to expand EMEA operations – eeNews Europe