AI Insights
The president blamed AI and embraced doing so. Is it becoming the new ‘fake news’?

Artificial intelligence, apparently, is the new “fake news.”
Blaming AI is an increasingly popular strategy for politicians, among others, seeking to dodge responsibility for something embarrassing. AI isn’t a person, after all. It can’t leak or file suit. And it does make mistakes, a credibility problem that makes it hard to separate fact from fiction in the age of mis- and disinformation.
And when truth is hard to discern, the untruthful benefit, analysts say. The phenomenon is widely known as “the liar’s dividend.”
On Tuesday, President Donald Trump endorsed the practice. Asked about viral footage showing someone tossing something out an upper-story White House window, the president replied, “No, that’s probably AI” — after his press team had indicated to reporters that the video was real.
But Trump, known for insisting the truth is what he says it is, declared himself all in on the AI-blaming phenomenon.
“If something happens that’s really bad,” he told reporters, “maybe I’ll have to just blame AI.”
He’s not alone.
On the same day in Caracas, Venezuelan Communications Minister Freddy Ñáñez questioned the veracity of a Trump administration video said to show a U.S. strike on a vessel in the Caribbean that targeted Venezuela’s Tren de Aragua gang and killed 11 people. A video of the strike posted to Truth Social shows a long, multi-engine speedboat at sea when a bright flash of light bursts over it. The boat is then briefly seen covered in flames.
“Based on the video provided, it is very likely that it was created using Artificial Intelligence,” Ñáñez said on his Telegram account, describing “almost cartoonish animation.”
Blaming AI can at times be a compliment. (“He’s like an AI-generated player,” tennis player Alexander Bublik said on ESPN of his U.S. Open opponent Jannik Sinner’s talent.) But when used by the powerful, the practice, experts say, can be dangerous.
Digital forensics expert Hany Farid has warned for years about the growing power of AI “deepfake” images, voices and video to aid fraud or political disinformation campaigns. But there was always a deeper problem, he says.
“I’ve always contended that the larger issue is that when you enter this world where anything can be fake, then nothing has to be real,” said Farid, a professor at the University of California, Berkeley. “You get to deny any reality because all you have to say is, ‘It’s a deepfake.’”
That wasn’t so a decade or two ago, he noted. Trump issued a rare apology (“if anyone was offended”) in 2016 for his comments about touching women without their consent on the notorious “Access Hollywood” tape. His opponent, Democrat Hillary Clinton, said she was wrong to call some of his supporters “a basket of deplorables.”
Toby Walsh, chief scientist and professor of AI at the University of New South Wales in Sydney, said blaming AI leads to problems not just in the digital world but the real world as well.
“It leads to a dark future where we no longer hold politicians (or anyone else) accountable,” Walsh said in an email. “It used to be that if you were caught on tape saying something, you had to own it. This is no longer the case.”
Danielle K. Citron of the Boston University School of Law and Robert Chesney of the University of Texas foresaw the issue in research published in 2019. In it, they describe what they called “the liar’s dividend.”
“If the public loses faith in what they hear and see and truth becomes a matter of opinion, then power flows to those whose opinions are most prominent—empowering authorities along the way,” they wrote in the California Law Review. “A skeptical public will be primed to doubt the authenticity of real audio and video evidence.”
Polling suggests many Americans are wary about AI. About half of U.S. adults said the increased use of AI in daily life made them feel “more concerned than excited,” according to a Pew Research Center poll from August 2024. Pew’s polling indicates that people have become more concerned about the increased use of AI in recent years.
Most U.S. adults appear to distrust AI-generated information when they know that’s the source, according to a Quinnipiac poll from April. About three-quarters said they could only trust the information generated by AI “some of the time” or “hardly ever.” In that poll, about 6 in 10 U.S. adults said they were “very concerned” about political leaders using AI to distribute fake or misleading information.
They have reason to be concerned, and Trump has played a sizable role in muddying trust and truth.
Trump’s history of misinformation, and even outright lies to suit his narrative, predates AI. He is famous for wielding “fake news,” a buzz term now widely used to sow skepticism about media reports. Leslie Stahl of CBS’ “60 Minutes” has said that Trump told her off camera in 2016 that he tries to “discredit” journalists so that when they report negative stories, they won’t be believed.
Trump’s claim on Tuesday that AI was behind the White House window video wasn’t his first attempt to blame AI. In 2023, he insisted that the anti-Trump Lincoln Project used AI in a video to make him “look bad.”
In the spot, titled “Feeble,” a female narrator taunts Trump: “Hey Donald … you’re weak. You seem unsteady. You need help getting around.” She questions his “manhood,” accompanied by an image of two blue pills. The video continues with footage of Trump stumbling over words.
“The perverts and losers at the failed and once-disbanded Lincoln Project, and others, are using A.I. (Artificial Intelligence) in their Fake television commercials in order to make me look as bad and pathetic as Crooked Joe Biden,” Trump posted on Truth Social.
The Lincoln Project told The Associated Press at the time that AI was not used in the spot.
___
Associated Press writers Ali Swenson in New York, Matt O’Brien in Providence, Rhode Island, Linley Sanders in Washington and Jorge Rueda in Caracas, Venezuela, contributed to this report.
AI Insights
OpenAI says spending to rise to $115 billion through 2029: The Information

OpenAI Inc. told investors it projects its spending through 2029 may rise to $115 billion, about $80 billion more than previously expected, The Information reported, without detailing how or when shareholders were informed.
OpenAI is developing its own data center server chips and facilities to power its technology, in an effort to rein in cloud server rental expenses, according to the report.
The company predicted it could spend more than $8 billion this year, roughly $1.5 billion more than an earlier projection, The Information said.
Computing costs are another factor behind the increased need for capital; the company expects to spend more than $150 billion on computing from 2025 through 2030.
The cost to develop AI models is also higher than previously expected, The Information said.
AI Insights
Microsoft Says Azure Service Affected by Damaged Red Sea Cables

Microsoft Corp. said on Saturday that clients of its Azure cloud platform may experience increased latency after multiple international cables in the Red Sea were cut.
AI Insights
Geoffrey Hinton says AI will cause massive unemployment and send profits soaring

Pioneering computer scientist Geoffrey Hinton, whose work has earned him a Nobel Prize and the moniker “godfather of AI,” said artificial intelligence will spark a surge in unemployment and profits.
In a wide-ranging interview with the Financial Times, the former Google scientist cleared the air about why he left the tech giant, raised alarms on potential threats from AI, and revealed how he uses the technology. But he also predicted who the winners and losers will be.
“What’s actually going to happen is rich people are going to use AI to replace workers,” Hinton said. “It’s going to create massive unemployment and a huge rise in profits. It will make a few people much richer and most people poorer. That’s not AI’s fault, that is the capitalist system.”
That echoes comments he gave to Fortune last month, when he said AI companies are more concerned with short-term profits than the long-term consequences of the technology.
For now, layoffs haven’t spiked, but evidence is mounting that AI is shrinking opportunities, especially at the entry level where recent college graduates start their careers.
A survey from the New York Fed found that companies using AI are much more likely to retrain their employees than fire them, though layoffs are expected to rise in the coming months.
Hinton said earlier that healthcare is the one industry that will be safe from the potential jobs armageddon.
“If you could make doctors five times as efficient, we could all have five times as much health care for the same price,” he explained on the Diary of a CEO YouTube series in June. “There’s almost no limit to how much health care people can absorb—[patients] always want more health care if there’s no cost to it.”
Still, Hinton believes AI will take over jobs built around mundane tasks, while sparing some jobs that require a high level of skill.
In his interview with the FT, he also dismissed OpenAI CEO Sam Altman’s idea of paying a universal basic income as AI disrupts the economy and reduces demand for workers, saying it “won’t deal with human dignity” and the value people derive from having jobs.
Hinton has long warned about the dangers of AI without guardrails, estimating a 10% to 20% chance of the technology wiping out humans after the development of superintelligence.
In his view, the dangers of AI fall into two categories: the risk the technology itself poses to the future of humanity, and the consequences of AI being manipulated by people with bad intent.
In his FT interview, he warned that AI could help someone build a bioweapon and lamented the Trump administration’s unwillingness to regulate AI more closely, even as China, in his view, takes the threat more seriously. But he also acknowledged AI’s potential upside amid its immense possibilities and uncertainties.
“We don’t know what is going to happen, we have no idea, and people who tell you what is going to happen are just being silly,” Hinton said. “We are at a point in history where something amazing is happening, and it may be amazingly good, and it may be amazingly bad. We can make guesses, but things aren’t going to stay like they are.”
Meanwhile, he told the FT how he uses AI in his own life, saying OpenAI’s ChatGPT is his product of choice. While he mostly uses the chatbot for research, Hinton revealed that a former girlfriend used ChatGPT “to tell me what a rat I was” during their breakup.
“She got the chatbot to explain how awful my behavior was and gave it to me. I didn’t think I had been a rat, so it didn’t make me feel too bad … I met somebody I liked more, you know how it goes,” he quipped.
Hinton also explained why he left Google in 2023. While media reports have said he quit so he could speak more freely about the dangers of AI, the 77-year-old Nobel laureate denied that was the reason.
“I left because I was 75, I could no longer program as well as I used to, and there’s a lot of stuff on Netflix I haven’t had a chance to watch,” he said. “I had worked very hard for 55 years, and I felt it was time to retire … And I thought, since I am leaving anyway, I could talk about the risks.”