
Donald Trump blames AI for video showing items thrown from White House. Is it the new ‘fake news’?

Artificial intelligence, apparently, is the new “fake news.”

Blaming AI is an increasingly popular strategy for politicians, among others, seeking to dodge responsibility for something embarrassing. AI isn’t a person, after all. It can’t leak or file suit. And it does make mistakes, a credibility problem that makes it all the harder to separate fact from fiction in the age of mis- and disinformation.

And when truth is hard to discern, the untruthful benefit, analysts say. The phenomenon is widely known as “the liar’s dividend”.

On Tuesday, President Donald Trump endorsed the practice. Asked about viral footage showing someone tossing something out an upper-story White House window, the president replied: “No, that’s probably AI” — after his press team had indicated to reporters that the video was real.

But Trump, known for insisting the truth is what he says it is, declared himself all in on the AI-blaming phenomenon.

“If something happens that’s really bad,” he told reporters, “maybe I’ll have to just blame AI”.

He’s not alone.

AI is getting blamed — sometimes fairly, sometimes not

On the same day in Caracas, Venezuelan Communications Minister Freddy Ñáñez questioned the veracity of a Trump administration video that the administration said showed a US strike on a vessel in the Caribbean, targeting Venezuela’s Tren de Aragua gang and killing 11 people.

A video of the strike posted to Truth Social shows a long, multi-engine speedboat at sea when a bright flash of light bursts over it. The boat is then briefly seen covered in flames.

“Based on the video provided, it is very likely that it was created using Artificial Intelligence,” Ñáñez said on his Telegram account, describing “almost cartoonish animation”.

Blaming AI can at times be a compliment. (“He’s like an AI-generated player,” tennis player Alexander Bublik said on ESPN of his US Open opponent Jannik Sinner’s talent.) But when used by the powerful, the practice, experts say, can be dangerous.

Digital forensics expert Hany Farid has warned for years about the growing capability of AI “deepfake” images, voices and video to aid fraud and political disinformation campaigns. But he sees a deeper problem.

“I’ve always contended that the larger issue is that when you enter this world where anything can be fake, then nothing has to be real,” said Farid, a professor at the University of California, Berkeley. “You get to deny any reality because all you have to say is, ‘It’s a deepfake’”.

That wasn’t so a decade or two ago, he noted. Trump issued a rare apology (“if anyone was offended”) in 2016 for his comments about touching women without their consent on the notorious “Access Hollywood” tape. His opponent, Democrat Hillary Clinton, said she was wrong to call some of his supporters “a basket of deplorables”.

Toby Walsh, chief scientist and professor of AI at the University of New South Wales in Sydney, said blaming AI leads to problems not just in the digital world but in the real world as well.

“It leads to a dark future where we no longer hold politicians (or anyone else) accountable,” Walsh said in an email. “It used to be that if you were caught on tape saying something, you had to own it. This is no longer the case”.

Contemplating the ‘liar’s dividend’

Danielle K. Citron of the Boston University School of Law and Robert Chesney of the University of Texas foresaw the issue in research published in 2019. In it, they describe what they call “the liar’s dividend”.

“If the public loses faith in what they hear and see and truth becomes a matter of opinion, then power flows to those whose opinions are most prominent—empowering authorities along the way,” they wrote in the California Law Review.

“A sceptical public will be primed to doubt the authenticity of real audio and video evidence”.

Polling suggests many Americans are wary of AI. About half of US adults said the increased use of AI in daily life made them feel “more concerned than excited,” according to a Pew Research Center poll from August 2024. Pew’s polling indicates that people have become more concerned about the increased use of AI in recent years.

They have reason, and Trump has played a sizable role in muddying trust and truth.

Trump’s history of misinformation and outright lies to suit his narrative predates AI. He is famous for wielding “fake news,” a term now widely used to cast scepticism on media reports. Leslie Stahl of CBS’ “60 Minutes” has said that Trump told her off camera in 2016 that he tries to “discredit” journalists so that when they report negative stories, they won’t be believed.

Trump’s claim on Tuesday that AI was behind the White House window video wasn’t his first attempt to blame AI. In 2023, he insisted that the anti-Trump Lincoln Project used AI in a video to make him “look bad”.

In the spot titled “Feeble,” a female narrator taunts Trump. “Hey Donald … you’re weak. You seem unsteady. You need help getting around.” She questions his “manhood,” accompanied by an image of two blue pills. The video continues with footage of Trump stumbling over words.

“The perverts and losers at the failed and once-disbanded Lincoln Project, and others, are using AI (Artificial Intelligence) in their Fake television commercials in order to make me look as bad and pathetic as Crooked Joe Biden,” Trump posted on Truth Social.

The Lincoln Project told The Associated Press at the time that AI was not used in the spot.




Artificial Intelligence event Oct 21st in Suffern – Rockland News

Suffern, NY – Edie Haughney, Ameriprise Financial Advisor, will be hosting a complimentary dinner and discussion on the current market and artificial intelligence on Tuesday, October 21st at the Crowne Plaza in Suffern.

The program will begin at 6pm, and advance registration is required by October 10th.




Melania Trump hosts AI meeting with Google CEO, tech leaders

WASHINGTON — First lady Melania Trump declared “the robots are here” as she urged a deliberate but careful embrace of artificial intelligence during a convening Thursday of a White House task force on AI in education. 


“I predict AI will represent the single largest growth category in our nation during this administration, and I won’t be surprised if AI becomes known as the greatest engine of progress in the history of the United States of America,” Trump said. “But, as leaders and parents, we must manage AI’s growth responsibly.”

The first lady was joined on stage for the roundtable-style event by members of the task force, which includes several of President Donald Trump’s Cabinet secretaries, as well as major names in the technology sector, such as Google CEO Sundar Pichai. Dozens of other Big Tech and private sector leaders, including OpenAI CEO Sam Altman, were present in the East Room for the event, where they were lauded by White House Office of Science and Technology Policy Director Michael Kratsios for their “generous pledges.”

The meeting was meant to shine a light on what the White House says is more than 135 commitments from leaders in the field to support education in artificial intelligence across the country.   

Arvind Krishna, the CEO of IBM, for instance, noted in his remarks to the room that the technology company he leads is signing on to train 2 million American workers in “cutting-edge AI skills” over the next three years through a newly launched program. 

Pichai, meanwhile, announced that of the $1 billion the company recently committed for education and job training programs, $150 million would specifically go toward AI-focused grants. 

“This is all in the service of helping the next generation to solve problems, fuel innovation and build an incredible future,” he said. “These are all goals we all share. We are incredibly thankful for the partnership and the leadership from the first lady, the president and the administration and for showing us the way.” 

It was the second meeting of the task force since the president created it via an executive order in April meant to boost AI literacy and proficiency in America by better incorporating it into education. The event was the first since Melania Trump announced a nationwide competition challenging students in grades K-12 to use artificial intelligence to address a community issue, which will conclude with an event at the White House with the winners.

“We must ensure America’s talent, our workforce, is prepared to sustain AI’s progress, and the Presidential AI Challenge is our first major step to galvanize America’s parents, educators, and students with this mission,” the first lady said at the task force meeting. 

She added that AI will “serve as the underpinning of every business sector in our nation.”

Several of the president’s Cabinet secretaries who sit on the task force, including Education Secretary Linda McMahon, Agriculture Secretary Brooke Rollins and Energy Secretary Chris Wright, gave updates on how they are utilizing the technology in their work and seeking to ensure Americans are educated in how to use it in the fields related to their departments. 

McMahon, for instance, pointed to a recent letter to those receiving grants from the Education Department letting them know that AI tools and technologies are an “allowable use” of federal funds in a bid, she said, to empower schools to explore how to best integrate the technology in teaching. Wright used his remarks to warn that the U.S. “will not win at AI if we don’t massively grow our electricity production.” 

The president’s AI and crypto czar David Sacks, meanwhile, highlighted Trump’s other executive orders related to AI, including one seeking to make it easier to build new data centers and energy infrastructure.

“Some people think that AI is going to take all of our jobs,” Sacks said. “I really don’t think that’s going to happen.” 

Instead, he argued AI would unleash a “boom like we’ve never seen.” 

The first lady, who has rarely sought the spotlight and has participated in select events since her husband’s return to the Oval Office, has touched on what she sees as both the positives and negatives of the rapidly developing technology. The White House has highlighted how she used AI to narrate the audio version of her 2024 memoir “Melania.” At the same time, the first lady actively supported and pushed for a bill, ultimately passed by Congress and signed by the president, imposing harsher penalties for the spread of non-consensual sexual images, including those created using artificial intelligence.




Can artificial intelligence start a nuclear war?

Stanford University simulations have shown that current artificial intelligence models are prone to escalating conflicts to the point of using nuclear weapons. The study raises serious questions about the risks of automating military decisions and the role of AI in future wars.

This was reported by Politico.

The results of war games conducted by researcher Jacqueline Schneider from Stanford indicate that artificial intelligence could become a dangerous factor in modern wars if it gains influence over military decision-making.

According to the scientist, during simulations, the latest AI models consistently chose aggressive escalation scenarios, including the use of nuclear weapons. Schneider compared the behavior of the algorithms to the approach of Cold War general Curtis LeMay, who was known for his willingness to use nuclear force on minimal pretext.

“Artificial intelligence models understand perfectly well how to escalate a conflict, but are actually unable to offer options for its de-escalation,” the researcher explained.

In her view, this is because most of the military literature used to train AI describes scenarios of escalation rather than cases in which war was avoided.

The Pentagon insists that AI will not be given the authority to decide on launching nuclear missiles, and it stresses that “human control” will be preserved. At the same time, modern warfare is increasingly dependent on automated systems. Already today, projects like Project Maven rely entirely on machine-generated intelligence data, and in the future algorithms may even advise on countermeasures.

There are already examples of automation in nuclear weapons systems. Russia has the Perimeter system, capable of delivering a strike without human intervention, and China is investing enormous resources in military artificial intelligence.

Journalists also recall a 1979 incident in which Zbigniew Brzezinski, advisor to US President Jimmy Carter, received a message that 200 Soviet missiles had allegedly been launched. Only moments before a decision to retaliate was made did it emerge that the alert was a system error. The question is whether an artificial intelligence that works “reflexively” would have waited for more detailed information or would have pressed the “red button” automatically.

The debate over AI’s role in the military sphere is thus becoming ever more pressing, because not only the outcome of a battle but the fate of all humanity may be at stake.
