
From ‘woke’ AI to data centers, can Trump’s plan help America win the AI race?



The White House released its action plan to help America win the race for artificial intelligence innovation.

President Donald Trump gave a speech Wednesday and signed three executive orders: one pushing back against “woke” AI, one to fast-track the construction of data centers, and one to promote the export of American AI technology.

Trump called AI “one of the most important technological revolutions in the history of the world.”

And he said America must remain the AI leader.

But the president told the crowd at the AI summit that he wasn’t a fan of the name “artificial intelligence.”

“I don’t like the name ‘artificial’ anything, because it’s not artificial. It’s genius. It’s pure genius,” Trump said. “And its potential to transform every type of human endeavor and domain of human knowledge from medicine to manufacturing to warfare and national defense. Whether we like it or not, we’re suddenly engaged in a fast-paced competition to build and define this groundbreaking technology that will determine so much about the future of civilization itself.”

Daniel Schiff, a policy scientist and the co-director of the Governance and Responsible AI Lab at Purdue University, called the White House AI action plan a mixed bag.

“There are some pillars that are in the right direction but not fully thought through or could even backfire,” he said.

American leadership in AI is certainly good, Schiff said.

And having AI infrastructure, such as data centers, on American soil is generally good.

But he’s concerned that Trump’s desire to cut red tape will cut corners on environmental protections.

“I think we can take time to be careful about where we’re putting data centers, where we’re putting packaging plants, what the energy usage is, what’s the impact on energy costs in the region, and what are impacts on water and agriculture and a bunch of other things,” Schiff said.

Danny Weiss, the chief advocacy officer for Common Sense Media, said Trump appears to be letting the AI industry write its own rules.

Weiss noted the participation of David Sacks, a tech investor who was named as Trump’s AI and cryptocurrency czar.

“At a high level, the action plan and the executive orders are exactly what the AI industry is looking for, which is sort of the full weight and support of the federal government on behalf of whatever the AI industry wants,” Weiss said.

Common Sense Media, an organization that advocates for online protections for children and teens, fought against a provision in Trump’s “One Big Beautiful Bill” that would have stopped states from enforcing AI laws and regulations for a decade, in the name of fostering innovation.

The group was concerned that the ban on state-level AI laws would leave kids vulnerable.

Ultimately, the provision was stripped from the massive bill.

But Trump on Wednesday voiced his opposition to state AI laws.

“You can’t have three or four states holding you up. You can’t have a state with standards that are so high that it’s going to hold you up,” Trump said.

Instead, Trump said AI companies need a single federal standard they can follow.

The White House action plan also said federal funding should be withheld from states “with burdensome AI regulations.”

Weiss said the threat of federal dollars being withheld was vague, so the full effects remain to be seen.

But, he said, “We remain on high alert about potential erosion to the rights of states, and we will continue to be vigilant about that.”

Weiss previously said state laws can range from required disclosures of AI-generated content to laws that try to stop AI from being used to create child sexual abuse material.

Nearly all states have introduced legislation on AI this year, while 28 states adopted or enacted more than 75 new measures, according to the National Conference of State Legislatures.

AI expert Anton Dahbura said there are a lot of emerging AI applications that need to be addressed right away with safety guardrails.

“Until the federal government undergoes a sea change in the way they think about AI and its impact, positive and negative on all people, I’m thankful for the patchwork of state regulations,” said Dahbura, the co-director of the Johns Hopkins Institute for Assured Autonomy.

Dahbura said many people think of AI as only ChatGPT or chatbots.

But AI has been around for decades in some form.

And he said the federal government needs to maintain robust science and technology funding if it really wants the U.S. to win the AI race.

Advances in many fields, not just computer science, move the ball forward for AI.

And university researchers play a critical role, not just Silicon Valley firms, Dahbura said.

How does Trump’s new action plan differ from former President Joe Biden’s approach to AI?

“At the top level, these approaches, they never have as much detail as we would like,” Dahbura said. “And this is no exception. Clearly the Biden administration was more concerned about the risks that go along with the benefits of AI and how to address them. Not just to squash AI, but to say, ‘Hey, you know, let’s look into these things. Let’s figure out solutions.'”

Weiss said the current administration is clearly focused on growth over safety.

“It’s always been our position that you have to have trust in order to have growth,” Weiss said. “And if people don’t trust the product because there’s no safety or guardrails or anything, then that will impede growth.”

One of Trump’s new executive orders is designed to promote “the export of full-stack American AI technology packages to allies and partners worldwide.”

Schiff said Biden pushed for more export controls.

And he said exporting American AI tech comes with a “dual-use problem.”

“There are countries that are allies. There are countries that are adversaries. There are countries that will work with adversaries,” he said.

Without a more refined scope of what can be exported, the administration risks American AI tech ending up in the wrong hands, he said.

Schiff described Trump’s plans as “shots across the bow, high-level aspirations,” not detailed policy.

And he, too, said Trump might be catering to the AI industry.

“This is certainly consistent with what some corners, not even all corners of the big AI companies, have been sort of asking for,” Schiff said.

Dahbura and Schiff said Trump’s order against “woke” AI might ignore the complexities of how bias can plague the technology.

“It’s not just two directions. It’s not the political. There are all kinds of biases,” Dahbura said.

And he said biases can go in unpredictable directions.

AI has inherent risks that can be reduced, though maybe not eliminated; that’s the “nature of the beast,” he said.

And biases are one of those risks.

Dahbura said there are both algorithmic and societal biases.

A dangerous example of a nonpolitical bias could be one in a health care AI system that misdiagnoses a patient, Dahbura said.

“I think that politicizing the technology at this point is a big mistake,” he said.

Schiff said the White House is reasonable to seek AI systems that are objective.

But he said the administration is only calling out certain types of perceived biases.

Some biases show up as obvious mistakes, but other possible forms of bias are more subjective.

And Weiss said the administration’s focus on eliminating “woke” AI systems could leave in place AI systems that are discriminatory.

“It should be a concern to people, kids, families, consumers in general,” Weiss said. “People who are looking to rent apartments, people who are looking to make purchases, people who are applying for jobs, AI is going to be playing a role in all of those things. There can be all kinds of bias built into that.”




AI will reshape internet, create jobs in West Virginia says High Technology Foundation's Estep – WV News


Nvidia says GAIN AI Act would restrict competition, likens it to AI Diffusion Rule



Nvidia said on Friday the GAIN AI Act would restrict global competition for advanced chips, with effects on U.S. leadership and the economy similar to those of the AI Diffusion Rule, which put limits on the computing power countries could have.

Short for Guaranteeing Access and Innovation for National Artificial Intelligence Act, the GAIN AI Act was introduced as part of the National Defense Authorization Act and stipulates that AI chipmakers prioritize domestic orders for advanced processors before supplying them to foreign customers.

“We never deprive American customers in order to serve the rest of the world. In trying to solve a problem that does not exist, the proposed bill would restrict competition worldwide in any industry that uses mainstream computing chips,” an Nvidia spokesperson said.

If passed into law, the bill would enact new trade restrictions requiring exporters to obtain licenses and approval for shipments of silicon exceeding certain performance caps.

“It should be the policy of the United States and the Department of Commerce to deny licenses for the export of the most powerful AI chips, including such chips with total processing power of 4,800 or above and to restrict the export of advanced artificial intelligence chips to foreign entities so long as United States entities are waiting and unable to acquire those same chips,” the legislation reads.

The rules mirror some conditions under former U.S. President Joe Biden’s AI Diffusion Rule, which allocated certain levels of computing power to allies and other countries.

The AI Diffusion Rule and AI GAIN Act are attempts by Washington to prioritize American needs, ensuring domestic firms gain access to advanced chips while limiting China’s ability to obtain high-end tech amid fears that the country would use AI capabilities to supercharge its military.

Last month, President Donald Trump made an unprecedented deal with Nvidia to give the government a cut of its sales in exchange for resuming exports of banned AI chips to China.




Apple sued by authors over use of books in AI training



Technology giant Apple was accused by authors in a lawsuit on Friday of illegally using their copyrighted books to help train its artificial intelligence systems, part of an expanding legal fight over protections for intellectual property in the AI era.

The proposed class action, filed in federal court in Northern California, said Apple copied protected works without consent and without credit or compensation.

“Apple has not attempted to pay these authors for their contributions to this potentially lucrative venture,” according to the lawsuit, filed by authors Grady Hendrix and Jennifer Roberson.

Apple and lawyers for the plaintiffs did not immediately respond to requests for comment on Friday.

The lawsuit is the latest in a wave of cases from authors, news outlets and others accusing major technology companies of violating legal protections for their works.

Artificial intelligence startup Anthropic on Friday disclosed in a court filing in California that it agreed to pay $1.5 billion to settle a class action from a group of authors who accused the company of using their books to train its AI chatbot Claude without permission.

Anthropic did not admit any liability in the accord, which lawyers for the plaintiffs called the largest publicly reported copyright recovery in history.

In June, Microsoft was hit with a lawsuit by a group of authors who claimed the company used their books without permission to train its Megatron artificial intelligence model. Meta Platforms and Microsoft-backed OpenAI also have faced claims over the alleged misuse of copyrighted material in AI training.

The lawsuit against Apple accused the company of using a known body of pirated books to train its “OpenELM” large language models.

Hendrix, who lives in New York, and Roberson, who lives in Arizona, said their works were included in the pirated dataset, according to the lawsuit.


