Nvidia’s CEO says the US should ‘reduce’ dependency on other countries and onshore technology manufacturing

CNN —

America’s plan to “re-industrialize” technology manufacturing is “exactly the right thing,” said Jensen Huang, CEO of the world’s leading AI chipmaker.

In an interview with CNN’s Fareed Zakaria, Huang, who heads the Santa Clara, California-based Nvidia, said the United States should invest in manufacturing and is currently “missing that entire band in our industries.”

“That passion, the skill, the craft of making things; the ability to make things is valuable for economic growth — it’s valuable for a stable society with people who can create a wonderful life and a wonderful career without having to get a PhD in physics,” Huang said.

The Trump administration has instituted a slew of policies, including sweeping tariffs, in an effort to revive America’s declining manufacturing industries. The push has aimed in part to boost the automotive and energy sectors, along with investment in technology.

“President Trump has made it clear America cannot rely on China to manufacture critical technologies such as semiconductors, chips, smartphones, and laptops,” White House press secretary Karoline Leavitt said in a statement in April after a temporary tariff pause was instituted on smartphones and other electronics.

Huang said that onshoring manufacturing would take the pressure off Taiwan, where the world’s largest contract chipmaker, Taiwan Semiconductor Manufacturing Company (TSMC), is based. Trump announced in March that the chipmaking giant would invest at least $100 billion in US manufacturing.

“Having a rich ecosystem of industries and manufacturing so that we could, on the one hand, make the United States better but also reduce our dependency — sole dependency — on other countries, is a smart move,” Huang said.

The increase in AI investments, which fueled a massive technology boom in recent years, has raised concerns about whether the technology will threaten jobs in the future. A survey released in January by the World Economic Forum showed that 41% of employers plan to downsize their workforce by 2030 because of AI automation.

Nvidia, which briefly reached $4 trillion in market value, has created technology to power data centers that companies like Microsoft, Amazon and Google use to operate their AI models and cloud services.

“Everybody’s jobs will be affected. Some jobs will be lost. Many jobs will be created and what I hope is that the productivity gains that we see in all the industries will lift society,” Huang said.

He explained that every software engineer and chip designer at Nvidia uses AI, and he encourages it “to the point of mandating it.”

Artificial intelligence tools, especially generative response platforms like Elon Musk’s Grok and OpenAI’s ChatGPT, have faced their fair share of controversies recently.

Just last week, after Musk’s xAI tweaked the chatbot to allow it to offer users more “politically incorrect” answers, Grok began creating antisemitic hate posts, among other graphic content.

xAI said in a statement posted Saturday that an update to “deprecated code” made Grok susceptible to existing user posts on X, including those expressing extremist views. That code has since been removed, according to the statement.

Commenting on Grok, Huang said the missteps are probably because the chatbot is “younger,” but noted that Musk “has made so much progress with Grok in 18 months.”

“Of course there’s the fine tuning, there’s the guardrailing, and that just takes time of polish,” he said.

There have also been concerns about AI models being prone to “hallucinations,” in which they go off-script and produce inaccurate information. And because the models can be susceptible to manipulation, some experts have expressed concerns about losing control of powerful AI systems.

But Huang believes such talk “borderline scares people” who do not know how AI systems are interconnected to keep the technology safe. He explained that most AI models use other AI tools to provide resources and to fact-check one another. He added that global standards and safety practices should be in place.
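
A minimal sketch of that cross-checking pattern, with entirely hypothetical stubs (no real vendor API is implied): one model drafts an answer, a second independent model reviews it, and the answer is accepted only when the reviewer raises no objection.

```python
# Hypothetical sketch of one model fact-checking another. draft_model and
# review_model are stand-ins for any two independently trained models;
# no real vendor API is implied.

def draft_model(question: str) -> str:
    """Stand-in for the model that produces the initial answer."""
    return f"Draft answer to: {question}"

def review_model(question: str, answer: str) -> bool:
    """Stand-in for a second model that checks the draft for problems."""
    return "unsupported" not in answer  # toy acceptance rule

def answer_with_review(question: str, max_attempts: int = 3) -> str:
    """Accept an answer only after an independent model signs off on it."""
    for _ in range(max_attempts):
        answer = draft_model(question)
        if review_model(question, answer):
            return answer
    return "No answer passed review."

print(answer_with_review("Who founded Nvidia?"))
```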

“It will be overwhelmingly positive. Some harm will be done. The world has to jump on top of it when it happens, but it will be overwhelmingly, incredibly powerful,” he said.

Using AI in healthcare and the real world

Huang said AI models could be used to help cure diseases by teaching the tools the language of proteins and chemicals, including what those molecules mean and how they interact.

The process would be similar to drug discovery, though it is more complicated than teaching AI about human language because of the data required, Huang noted.

“Not only will we accelerate the discovery of drugs, we’ll improve our understanding of disease. But over time, we’re going to have virtual assistant researchers and scientists to help us essentially cure all disease,” he said. “I think that day is coming.”

There will also be real-world, physical uses of AI. Generative models today, like Google’s Veo 3, can generate videos of physical actions. The next step is a robot that can complete similar tasks, like picking up a glass. That would require a vision-language-action (VLA) model, which differs from a large language model (LLM).
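
To make the distinction concrete, here is a minimal sketch, with entirely hypothetical names, of how the two interfaces differ: an LLM maps text to text, while a VLA model maps a camera observation plus an instruction to a motor action, executed in a closed loop.

```python
# Hypothetical sketch contrasting the two interfaces. All names are invented
# for illustration; no real robotics API is implied.

from dataclasses import dataclass

@dataclass
class Action:
    gripper_closed: bool  # whether to close the gripper this step
    dx: float             # end-effector displacement in meters
    dy: float
    dz: float

def llm_step(prompt: str) -> str:
    """LLM interface: text in, text out (stubbed)."""
    return f"Response to: {prompt}"

def vla_step(camera_image: bytes, instruction: str) -> Action:
    """VLA interface: visual observation plus instruction in, action out (stubbed)."""
    return Action(gripper_closed=False, dx=0.0, dy=0.01, dz=-0.02)

def run_task(instruction: str, steps: int = 3) -> None:
    """Closed-loop control: observe, act, then observe again."""
    for t in range(steps):
        frame = b"camera frame"               # stand-in for a real camera capture
        action = vla_step(frame, instruction)
        print(f"step {t}: {action}")          # a real robot would execute this

print(llm_step("Summarize this article."))
run_task("Pick up the glass.")
```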

“The technology exists today. It works today,” Huang said, adding that there will be lots of it in “three to five years.”





Anthropic Taps Higher Education Leaders for Guidance on AI

The artificial intelligence company Anthropic is working with six leaders in higher education to help guide how its AI assistant Claude will be developed for teaching, learning and research. The new Higher Education Advisory Board, announced in August, will provide regular input on educational tools and policies.

According to a news release from Anthropic, the board is tasked with ensuring that AI “strengthens rather than undermines learning and critical thinking skills” through policies and products that support academic integrity and student privacy.

As teachers adapt to AI, ed-tech leaders have called for educators to play an active role in aligning AI to educational standards.


“Teachers and educators and administrators should be in the decision-making seat at every critical decision-making point when AI is being used in education,” Isabella Zachariah, formerly a fellow at the U.S. Department of Education’s Office of Educational Technology, said at the EDUCAUSE conference in October 2024. The Office of Educational Technology has since been shuttered by the Trump administration.

To this end, advisory boards or councils involving educators have emerged in recent years among ed-tech companies and institutions seeking to ground AI deployments in classroom experiences. For example, the K-12 software company Otus formed an AI advisory board earlier this year with teachers, principals, instructional technology specialists and district administrators representing more than 20 school districts across 11 states. Similarly, software company Frontline Education launched an AI advisory council last month to allow district leaders to participate in pilots and influence product design choices.

The Anthropic board taps experts in the education, nonprofit and technology sectors, including two former university presidents and three campus technology leaders. Rick Levin, former president of Yale University and former CEO of Coursera, will serve as board chair. Other members include:

  • David Leebron, former president of Rice University
  • James DeVaney, associate vice provost for academic innovation at the University of Michigan
  • Julie Schell, assistant vice provost of academic technology at the University of Texas at Austin
  • Matthew Rascoff, vice provost for digital education at Stanford University
  • Yolanda Watson Spiva, president of Complete College America

The board contributed to a recent trio of AI fluency courses for colleges and universities, according to the news release. The online courses aim to give students and faculty a foundation in the function, limitations and potential uses of large language models in academic settings.

Schell said she joined the advisory board to explore how technology can address persistent challenges in learning.

“Sometimes we forget how cognitively taxing it is to really learn something deeply and meaningfully,” she said. “Throughout my career, I’ve been excited about the different ways that technology can help accentuate best practices in teaching or pedagogy. My mantra has always been pedagogy first, technology second.”

In her work at UT Austin, Schell has focused on responsible use of AI and engaged with faculty, staff, students and the general public to develop guiding principles. She said she hopes to bring the feedback from the community, as well as education science, to regular meetings. She said she participated in vetting existing Anthropic ed-tech tools, like Claude Learning mode, with this in mind.

In the weeks since the board’s announcement, the group has met once, Schell said, and expects to meet regularly in the future.

“I think it’s important to have informed people who understand teaching and learning advising responsible adoption of AI for teaching and learning,” Schell said. “It might look different than other industries.”

Abby Sourwine is a staff writer for the Center for Digital Education. She has a bachelor’s degree in journalism from the University of Oregon and worked in local news before joining the e.Republic team. She is currently located in San Diego, California.







Duke AI program emphasizes critical thinking for job security

Duke’s AI program is spearheaded by a professor who is not just teaching; he also built his own AI model.

Professor Jon Reifschneider says we’ve already entered a new era of teaching and learning across disciplines.

He says, “We have folks that go into healthcare after they graduate, go into finance, energy, education, etc. We want them to bring with them a set of skills and knowledge in AI, so that they can figure out: ‘How can I go solve problems in my field using AI?'”

He wants his students to become literate in AI, which is a challenge in a field he describes as a moving target. 

“I think for most people, AI is kind of a mysterious black box that can do somewhat magical things, and I think that’s very risky to think that way, because you don’t develop an appreciation of when you should use it and when you shouldn’t use it,” Reifschneider told WRAL News.

Student Harshitha Rasamsetty said she is learning the strengths and shortcomings of AI.

“We always look at the biases and privacy concerns and always consider the user,” she said.

The students in Duke’s engineering master’s programs come from a wide range of backgrounds, countries and ages. Jared Bailey paused his insurance career in Florida to get a handle on the AI being deployed company-wide.

He was already using AI tools when he wondered, “What if I could crack them open and adjust them myself and make them better?”

John Ernest studied engineering as an undergraduate but sought job security in AI.

“I hear news every day that AI is replacing this job, AI is replacing that job,” he said. “I came to a conclusion that I should be a part of a person building AI, not be a part of a person getting replaced by AI.”

Reifschneider thinks warnings about AI taking jobs are overblown. 

In fact, he wants his students to come away understanding that humans have a quality AI can’t replace: critical thinking.

Reifschneider says AI “still relies on humans to guide it in the right direction, to give it the right prompts, to ask the right questions, to give it the right instructions.”

“If you can’t think, well, AI can’t take you very far,” Bailey said. “It’s a car with no gas.”

Reifschneider told WRAL that he thinks children as young as elementary school students should begin learning how to use AI, when it’s appropriate to do so, and how to use it safely.

WRAL News went inside Wake County schools to see how AI is being used and what safeguards the district has in place to protect students. Watch that story Wednesday on WRAL News.





WA state schools superintendent seeks $10M for AI in classrooms


This article originally appeared on TVW News on Sept. 11, 2025.

Washington’s top K-12 official is asking lawmakers to bankroll a statewide push to bring artificial intelligence tools and training into classrooms in 2026, even as new test data show slow, uneven academic recovery and persistent achievement gaps.

Superintendent of Public Instruction Chris Reykdal told TVW’s Inside Olympia that he will request about $10 million in the upcoming supplemental budget for a statewide pilot program to purchase AI tutoring tools — beginning with math — and fund teacher training. He urged legislators to protect education from cuts, make structural changes to the tax code and act boldly rather than leaving local districts to fend for themselves. “If you’re not willing to make those changes, don’t take it out on kids,” Reykdal said.

The funding push comes as new Smarter Balanced assessment results show gradual improvement but highlight persistent inequities. State test scores have ticked upward, and student progress rates between grades are now mirroring pre-pandemic trends. Still, higher-poverty communities are not improving as quickly as more affluent peers. About 57% of eighth graders met foundational math progress benchmarks — better than most states, Reykdal noted, but still leaving four in 10 students short of university-ready standards by 10th grade.

Reykdal cautioned against reading too much into a single exam, emphasizing that Washington consistently ranks near the top among peer states. He argued that overall college-going rates among public school students show they are more prepared than the test suggests. “Don’t grade the workload — grade the thinking,” he said.

Artificial intelligence, Reykdal said, has moved beyond the margins and into the mainstream of daily teaching and learning: “AI is in the middle of everything, because students are making it in a big way. Teachers are doing it. We’re doing it in our everyday lives.”

OSPI has issued human-centered AI guidance and directed districts to update technology policies, clarifying how AI can be used responsibly and what constitutes academic dishonesty. Reykdal warned against long-term contracts with unproven vendors, but said larger platforms with stronger privacy practices will likely endure. He framed AI as a tool for expanding customized learning and preparing students for the labor market, while acknowledging the need to teach ethical use.

Reykdal pressed lawmakers to think more like executives anticipating global competition rather than waiting for perfect solutions. “If you wait until it’s perfect, it will be a decade from now, and the inequalities will be massive,” he said.

With test scores climbing slowly and AI transforming classrooms, Reykdal said the Legislature’s next steps will be decisive in shaping whether Washington narrows achievement gaps — or lets them widen.



Paul W. Taylor is programming and external media manager at TVW News in Olympia.


