AI Research

Nvidia’s Jensen Huang says AI could lead to job losses ‘if the world runs out of ideas’

CNN —

The chief executive of the world’s leading chipmaker warned that while artificial intelligence will significantly boost workplace productivity, it could lead to job loss if industries lack innovation.

“If the world runs out of ideas, then productivity gains translates to job loss,” said Nvidia CEO Jensen Huang in an interview with CNN’s Fareed Zakaria when asked about comments made by fellow tech leader Dario Amodei, who suggested AI will cause mass employment disruptions.

Amodei, the head of Anthropic, warned last month that the technology could cause a dramatic spike in unemployment in the very near future. He told Axios that AI could eliminate half of entry-level, white-collar jobs and spike unemployment to as much as 20% in the next five years.

Huang believes that as long as companies come up with fresh ideas, there’s room for productivity and employment to thrive. But without new ambitions, “productivity drives down,” he said, potentially resulting in fewer jobs.

“The fundamental thing is this, do we have more ideas left in society? And if we do, if we’re more productive, we’ll be able to grow,” he said.

The increase in AI investments, which fueled a massive technology boom in recent years, has raised concerns about whether the technology will threaten jobs in the future. Roughly 41% of chief executives have said AI will reduce the number of workers at thousands of companies over the next five years, according to a 2024 survey from staffing firm Adecco Group. A survey released in January from the World Economic Forum showed 41% of employers plan to downsize their workforce by 2030 because of AI automation.

“Everybody’s jobs will be affected. Some jobs will be lost. Many jobs will be created and what I hope is that the productivity gains that we see in all the industries will lift society,” Huang said.

Nvidia, which briefly reached $4 trillion in market value, is among the companies leading the AI revolution. The Santa Clara, California-based chipmaker’s technology has been used to power data centers that companies like Microsoft, Amazon and Google use to operate their AI models and cloud services.

Huang defended the development of AI, saying that “over the course of the last 300 years, 100 years, 60 years, even in the era of computers,” both employment and productivity increased. He added that technological advancements can facilitate the realization of “an abundance of ideas” and “ways that we could build a better future.”

Artificial intelligence is also likely to change the way work is done. More than half of large US firms said they plan to automate tasks previously done by employees, such as paying suppliers or doing invoices, according to a 2024 survey by Duke University and the Federal Reserve Banks of Atlanta and Richmond.

Huang said that even his job has changed as a result of the AI revolution, “but I’m still doing my job.”

Some companies also use AI tools, like ChatGPT and chatbots, for creative tasks including drafting job posts, press releases and building marketing campaigns.

“AI is the greatest technology equalizer we’ve ever seen,” said Huang. “It lifts the people who don’t understand technology.”

Fareed Zakaria’s interview with Nvidia CEO Jensen Huang can be seen on “Fareed Zakaria GPS” on Sunday 10 a.m. ET/PT.




AI Research

(Policy Address 2025) HK earmarks HK$3B for AI research and talent recruitment – The Standard (HK)


AI Research

SIFThinker: Spatially-Aware Image Focus for Visual Reasoning


By Zhangquan Chen and 6 other authors

Abstract: Current multimodal large language models (MLLMs) still face significant challenges in complex visual tasks (e.g., spatial understanding, fine-grained perception). Prior methods have tried to incorporate visual reasoning; however, they fail to leverage attention correction with spatial cues to iteratively refine their focus on prompt-relevant regions. In this paper, we introduce SIFThinker, a spatially-aware “think-with-images” framework that mimics human visual perception. Specifically, SIFThinker enables attention correction and image region focusing by interleaving depth-enhanced bounding boxes and natural language. Our contributions are twofold: First, we introduce a reverse-expansion-forward-inference strategy that facilitates the generation of interleaved image-text chains of thought for process-level supervision, which in turn leads to the construction of the SIF-50K dataset. Second, we propose GRPO-SIF, a reinforced training paradigm that integrates depth-informed visual grounding into a unified reasoning pipeline, teaching the model to dynamically correct and focus on prompt-relevant regions. Extensive experiments demonstrate that SIFThinker outperforms state-of-the-art methods in spatial understanding and fine-grained visual perception, while maintaining strong general capabilities, highlighting the effectiveness of our method. Code: this https URL.
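The focus-and-correct loop the abstract describes can be sketched in plain Python. Everything here — the `Region` type, the `propose_focus` and `reason` callables, the crop step — is an illustrative stand-in for the paper's interleaved bounding-box/text chain, not the authors' actual interface.

```python
# Minimal sketch of an iterative "think-with-images" focus loop:
# alternate between one natural-language thought and one bounding-box
# proposal, cropping the view each time to refine attention.
from dataclasses import dataclass

@dataclass
class Region:
    x0: int   # bounding box in pixel coordinates
    y0: int
    x1: int
    y1: int
    depth: float   # mean depth cue for the box (the "depth-enhanced" part)

def crop(image, r: Region):
    """Return the sub-grid of a 2D image covered by the box."""
    return [row[r.x0:r.x1] for row in image[r.y0:r.y1]]

def focus_loop(image, propose_focus, reason, max_steps=3):
    """Iteratively narrow attention to prompt-relevant regions.

    propose_focus(view) -> Region, or None once focus is settled
    reason(view)        -> str, one natural-language thought per step
    """
    view, thoughts = image, []
    for _ in range(max_steps):
        thoughts.append(reason(view))          # text step of the chain
        region = propose_focus(view)           # bounding-box step
        if region is None:
            break
        view = crop(view, region)              # attention correction: zoom in
    return view, thoughts
```

In the paper this loop is learned end to end (via the GRPO-SIF training paradigm) rather than driven by hand-written callables as above.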

Submission history

From: Zhangquan Chen [view email]
[v1]
Fri, 8 Aug 2025 12:26:20 UTC (5,223 KB)
[v2]
Thu, 14 Aug 2025 10:34:22 UTC (5,223 KB)
[v3]
Sun, 24 Aug 2025 13:04:46 UTC (5,223 KB)
[v4]
Tue, 16 Sep 2025 09:40:13 UTC (5,223 KB)




AI Research

[2506.08171] Worst-Case Symbolic Constraints Analysis and Generalisation with Large Language Models


By Daniel Koh and 4 other authors

Abstract:Large language models (LLMs) have demonstrated strong performance on coding tasks such as generation, completion and repair, but their ability to handle complex symbolic reasoning over code still remains underexplored. We introduce the task of worst-case symbolic constraints analysis, which requires inferring the symbolic constraints that characterise worst-case program executions; these constraints can be solved to obtain inputs that expose performance bottlenecks or denial-of-service vulnerabilities in software systems. We show that even state-of-the-art LLMs (e.g., GPT-5) struggle when applied directly on this task. To address this challenge, we propose WARP, an innovative neurosymbolic approach that computes worst-case constraints on smaller concrete input sizes using existing program analysis tools, and then leverages LLMs to generalise these constraints to larger input sizes. Concretely, WARP comprises: (1) an incremental strategy for LLM-based worst-case reasoning, (2) a solver-aligned neurosymbolic framework that integrates reinforcement learning with SMT (Satisfiability Modulo Theories) solving, and (3) a curated dataset of symbolic constraints. Experimental results show that WARP consistently improves performance on worst-case constraint reasoning. Leveraging the curated constraint dataset, we use reinforcement learning to fine-tune a model, WARP-1.0-3B, which significantly outperforms size-matched and even larger baselines. These results demonstrate that incremental constraint reasoning enhances LLMs’ ability to handle symbolic reasoning and highlight the potential for deeper integration between neural learning and formal methods in rigorous program analysis.

Submission history

From: Daniel Koh [view email]
[v1]
Mon, 9 Jun 2025 19:33:30 UTC (1,462 KB)
[v2]
Tue, 16 Sep 2025 10:35:33 UTC (1,871 KB)



Source link

Continue Reading

Trending