How a once-tiny research lab helped Nvidia become a $4 trillion company

When Bill Dally joined Nvidia’s research lab in 2009, it employed only about a dozen people and was focused on ray tracing, a rendering technique used in computer graphics.

That once-small research lab now employs more than 400 people, who have helped transform Nvidia from a nineties video game GPU startup into a $4 trillion company fueling the artificial intelligence boom.

Now, the company’s research lab has its sights set on developing the tech needed to power robotics and AI. And some of that lab work is already showing up in products: on Monday, the company unveiled a new set of world AI models, libraries, and other infrastructure for robotics developers.

Dally, now Nvidia’s chief scientist, started consulting for Nvidia in 2003 while he was working at Stanford. When he was ready to step down as chair of Stanford’s computer science department a few years later, he planned to take a sabbatical. Nvidia had a different idea.

Bill Dally. Image Credits: Nvidia

David Kirk, who was running the research lab at the time, and Nvidia CEO Jensen Huang thought a more permanent position at the research lab was a better idea. Dally told TechCrunch the pair put on a “full-court press” to get him to join and eventually convinced him.

“It wound up being kind of a perfect fit for my interests and my talents,” Dally said. “I think everybody’s always searching for the place in life where they can make the biggest contribution to the world. And I think for me, it’s definitely Nvidia.”

When Dally took over the lab in 2009, expansion was the first order of business. Researchers immediately started working on areas outside of ray tracing, including circuit design and VLSI, or very large-scale integration, the process of combining millions of transistors on a single chip.

The research lab hasn’t stopped expanding since.

“We try to figure out what will make the most positive difference for the company because we’re constantly seeing exciting new areas, but some of them, they do great work, but we have trouble saying if [we’ll be] wildly successful at this,” Dally said.

For a while, that meant building better GPUs for artificial intelligence. Nvidia was early to the AI boom, tinkering with the idea of AI GPUs as far back as 2010, more than a decade before the current frenzy.

“We said this is amazing, this is gonna completely change the world,” Dally said. “We have to start doubling down on this and Jensen believed that when I told him that. We started specializing our GPUs for it and developing lots of software to support it, engaging with the researchers all around the world who were doing it, long before it was clearly relevant.”

Physical AI focus

Now, as Nvidia holds a commanding lead in the AI GPU market, the tech company has started to seek out new areas of demand beyond AI data centers. That search has led Nvidia to physical AI and robotics.

“I think eventually robots are going to be a huge player in the world and we want to basically be making the brains of all the robots,” Dally said. “To do that we need to start developing the key technologies.”

That’s where Sanja Fidler, the vice president of AI research at Nvidia, comes in. Fidler joined Nvidia’s research lab in 2018. At the time, she was already working on simulation models for robots with a team of students at MIT. When she told Huang about what they were working on at a researchers’ reception, he was interested.

“I could not resist joining,” Fidler told TechCrunch in an interview. “It’s just such a great topic fit and at the same time was also such a great culture fit. Jensen told me, come work with me, not with us, not for us.”

She joined Nvidia and got to work building a research lab in Toronto focused on creating simulations for physical AI, work that feeds into Omniverse, Nvidia’s simulation platform.

Sanja Fidler. Image Credits: Nvidia

The first challenge to building these simulated worlds was finding the necessary 3D data, Fidler said. This included finding the proper volume of potential images to use and building the technology needed to turn these images into 3D renditions the simulators could use.

“We invested in this technology called differentiable rendering, which essentially makes rendering amenable to AI,” Fidler said. “Rendering [goes] from 3D to image or video. And we want it to go the other way.”
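
The intuition in that quote can be made concrete with a toy example. The sketch below is our own illustration, not Nvidia’s GANverse3D or Neural Reconstruction code: it renders a soft-edged disk from three scene parameters, then recovers those parameters from pixels alone by gradient descent, which works only because the renderer is differentiable.

```python
import torch

# Toy "renderer": draw a soft-edged disk on a 64x64 canvas from three
# scene parameters (center x, center y, radius). The sigmoid edge keeps
# the output differentiable with respect to those parameters.
H = W = 64
ys, xs = torch.meshgrid(
    torch.linspace(0, 1, H), torch.linspace(0, 1, W), indexing="ij"
)

def render(cx, cy, radius):
    dist = torch.sqrt((xs - cx) ** 2 + (ys - cy) ** 2 + 1e-8)
    return torch.sigmoid((radius - dist) * 25.0)

# Pretend we only observe this image, not the scene that produced it.
target = render(torch.tensor(0.7), torch.tensor(0.3), torch.tensor(0.2))

# Inverse rendering: start from a wrong guess and let gradients flow
# from pixel error back through the renderer to the scene parameters.
params = torch.tensor([0.4, 0.6, 0.1], requires_grad=True)
opt = torch.optim.Adam([params], lr=0.01)
for _ in range(800):
    opt.zero_grad()
    loss = ((render(*params) - target) ** 2).mean()
    loss.backward()
    opt.step()

print(params.detach())  # moves toward (0.7, 0.3, 0.2): scene recovered from pixels
```

Real systems swap the disk for meshes, materials, and cameras, but the trick of pushing gradients backward through the renderer is the same.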

World models

The Omniverse team released GANverse3D, the first version of its model that turns images into 3D models, in 2021. Then it got to work on the same process for video. Fidler said the team used videos from robots and self-driving cars to create 3D models and simulations through its Neural Reconstruction Engine, which the company first announced in 2022.

She added that these technologies became the backbone of the company’s Cosmos family of world AI models, announced at CES in January.

Now, the lab is focused on making these models faster. When you play a video game or simulation, you want the tech to respond in real time, Fidler said; for robots, the team is working to make reaction times even faster.

“The robot doesn’t need to watch the world in the same time, in the same way as the world works,” Fidler said. “It can watch it like 100x faster. So if we can make this model significantly faster than they are today, they’re going to be tremendously useful for robotic or physical AI applications.”

The company continues to make progress on this goal. At the SIGGRAPH computer graphics conference on Monday, Nvidia announced a fleet of new world AI models designed for creating synthetic data to train robots, along with new libraries and infrastructure software aimed at robotics developers.

Despite the progress — and the current hype about robots, especially humanoids — the Nvidia research team remains realistic.

Both Dally and Fidler said the industry is still at least a few years off from having a humanoid in your home, with Fidler comparing it to the hype and timeline regarding autonomous vehicles.

“We’re making huge progress and I think AI has really been the enabler here,” Dally said. “Starting with visual AI for the robot perception, and then generative AI, that’s being hugely valuable for task and motion planning and manipulation. As we solve each of these individual little problems and as the amount of data we have to train our networks grows, these robots are going to grow.”

OpenAI Projects $115 Billion Cash Burn by 2029

Published

on


OpenAI has sharply raised its projected cash burn through 2029 to $115 billion, according to The Information. This marks an $80 billion increase from previous estimates, as the company ramps up spending to fuel the AI behind its ChatGPT chatbot.

The company, which has become one of the world’s biggest renters of cloud servers, projects it will burn more than $8 billion this year, about $1.5 billion higher than its earlier forecast. The surge in spending comes as OpenAI seeks to maintain its lead in the rapidly growing artificial intelligence market.

To control these soaring costs, OpenAI plans to develop its own data center server chips and facilities to power its technology.

The company is partnering with U.S. semiconductor giant Broadcom to produce its first AI chip, which will be used internally rather than made available to customers, as reported by The Information.

In addition to this initiative, OpenAI has expanded its partnership with Oracle, committing to 4.5 gigawatts of data center capacity to support its growing operations.

This is part of OpenAI’s larger plan, the Stargate initiative, which includes a $500 billion investment and is also supported by Japan’s SoftBank Group. Google Cloud has also joined the group of suppliers supporting OpenAI’s infrastructure.

OpenAI’s projected cash burn will more than double in 2026, reaching over $17 billion. It will continue to rise, with estimates of $35 billion in 2027 and $45 billion in 2028, according to The Information.

PromptLocker scared ESET, but it was an experiment

The PromptLocker malware, thought to be the world’s first ransomware created using artificial intelligence, turned out not to be a real attack at all but a research project at New York University.

On August 26, ESET announced that it had detected the first sample of ransomware with integrated artificial intelligence, a program it called PromptLocker. However, that turned out not to be the case: researchers from the Tandon School of Engineering at New York University were responsible for creating the code.

The university explained that PromptLocker is actually part of an experiment called Ransomware 3.0, conducted by a team from the Tandon School of Engineering. A representative of the school told the publication that a sample of the experimental code had been uploaded to the VirusTotal malware-analysis platform, where ESET specialists discovered it and mistook it for a real threat.

According to ESET, the program used Lua scripts generated from strictly defined instructions. These scripts allowed the malware to scan the file system, analyze its contents, steal selected data, and perform encryption. At the same time, the sample implemented no destructive capabilities, a logical limitation given that it was a controlled experiment.

Nevertheless, the malicious code did function. New York University confirmed that its AI-based simulation system was able to go through all four classic stages of a ransomware attack: mapping the system, identifying valuable files, stealing or encrypting data, and generating a ransom note. Moreover, it could do this across various types of systems, from personal computers and corporate servers to industrial controllers.

Should you be concerned? Yes, but with an important caveat: there is a big difference between an academic proof-of-concept demonstration and a real attack carried out by malicious actors. Still, such research can serve as a blueprint for cybercriminals, since it shows not only how such an attack works but also what it actually costs to implement.

Deutsche Bank on ‘the summer AI turned ugly’: ‘more sober’ than the dotcom bubble, with some troubling data-center math

Deutsche Bank analysts have been watching Amazon Prime, it seems. Specifically, the “breakout” show of the summer, “The Summer I Turned Pretty.” In the AI sphere, analysts Adrian Cox and Stefan Abrudan wrote, it was the summer AI “turned ugly,” with several emerging themes that will set the course for the final quarter of the year. Paramount among them: The rising fear over whether AI has driven Big Tech stocks into the kind of frothy territory that precedes a sharp drop.

The AI news cycle of the summer captured themes including the challenge of starting a career, the importance of technology in the China/U.S. trade war, and mounting anxiety about the impact of the technology. But in terms of finance and investing, Deutsche Bank sees markets “on edge” and hoping for a soft landing amid bubble fears. In part, it blames tech CEOs for egging on the market with overpromises that have inflated hopes and dreams. It also sees a major impact from the venture capital space, which keeps boosting startups’ valuations, and from the lawyers busy filing lawsuits for all kinds of AI players. It’s ugly out there. But the market is actually “more sober” in many ways than it was in the late 1990s, the German bank argues.

Still, Wall Street is not Main Street, and Deutsche Bank notes troubling math about the data centers sprouting up on the outskirts of your town. Specifically, the bank flags a back-of-the-envelope analysis from hedge fund Praetorian Capital that suggests hyperscalers’ massive data center investments could be setting up the market for negative returns, echoing past cycles of “capital destruction.”

AI hype and market volatility

AI has captured the market’s imagination, with Cox and Abrudan noting, “it’s clear there is a lot of hype.” Web searches for AI are 10 times as high as they ever were for crypto, the bank said, citing Google Trends data, while it also finds that S&P 500 companies mentioned “AI” over 3,300 times in their earnings calls this past quarter.

Stock valuations overall have soared alongside the “Magnificent Seven” tech firms, which collectively comprise a third of the S&P 500’s market cap. (The most magnificent: Nvidia, now the world’s most valuable company at a market cap exceeding $4 trillion.) Yet Deutsche Bank points out that today’s top tech players have healthier balance sheets and more resilient business models than the high flyers of the dotcom era.

By most ratios, the bank said, valuations “still look more sober than those for hot stocks at the height of the dot-com bubble,” when the Nasdaq more than tripled in less than 18 months to March 2000, then lost 75% of its value by late 2002. By price-to-earnings ratio, Alphabet and Meta are in the mid-20x range, while Amazon and Microsoft trade in the mid-30x range. By comparison, Cisco surpassed 200x during the dotcom bubble, and even Microsoft reached 80x. Nvidia is “only” 50x, Deutsche Bank noted.

Those data centers, though

Despite the relative restraint in share prices, AI’s real risk may be lurking away from its stock-market valuations, in the economics of its infrastructure. Deutsche Bank cites a blog post by Praetorian Capital “that has been doing the rounds.” The post in “Kuppy’s Korner,” named for the fund’s CEO Harris “Kuppy” Kupperman, estimates that hyperscalers’ total data-center spending for 2025 could hit $400 billion, and the bank notes that is roughly the size of the GDP of Malaysia or Egypt. The problem, according to the hedge fund, is that the data centers will depreciate by roughly $40 billion per year, while they currently generate no more than $20 billion of annual revenue. How is that supposed to work?

“Now, remember, revenue today is running at $15 to $20 billion,” the blog post says, explaining that revenue needs to grow roughly tenfold just to cover the depreciation. Even assuming future margins rise to 25%, the blog post estimates that the sector would require a stunning $160 billion in annual revenue from the AI powered by those data centers just to break even on depreciation—and nearly $480 billion to deliver a modest 20% return on invested capital. For context, even giants like Netflix and Microsoft Office 365 at their peaks brought in only a fraction of that figure. “You’d need $480 billion of AI revenue to hit your target return … $480 billion is a LOT of revenue for guys like me who don’t even pay a monthly fee today for the product.” The implication is that going from $20 billion to $480 billion could take a long time, if it happens at all, and that before the big AI platforms reach those levels, their earnings, and presumably their shares, could take a hit.
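
The arithmetic behind those figures is easy to check. Here is a minimal back-of-the-envelope script using only the numbers quoted above; the 10-year straight-line depreciation schedule is our assumption, implied by $400 billion of spend depreciating at roughly $40 billion a year.

```python
# Back-of-the-envelope check of the Praetorian Capital figures cited above
# (our arithmetic, not the fund's actual model).
capex = 400e9                 # estimated 2025 hyperscaler data-center spend
depreciation = capex / 10     # assumed 10-year schedule -> ~$40B per year
margin = 0.25                 # assumed future operating margin

# Revenue at which margin * revenue just covers annual depreciation:
breakeven = depreciation / margin
print(f"break-even revenue:   ${breakeven / 1e9:.0f}B")   # -> $160B

# Revenue needed to also earn a 20% return on the invested $400B:
target_return = 0.20 * capex  # $80B of annual profit
required = (depreciation + target_return) / margin
print(f"revenue for 20% ROIC: ${required / 1e9:.0f}B")    # -> $480B
```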

Deutsche Bank itself isn’t as pessimistic. The bank notes that the data-center buildout is producing a greatly reduced cost for each use of an AI model, as startups are reaching “meaningful scale in cloud consumption.” Also, consumer AI such as ChatGPT and Gemini is growing fast, with OpenAI saying in August that ChatGPT had over 700 million weekly users, plus 5 million paying business users, up from 3 million three months earlier. The cost to query an AI model (subsidized by the venture capital sector, to be sure) has fallen by around 99.7% in the two years since the launch of ChatGPT and is still headed downward.

Echoes of prior bubbles

Praetorian Capital draws two historical parallels to the current situation: the dotcom era’s fiber buildout, which led to the bankruptcy of Global Crossing, and the more recent capital bust of shale oil. In each case, the underlying technology is real and transformative—but overzealous spending with little regard for returns could leave investors holding the bag if progress stalls.

The “arms race” mentality now gripping the hyperscalers’ massive capex buildout mirrors the capital intensity of those past crises, and as Praetorian notes, “even the MAG7 will not be immune” if shareholder patience runs out. Per Kuppy’s Korner, “the megacap tech names are forced to lever up to keep buying chips, after having outrun their own cash flows; or they give up on the arms race, writing off the past few years of capex … Like many things in finance, it’s all pretty obvious where this will end up, it’s the timing that’s the hard part.”

This cycle, Deutsche Bank argues, is being sustained by robust earnings and more conservative valuations than the dotcom era, but “periodic corrections are welcome, releasing some steam from the system and guarding against complacency.” If revenue growth fails to keep up with depreciation and replacement needs, investors may force a harsh reckoning—one characterized not by spectacular innovation but by a slow realization of negative returns.
