Targeting the cloud market with TPUs that are faster and more power-efficient than GPUs; an AI development software ecosystem is also in place. Cracks in Nvidia's dominant position
Google has challenged Nvidia's near-monopoly in the artificial intelligence (AI) semiconductor market. Google, which until now purchased Nvidia's GPUs in large quantities, installed them in Google Cloud data centers, and leased them to customers, has begun supplying its own chip, the Tensor Processing Unit (TPU), to external data centers.
According to the IT media outlet The Information on the 3rd (local time), Google recently contacted small and mid-sized cloud companies and proposed that they adopt TPUs. It has signed a contract with London-based Fluidstack to install TPUs at a new data center in New York. This is believed to be the first time Google has deployed TPUs in external data centers rather than in its own facilities.
The market has two interpretations of this move. One is that Google's own data-center expansion is not keeping up with demand, so it is turning to external facilities; the other is that it is a strategy to expand its TPU customer base through external cloud companies. In the latter case, Google would emerge as a direct competitor to Nvidia in chip supply.
The TPU, which Google introduced in 2016, is the AI chip that powered AlphaGo, which won its Go match against Lee Se-dol. As the name "Tensor" (multidimensional data) suggests, it is a purpose-built AI chip designed to handle the heavy mathematical workload of deep learning (matrix operations) better than GPUs. Whereas GPUs were originally developed for game graphics processing and later repurposed for AI training, the TPU was designed from the start exclusively for AI computation. As a result, it offers lower power consumption and higher speed on certain tasks. Until now, Google has not sold TPUs directly, providing them only through its cloud, for strategic reasons such as optimizing internal services like Search and YouTube and reducing its own GPU purchase costs.
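The "matrix operations" mentioned above are easy to illustrate. The toy Python sketch below (an illustrative example, not Google's code) shows the input-times-weights multiplication that dominates deep-learning workloads and that TPUs are built to accelerate:

```python
import numpy as np

# A toy neural-network "layer": output = inputs x weights.
# Training and running deep-learning models consists largely of
# exactly this kind of matrix multiplication, repeated billions
# of times, which is the workload TPUs specialize in.
inputs = np.array([[1.0, 2.0],
                   [3.0, 4.0]])        # batch of 2 samples, 2 features each
weights = np.array([[0.5, -1.0, 2.0],
                    [1.5,  0.0, 1.0]])  # maps 2 features to 3 outputs

outputs = inputs @ weights
print(outputs)
# [[ 3.5 -1.   4. ]
#  [ 7.5 -3.  10. ]]
```

A GPU or TPU performs many such multiply-accumulate operations in parallel; the TPU's systolic-array design dedicates nearly all of its silicon to this one pattern.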

This year, however, that approach is changing. In April, Google opened "Pathways," the model-training pipeline solution it used to train its own large language models (LLMs), to external developers. Pathways serves as a kind of blueprint for model training, letting researchers develop LLMs such as Gemini without redesigning the pipeline from scratch. By making available the TPU (hardware), JAX (the software framework for building and running models), and now Pathways, Google is signaling that the entire AI development process can be completed within its ecosystem. In other words, it has declared that it can compete with Nvidia.
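To make the JAX part of that stack concrete, the short sketch below (a minimal example of typical JAX usage, assumed for illustration and not taken from Google's materials) defines a tiny neural-network layer and JIT-compiles it; the same code runs unchanged on CPU, GPU, or TPU, which is the portability Google's ecosystem argument rests on.

```python
import jax
import jax.numpy as jnp

# A minimal dense layer: matrix multiply followed by a ReLU activation.
# jax.jit compiles this function for whatever accelerator JAX detects
# (CPU by default; a TPU or GPU if one is available), with no code changes.
@jax.jit
def dense(weights, x):
    return jax.nn.relu(x @ weights)

key = jax.random.PRNGKey(0)
weights = jax.random.normal(key, (4, 3))   # maps 4 inputs to 3 outputs
x = jnp.ones((2, 4))                       # batch of 2 samples
y = dense(weights, x)
print(y.shape)  # (2, 3)
```

This hardware-agnostic style is what lets a researcher prototype on a laptop CPU and then train the same model on Google Cloud TPUs.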
The number of TPU users is also growing. According to a report released by U.S. investment firm D.A. Davidson on the 2nd, developer activity centered on TPUs in Google Cloud has increased by about 96% over the past six months. The report found growing demand not only for the sixth-generation TPU, "Trillium," released in December last year, but also rising interest in "Ironwood," a large-scale inference TPU scheduled for release later this year. "Interest and demand are increasing among research institutes and companies," D.A. Davidson wrote. "Google's TPU is now emerging as a better alternative to Nvidia chips than offerings from Chinese companies."
![Nvidia's GPU 'Blackwell'. [Nvidia]](https://aistoriz.com/wp-content/uploads/2025/09/news-p.v1.20250904.2f732899ce7e466ab74846800d415192_P1.png)
Nvidia accounts for 80-90% of the AI training GPU market, an effectively dominant position. Looking at the data-center segment alone, it overwhelms competitors such as AMD (4%) with a 92% share as of March this year.
Nvidia's GPUs run on "CUDA," dedicated software (SW) developed by Nvidia, and as CUDA has become the de facto industry standard, many companies and developers are locked into Nvidia products. Large cloud operators such as AWS, Microsoft Azure, and Google Cloud have likewise purchased Nvidia GPUs in bulk and provided them to customers.
As external supply of TPUs begins in earnest and major companies seek to reduce their dependence on Nvidia, the dynamics of the data-center semiconductor market could change.
■ What is a TPU? An AI semiconductor developed by Google for machine learning. It is a type of processor that performs arithmetic operations, as distinguished from memory semiconductors, which store data.