Broadcom secured new AI chip orders worth over $10 billion and significantly raised its growth expectations for 2026.

Broadcom (AVGO.US) is consolidating its leading position in the custom AI chip market. With a new order exceeding $10 billion from a prominent client, the company has significantly raised its key long-term growth forecasts, signaling that its already booming AI business will accelerate further.

According to the Wind Trading Desk, on September 4 local time, Hock Tan, Broadcom's President and CEO, announced two major pieces of news on the company's latest earnings call: first, that he will continue to lead the company at least until 2030; and second, that the company has converted a prospective client into its fourth formal customer for custom AI accelerators (XPUs), securing a production order worth over $10 billion.

An article in the Wall Street Journal reported that the client behind this $10 billion order is reportedly OpenAI.

This substantial new order has pushed the company’s total backlog to a record $110 billion. Coupled with the sustained strong demand from three existing major clients, Broadcom has announced that its AI revenue growth rate for the fiscal year 2026 will “significantly improve” and exceed the growth levels seen in fiscal year 2025.

Driven by the strong performance of its AI business and VMware software segment, Broadcom reported a record third-quarter revenue of $16 billion and provided fourth-quarter guidance that exceeded market expectations, anticipating revenue of approximately $17.4 billion. Additionally, the outlook for AI revenue in fiscal year 2026 has significantly improved compared to the previous quarter’s forecast. However, the recovery in demand for non-AI semiconductor business is slow, and it is expected to exhibit a “U-shaped” recovery trend.

The core driver of this upward revision to the outlook is the confirmation of a new customer. Hock Tan revealed on the call that a former prospect has placed a production order with Broadcom, becoming the company's fourth major XPU customer. Broadcom calls its custom AI chips "XPUs," in contrast to the merchant GPUs designed by suppliers such as NVIDIA and AMD.

"We have received over $10 billion in orders for AI racks based on our XPUs," Hock Tan stated. He added that this new "direct and substantial demand," combined with growing orders from the three existing customers, "has indeed changed our outlook for fiscal year 2026."

According to Hock Tan, this $10 billion order is expected to begin shipping in the second half of fiscal year 2026, likely in the third quarter. Beyond the new customer's contribution, Broadcom's share of business with its original three XPU customers is also "gradually" increasing, as those customers shift further toward custom silicon with each product generation on their respective paths to compute self-sufficiency. As a result, Hock Tan expects XPUs' share of AI revenue to keep rising in fiscal 2026.

While discussing the strong business outlook, Hock Tan announced a crucial piece of news for market confidence: he has reached an agreement with the board to continue serving as CEO at least until 2030.

“This is an exciting moment for Broadcom, and I am very passionate about continuing to create value for our shareholders,” Hock Tan said. Amid the company’s efforts to seize the historic opportunities presented by AI, this announcement alleviates external uncertainties regarding potential leadership changes, providing essential stability for the execution of the company’s long-term strategy.

In stark contrast to the booming AI business, Broadcom’s non-AI semiconductor business remains sluggish. In the third fiscal quarter, revenue from this segment was $4 billion, unchanged from the previous quarter, indicating weak demand recovery.

Hock Tan described this recovery as “U-shaped,” rather than the “V-shaped” rebound expected by the market. He noted that while the guidance for the fourth fiscal quarter anticipates low double-digit sequential growth in the non-AI semiconductor business, this is primarily driven by seasonal factors in the wireless and server storage sectors. Broadband remains the only area experiencing “sustained strong growth.”

“I expect the non-AI business to experience a more U-shaped recovery, and perhaps we will only start to see some meaningful recovery by the middle to late part of 2026,” acknowledged Hock Tan.

As the scale of AI clusters exceeds 100,000 nodes, networking has become a bottleneck. Hock Tan emphasized that Broadcom, with its decades of experience in Ethernet, is well positioned to address this challenge. The company recently launched its new generation of switches and routers, Tomahawk 6 and Jericho 4, aimed at supporting cross-data-center "scale-across" hyperscale clusters by reducing network tiers and lowering latency.
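The tier-reduction claim can be sanity-checked with standard folded-Clos (fat-tree) arithmetic. The sketch below uses the textbook non-blocking capacity approximation of 2 × (radix/2)^tiers hosts and an assumed 200 Gbps per port; both are generic networking conventions, not Broadcom figures.

```python
def switch_radix(bandwidth_tbps: float, port_gbps: int) -> int:
    """Number of ports a switch ASIC can expose at a given per-port speed."""
    return int(bandwidth_tbps * 1000 // port_gbps)

def max_hosts(radix: int, tiers: int) -> int:
    """Max hosts in a non-blocking folded Clos: 2 * (radix/2)**tiers."""
    return 2 * (radix // 2) ** tiers

# 51.2 Tbps of switching at 200 Gbps/port -> a 256-port radix.
old = switch_radix(51.2, 200)   # 256 ports
# Doubling to 102.4 Tbps at the same port speed -> 512 ports.
new = switch_radix(102.4, 200)  # 512 ports

cluster = 100_000  # target node count

# Two tiers of 256-port switches top out below 100K nodes, so a third tier is needed...
assert max_hosts(old, 2) < cluster <= max_hosts(old, 3)
# ...while doubling the radix lets just two tiers cover the same cluster.
assert max_hosts(new, 2) >= cluster
```

Under these assumptions, a 100K-node cluster that needed three switch tiers at 51.2 Tbps fits into two tiers at 102.4 Tbps; each removed tier takes a switch hop (and its optics) out of every path, which is where the latency and power savings come from.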

Hock Tan expressed great confidence in open Ethernet standards, believing they are the inevitable choice for addressing AI network challenges. “Ethernet is the way to go,” he stated, “there is no need to create new protocols that you have to get people to accept now.” He believes that the openness of Ethernet and its mature ecosystem provide a significant advantage over proprietary protocols.

Since the acquisition of VMware, Broadcom’s software business integration has continued to make progress. In the third fiscal quarter, the infrastructure software division achieved revenues of $6.8 billion, a year-on-year increase of 17%.

An important milestone is the release of VMware Cloud Foundation (VCF) version 9.0, a fully integrated cloud platform designed to provide enterprises with a true alternative to public cloud. Hock Tan stated that the current focus is on driving successful deployment and operation of the platform for the top 10,000 large clients who have purchased VCF licenses, and subsequently selling advanced services such as security and disaster recovery.

The transcript of Broadcom’s third quarter fiscal year 2025 earnings call is as follows, translated by AI tools:

Event Date: September 4, 2025
Company Name: Broadcom
Event Description: Third Quarter Fiscal Year 2025 Earnings Conference Call
Source: Broadcom

Operator:
Welcome to Broadcom's conference call covering its financial performance for the third quarter of fiscal year 2025. For opening remarks and introductions, I would now like to hand the call over to Ji Yoo, Head of Investor Relations at Broadcom. Please go ahead.

Head of Investor Relations Ji Yoo:
Thank you, Shree. Good afternoon, everyone. Joining me on the call today are Hock Tan, President and CEO; Kirsten Spears, Chief Financial Officer and Chief Accounting Officer; and Charlie Kawwas, President of the Semiconductor Solutions Division.

Broadcom issued a press release and financial tables after the market close, detailing our financial performance for the third quarter of fiscal year 2025. If you did not receive a copy, you can obtain relevant information from the Investor Relations section of the Broadcom website broadcom.com. This conference call is being broadcast live over the internet, and the audio replay of the meeting will be accessible on the Investor Relations section of the Broadcom website for one year.

In prepared remarks, Hock and Kirsten will provide a detailed overview of our performance for the third quarter of fiscal year 2025, guidance for the fourth quarter of fiscal year 2025, and commentary on the business environment. Following the conclusion of our prepared remarks, we will address questions. Please refer to our press release today and the recent filings submitted to the U.S. Securities and Exchange Commission (SEC) for specific risk factors that could cause our actual results to differ materially from the forward-looking statements made during this conference call.

In addition to reporting in accordance with U.S. Generally Accepted Accounting Principles (GAAP), Broadcom also reports certain financial metrics on a non-GAAP basis. The reconciliation between GAAP and non-GAAP metrics is included in the tables attached to today’s press release. The comments made during today’s conference call will primarily reference our non-GAAP financial performance.

I will now turn the call over to Hock.

Hock E. Tan, President and Chief Executive Officer:
Thank you, Ji. And thank you all for joining our meeting today.

In our third quarter of fiscal year 2025, total revenue reached a record $16 billion, representing a year-over-year increase of 22%. The revenue growth was driven by stronger-than-expected AI semiconductor demand and our continued growth at VMware. Adjusted consolidated EBITDA (Earnings Before Interest, Taxes, Depreciation, and Amortization) for the third quarter reached a record $10.7 billion, up 30% year-over-year.

Now, looking beyond the quarterly numbers: bookings are extremely strong on the back of AI demand, and our consolidated backlog has reached a record $110 billion.

In the third quarter, semiconductor revenue reached $9.2 billion, with a year-on-year growth accelerating to 26%. This accelerated growth was driven by $5.2 billion in AI semiconductor revenue, which grew by 63% year-on-year, continuing a strong growth trajectory for the tenth consecutive quarter.

Now, let me give you more detail on our XPU business, which accelerated this quarter and accounted for 65% of our AI revenue. Demand from our three customers for custom AI accelerators continues to grow as they progress toward compute self-sufficiency at their own pace, and we are gradually gaining more share with these customers.

Beyond these three customers, as we mentioned before, we have been collaborating with other prospects on their own AI accelerators. Last quarter, one of these prospects placed a production order with Broadcom, thereby qualifying as an XPU customer. In fact, we have already secured over $10 billion in orders for AI racks based on our XPUs. Reflecting this, we now expect a significantly improved outlook for AI revenue in fiscal year 2026 compared to what we indicated last quarter.

Speaking of AI networking, demand remains strong. As large language models (LLMs) keep evolving in intelligence, compute clusters must keep growing larger, which makes the network critical: networking is computing. Our customers face real challenges as they scale clusters beyond 100,000 compute nodes.

For example, we all know that creating substantial bandwidth for memory sharing among multiple GPUs or XPUs within a single rack poses a daunting scale-up challenge. Today's AI accelerators employ proprietary NVLink, which can only scale up 72 GPUs at a bandwidth of 28.8 terabits per second. In contrast, earlier this year we collaborated with OpenAI to launch a solution based on open Ethernet that allows XPU customers to scale up 512 compute nodes.

Let's discuss cross-rack horizontal scaling (scale-out). Existing architectures based on 51.2 terabits per second of switching capacity require three tiers of network switches. In June, we released Tomahawk 6, a 102.4 terabits per second Ethernet switch that flattens the network to two tiers, thereby reducing latency and significantly decreasing power consumption.

When scaling to clusters that exceed the capacity of a single data center, you need to extend computing across data centers. Over the past two years, we have deployed our Jericho 3 Ethernet routers with hyperscale customers to achieve this. Today, we are launching the next-generation Jericho 4 Ethernet router, which provides 51.2 terabits per second of deep-buffered bandwidth and intelligent congestion control to manage clusters of over 200,000 compute nodes spanning multiple data centers.

We recognize that the greatest challenge in deploying ever-larger generative AI compute clusters will be the network. The Ethernet technologies Broadcom has developed over the past 20 years apply directly to the scale-up, scale-out, and scale-across challenges of generative AI.

Regarding our forecasts, as I mentioned earlier, we continue to make steady progress in growing AI revenue. For the fourth quarter of 2025, we project AI semiconductor revenue to be approximately $6.2 billion, representing a year-on-year increase of 66%.

Now, let's talk about non-AI semiconductors. Demand recovery continues to be slow, with third-quarter revenue of $4 billion, flat quarter-on-quarter. While the broadband business showed strong sequential growth, enterprise networking and server storage declined sequentially. Wireless and industrial remained flat quarter-on-quarter, as we expected.

By contrast, for the fourth quarter we anticipate non-AI semiconductor revenue to grow in the low double digits quarter-on-quarter, driven by seasonality, reaching approximately $4.6 billion. We expect improvements in broadband, server storage, and wireless, partially offset by a sequential decline in enterprise networking.

Now let me discuss our infrastructure software segment. Third-quarter infrastructure software revenue was $6.8 billion, reflecting a year-on-year increase of 17%, surpassing our outlook of $6.7 billion, as bookings for the quarter remained strong. In fact, we booked a total contract value exceeding $8.4 billion in the third quarter.

But here’s the most exciting part for me. After two years of engineering development by over 5,000 developers, we have delivered on our commitment made during the acquisition of VMware. We launched VMware Cloud Foundation version 9.0, a fully integrated cloud platform that enterprise customers can deploy on-premises or migrate to the cloud. It enables enterprises to run any application workload, including AI workloads on virtual machines and modern containers. This provides a true public cloud alternative.

In the fourth quarter, we expect infrastructure software revenue to be approximately $6.7 billion, which represents a year-on-year growth of 15%.

In summary, the continued strength of AI and VMware will drive our consolidated revenue guidance for the fourth quarter to approximately $17.4 billion, reflecting a year-on-year increase of 24%. We expect adjusted EBITDA for the fourth quarter to reach 67% of revenue.

With that, let me hand the call over to Kirsten.

Chief Financial Officer and Chief Accounting Officer Kirsten Spears:
Thank you, Hock. Now let me provide more details about our financial performance for the third quarter.

This quarter, consolidated revenue reached a record high of $16 billion, representing a 22% increase compared to the same period last year. The gross margin for this quarter was 78.4% of revenue, exceeding our initial guidance, driven by an increase in software revenue and an improvement in the semiconductor product mix. Consolidated operating expenses amounted to $2 billion, of which $1.5 billion was R&D expenses.

Operating income for the third quarter hit a record $10.5 billion, a 32% increase year-over-year. On a quarter-over-quarter basis, although the gross margin decreased by 100 basis points due to the revenue mix, the operating profit margin grew by 20 basis points sequentially to 65.5%, supported by operational leverage. The adjusted EBITDA was $10.7 billion, accounting for 67% of revenue, higher than our guidance of 66%. This figure excludes $142 million in depreciation.

Now let’s review the profit and loss statements of our two divisions, starting with semiconductors. Our semiconductor solutions division generated revenue of $9.2 billion, with a year-over-year growth acceleration to 26%, driven by AI. Semiconductor revenue accounted for 57% of total revenue this quarter. The gross margin for our semiconductor solutions division was approximately 67%, a decrease of 30 basis points year-over-year due to product mix. Operating expenses grew by 9% year-over-year to $961 million, due to increased investments in cutting-edge AI semiconductors. The operating profit margin for semiconductors was 57%, an increase of 130 basis points year-over-year, remaining flat sequentially.

Now let’s talk about infrastructure software. Revenue from infrastructure software was $6.8 billion, a 17% increase year-over-year, accounting for 43% of revenue. The gross margin for infrastructure software this quarter was 93%, up from 90% a year ago. This quarter, operating expenses were $1.1 billion, leading to an operating profit margin for infrastructure software of approximately 77%. In comparison, the operating profit margin a year ago was 67%, reflecting the completion of VMware integration.

Next, let’s discuss cash flow. Free cash flow for this quarter was $7 billion, representing 44% of revenue. We spent $142 million on capital expenditures. The days sales outstanding (DSO) for the third quarter was 37 days, compared to 32 days a year ago. Our inventory at the end of the third quarter stood at $2.2 billion, an 8% increase quarter-over-quarter, in anticipation of revenue growth in the next quarter. The inventory holding days (DIO) for the third quarter was 66 days, down from 69 days in the second quarter, as we continued to maintain strict inventory management across the ecosystem.

At the end of the third quarter, we had $10.7 billion in cash and total debt principal of $66.3 billion. The weighted average coupon rate and maturity of our $65.8 billion fixed-rate debt were 3.9% and 6.9 years, respectively. The weighted average rate and maturity of our $5 billion floating-rate debt were 4.7% and 0.2 years, respectively.

Speaking of capital allocation. In the third quarter, we paid shareholders $2.8 billion in cash dividends, based on a quarterly cash dividend of $0.59 per common share. In the fourth quarter, we expect the non-GAAP diluted share count to be approximately 4.97 billion shares, excluding any potential impact from stock repurchases.

Now, looking at the guidance. Our guidance for the fourth quarter is consolidated revenue of $17.4 billion, a 24% year-over-year increase. We anticipate semiconductor revenue of approximately $10.7 billion, a 30% year-over-year increase. Of this, we expect AI semiconductor revenue in the fourth quarter to be $6.2 billion, a 66% year-over-year increase. We expect infrastructure software revenue to be approximately $6.7 billion, a 15% year-over-year increase.
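A quick arithmetic cross-check of these guidance figures (an editorial sketch, assuming the stated growth rates are rounded to the nearest point; the variable names are ours, not Broadcom's): the two segments sum to the consolidated guide, and backing out the stated growth rates recovers a consistent prior-year quarter.

```python
# Q4 FY25 guidance from the call (in $ billions) and stated YoY growth rates.
consolidated, consolidated_g = 17.4, 0.24
semis, semis_g = 10.7, 0.30
software, software_g = 6.7, 0.15

# The two segments sum to the consolidated guide.
assert abs((semis + software) - consolidated) < 1e-9

# Implied prior-year Q4 revenue, derived two ways, agrees to within rounding.
prior_total = consolidated / (1 + consolidated_g)                        # ~14.03
prior_by_segment = semis / (1 + semis_g) + software / (1 + software_g)   # ~14.06
assert abs(prior_total - prior_by_segment) < 0.1
```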

For your modeling convenience, we expect the consolidated gross margin in the fourth quarter to decrease by approximately 70 basis points quarter-over-quarter, primarily reflecting the increase in the proportion of XPU and wireless revenue. As a reminder, the annual consolidated gross margin will be influenced by the revenue mix of infrastructure software and semiconductors, as well as the internal product mix of semiconductors.

We expect adjusted EBITDA for the fourth quarter to be approximately 67% of revenue. We anticipate that the non-GAAP tax rate for the fourth quarter and fiscal year 2025 will remain at 14%.

I will now hand the call back to Hock to share some more exciting news.

Hock E. Tan, President and Chief Executive Officer:
I may not be as excited as Kirsten, but I am excited. I want to share some breaking news before we enter the Q&A session. The board and I have agreed that I will continue to serve as CEO of Broadcom at least until 2030. This is an exciting time for Broadcom, and I am very eager to continue driving value for our shareholders.

Operator, please begin the Q&A session.

Q&A Session

Operator:
Thank you. (Operator instructions.)
Our first question comes from Ross Seymore of Deutsche Bank. Your line is open.

Analyst Ross Seymore:
Hi, everyone. Thank you for letting me ask a question. Hock, thank you for staying on a few more years. I want to ask about the AI business, particularly XPUs. When you say the growth rate will significantly exceed what you indicated last quarter, what has changed? Is it simply that the prospective client became a defined customer, with the $10 billion in orders you mentioned? Or has demand from the existing three customers also strengthened? Any details would be helpful.

Hock E. Tan, President and Chief Executive Officer:
I think it’s both, Ross, but largely it is the addition of that new customer on our list. We are expecting to ship in significant volumes starting in early 2026. So, the increase in demand from the existing three clients (we are making steady progress on that) combined with the addition of the fourth client and its direct and quite substantial demand has indeed changed our outlook for how 2026 will start.

Analyst Ross Seymore:
Thank you.

Hock E. Tan, President and Chief Executive Officer:
Thank you.

Operator:
Please hold for the next question. The next question comes from Harlan Sur of JPMorgan. Your line is open.

Analyst Harlan Sur:
Hi, good afternoon. Congratulations on the strong quarterly performance and robust free cash flow. I know everyone will be asking many questions about AI, Hock. I want to ask about the non-AI semiconductor business. If I look at your guidance for the fourth quarter, it appears that if the midpoint of the fourth quarter guidance is achieved, non-AI business will decline by approximately 7%-8% year-over-year in fiscal year 2025. The good news is that the trend of negative year-over-year growth has been improving throughout this year. In fact, I believe you will achieve year-over-year positive growth in the fourth quarter. You described it as being relatively close to the cycle bottom, with recovery being relatively slow.

However, we have already seen some positive signs, right? Broadband, server storage, enterprise networks, you are still pushing for the DOCSIS 4 upgrade of broadband cables, and next-generation PON upgrades are also ahead in China and the United States. Corporate spending on network upgrades is accelerating. So, from the recent cyclical bottom, how should we think about the magnitude of the cyclical rebound? Given your 30 to 40-week delivery cycle, are you seeing sustained order improvements in non-AI sectors that would point to continued cyclical recovery in the next fiscal year?

Hock E. Tan, President and Chief Executive Officer:
Well, if you look at that non-AI area, I mean, you are right, looking at the year-over-year guidance from the fourth quarter, we are actually up, as you said, slightly above last year’s same period by a few percentage points (1% or 2%). At this point, there’s really nothing to write home about. The biggest issue is that there are both increases and decreases. The end result of all this is that, aside from the seasonal factors we perceive, if you look at it in the short term, we see year-over-year comparisons, but looking at it on a quarter-over-quarter basis, we see some seasonality in areas like wireless, and even now we are starting to see some seasonality in server storage as well. These… so far, these factors seem to offset each other.

The only sustained upward trend we have seen over the past three or four quarters is in broadband. No other area seems to be maintaining a cyclical uptrend so far. As a whole, as you pointed out, Harlan, things haven't gotten worse, but neither have they shown the V-shaped recovery we hope and expect to see in the semiconductor cycle. The only thing that gives us some hope right now is broadband, which is recovering very strongly. But it was also the business hit hardest by the sharp decline in '24 and early '25. So again, we should be cautious about this.

But to give you the best answer, the recovery in non-AI semiconductors is slow, and as I mentioned, the best description of the year-over-year performance in the fourth quarter is low single-digit growth. So I expect non-AI to experience more of a U-shaped recovery, perhaps by mid-’26 or late ’26, we will start to see meaningful recovery. But for now, it remains unclear.

Analyst Harlan Sur:
Yes. Are you starting to see this in the order trends and order book, just because your delivery cycles are around 40 weeks, right?

Hock E. Tan, President and Chief Executive Officer:
We have been misled before, but we do see it now. Bookings are on the rise, up 23% year-on-year. That is not at the level of AI bookings, but 23% is still quite impressive, right?

Operator:
Thank you. Please hold for the next question. The next question comes from Vivek Arya of Bank of America. Your line is open.

Analyst Vivek Arya:
Thank you for answering my question, and I wish you all the best in your next term. My question is about—could you help us quantify the new FY2026 AI guidance? Because I believe during the last conference call you mentioned that FY2026 could see a growth rate of 60%. So what are the updated figures? Is it 60% plus the $10 billion you mentioned? Related to this, do you expect the mix of custom chips and networking products to remain roughly at last year’s levels, or will it lean more towards custom chips? Any quantifiable information regarding the mix of networking and custom chips for FY2026 would be very helpful.

Hock E. Tan, President and Chief Executive Officer:
Okay. Let’s address the first part. If I may, during our last quarterly report, I hinted to you that the growth trend for ’26 would mirror that of ’25, which is a year-on-year growth of 50%-60%. I really only said that. I didn’t— but of course, it manifests as 50%-60%, because that’s what ’25 was. I merely stated that if you want to look at it another way, perhaps more accurately, we are seeing the growth rate accelerating rather than just stabilizing at 50%-60%. We expect and are seeing that the growth rate for 2026 will be higher than what we experienced in 2025.

I know you would like me to give you a number, but you know what, we shouldn’t provide you with a forecast for ’26, but the best way to describe it is that it will be a quite significant improvement.

Analyst Vivek Arya:
What about networking and custom chips?

Hock E. Tan, President and Chief Executive Officer:
Good question. Thank you for the reminder. As we have observed, the main driver of this growth will be XPU. As for—repeating what I mentioned in my speech, the reason is that we continue to gain market share from our initial three customers. They must—they are on a journey, with each new generation of products, they are increasingly turning to XPU. Therefore, we are gaining share from these three customers. We are now benefiting from the addition of a fourth very important customer. I mean a fourth and very important customer. This combination will mean more XPU.

As I mentioned, as we accumulate more experience with these four customers, we will also gain network business with these four customers, but now, the share of network business from other customers outside of these four will shrink and become a smaller share. Therefore, I actually expect that by 2026, the percentage of network business in the total pool will decline.

Analyst Vivek Arya:
Thank you.

Operator:
Please hold for the next question. The next question comes from Stacy Rasgon of Bernstein Research. Your line is now open.

Analyst Stacy Rasgon:
Hi, everyone. Thank you for answering my question. I would like to know if you could help me break down this $110 billion backlog of orders. I hope I didn’t mishear that figure? Could you provide us with an overview of its composition? For example, what is the time frame it covers? Also, how much of this $110 billion is related to AI versus non-AI versus software?

Hock E. Tan, President and Chief Executive Officer:
Well, I think, Stacy, we usually do not break down the backlog—I’m providing a total number to give you a sense of how strong our business is overall, which is primarily driven by AI-related growth—software continues to grow steadily. Non-AI, as I pointed out, has grown in double digits but is not significant compared to the very strong growth in AI. To give you a sense, at least 50% of it is semiconductors.

Analyst Stacy Rasgon:
Okay. So, it can be said that within that portion of semiconductors, AI will far exceed non-AI.

Hock E. Tan, President and Chief Executive Officer:
Correct.

Analyst Stacy Rasgon:
Okay, understood. This is very helpful. Thank you.

Operator:
Please hold for the next question. The next question comes from Ben Reitzes of Melius Research. Your line is open.

Analyst Ben Reitzes:
Hey everyone, thank you very much. Hock, congratulations on being able to guide a revenue growth for AI that exceeds 60% next year. So I want to be a bit greedy and ask you about maybe fiscal year 2027 and the situation with the other three potential customers. Besides these four customers, how is the dialogue progressing with other clients? In the past, you mentioned there were seven, and now we have added the fourth into production. So there are still three. Have you heard any news from the other customers, and what are the trends with the other three, perhaps beyond ’26 into ’27 and beyond? How do you think this momentum will develop? Thank you very much.

Hock E. Tan, President and Chief Executive Officer:
You are indeed quite greedy, and you are definitely thinking well ahead of me. Thank you. To be honest, though, I am reluctant to handicap that, because sometimes our timelines to production can be unexpectedly quick, and there can likewise be delays. So I would prefer not to give you any more information about the prospects, other than to say that they are real prospects, each continuing to be very closely engaged in developing their respective XPUs, and each intending to reach mass production just as our four existing custom customers have today.

Analyst Ben Reitzes:
Yes, you still believe that the million-unit target set for these seven companies remains intact.

Hock E. Tan, President and Chief Executive Officer:
Well, those three—I now say four. That applies only to customers; no comment or position on prospects. But for our three, now four, customers: yes.

Analyst Ben Reitzes:
Okay. Thank you very much. Congratulations.

Operator:
Please hold for the next question. The next question comes from Jim Schneider of Goldman Sachs. Your line is open.

Analyst Jim Schneider:
Good afternoon, thank you for answering my question. Hock, I would like to know if you could give us a bit more information, not necessarily about the prospects remaining in your pipeline, but about how you view the universe of potential customers beyond the seven confirmed customers and prospects. Do you still believe there are additional candidates worth building custom chips for? I know you have been relatively cautious and selective about the number of customers in this field, the volume they can provide, and the opportunities you are interested in. So perhaps you could frame for us the additional potential customers you see beyond those seven. Thank you.

Hock E. Tan, President and Chief Executive Officer:
That is a very good question, let me—let me answer it on a broader basis. Well, as I mentioned before, perhaps to reiterate a bit, we view this market as two large segments. You know, one is those vendors developing their own LLMs, and I think the other part of the market collectively is enterprises. This market is for enterprises running AI workloads, whether on-premises or in any form of GPU or XPU or as a service. Frankly, we are not targeting that market. We are not targeting it because it is a market we find difficult to address, and we are not set up to tackle it. Instead, we focus on this LLM market.

As I have said multiple times, this is a very narrow market, with only a few players driving cutting-edge models on a very accelerated path towards superintelligence, or, in others' words, pursuing happiness, but you understand my point. These players initially have to invest a lot of capital in training, and my view is that training on larger clusters with more powerful accelerators is becoming increasingly necessary. But these companies also have to be accountable to shareholders, or at least be able to generate the cash flow that sustains their growth path, so they are also starting to invest heavily in inference to monetize their models. These are the participants we collaborate with.

These are participants spending substantial amounts of money on large computing capacity, but there are very few of them. As I have pointed out, we have identified seven, four of which are now our clients, while three continue to be potential clients we are engaging with. We are very selective, still cautious I should say, in determining who qualifies. They are building, or already have, a platform, and they are investing heavily in leading LLM models. I think that's about it.

We may yet qualify one more as a potential client. But again, we are very thoughtful and cautious even in conducting that qualification. What is certain at this point is that we have seven. For now, that is basically what we have.

Analyst Jim Schneider:
Thank you.

Operator:
Please hold for the next question. The next question comes from Tom O’Malley of Barclays. Your line is now connected.

Analyst Thomas O’Malley:
Hi, everyone. Thank you for taking my question, and congratulations on the excellent performance. I would like to ask about Jericho 4. NVIDIA has talked about its XGS switch, and now they are discussing scale-across; you are talking about Jericho 4. It sounds like this market is really starting to develop. Could you discuss when you expect significant revenue in that area? And why is it important to start thinking about those types of switches as we shift more towards inference? Thank you, Hock.

Hock E. Tan, President and Chief Executive Officer:
Well, thank you for pointing that out. Yes, scale-across is now a new term, right? Distinct from scale-up (within a rack) and scale-out (across racks, but within a data center). But now when you reach cluster scale (I am not 100% sure where the boundary is, but say above 100,000 GPUs or XPUs), in many cases, due to power constraints, you will not place more than 100,000 such XPUs within one data center footprint. Power may not be readily available, and land may not be convenient either. So what we are seeing is that most of our customers are now creating multiple data center sites relatively close together, within a range of about 100 kilometers. Being able to place homogeneous XPUs or GPUs in multiple locations (three or four) and connect them via networking so that they actually operate as a single cluster, that is the coolest part.

And this technology, because of the distances involved, requires deep buffering and very intelligent congestion control, technology that telecom companies like AT&T and Verizon have used for network routing for years, though here it is applied to even more demanding workloads; the principle is the same. Over the past two years, we have been shipping Jericho 3 to some hyperscale customers to address this cluster scale and the bandwidth expansion needed for AI training. We are now releasing Jericho 4, at 51 terabits per second, to handle more bandwidth, but it uses the same technology we have tested and validated over the past 10 or 20 years; there is nothing new about it. There is no need to create something new for this. It runs on Ethernet, is very mature, and very stable. As I mentioned, we have been selling Jericho 3 to several hyperscale customers over the past two years.

Operator:
Please hold for the next question. The next question comes from Karl Ackerman of BNP Paribas. Your line is open.

Analyst Karl Ackerman:
Yes, thank you. Hock, have you fully transitioned your top 10,000 accounts from vSphere to the full VMware Cloud Foundation (VCF) virtualization stack? I ask because I would like to know how adoption has grown from the 87% of accounts reported last quarter, which is certainly a significant increase compared to the less than 10% of customers who purchased the entire suite before the transaction. As you answer, I also wonder whether you are seeing interest in adopting VCF among the longer tail of enterprise customers. Additionally, as these customers adopt VMware, have you observed tangible cross-selling benefits in your commercial semiconductor, storage, and networking businesses? Thank you.

Hock E. Tan, President and Chief Executive Officer:
Okay. To answer the first part of your question: yes, over 90% have purchased VCF. Now, I like to be cautious in my wording, because we have sold them the licenses and they have purchased them to deploy, but that does not mean they have fully deployed. This leads to the other part of our work, which is to take these 10,000 customers, or the significant portion of them who have already bought into the vision of building private clouds on-premises, and work with them so they can successfully deploy and run VCF on their local infrastructure. That is the hard work we see happening over the next two years.

As we do this, we see VCF expanding within their IT footprint, with private clouds operating in their data centers. This is a key part of it, and we see it continuing. This is the second phase of the VMware story. The first phase was convincing people to transition from perpetual licenses and purchase VCF. The second phase is enabling them to realize the private cloud value they seek locally, in their IT data centers, based on the VCF they have purchased. That is what is happening.

Then—this will continue for quite some time, because on this basis, we will begin to sell advanced services, security, disaster recovery, and even run AI workloads on top of it. All of this is very exciting.

Your second question is whether this will help me sell more hardware. No, well, this is completely independent. In fact, as they virtualize data centers, we consciously accept the reality that we are commoditizing the underlying hardware in data centers, commoditizing servers, commoditizing storage, and even commoditizing networking. That’s fine. By commoditizing in this way, we are actually reducing the investment costs for enterprises in data center hardware.

Now, beyond the top 10,000, have we seen a lot of success? We have seen some. But there are two reasons why we do not expect it to be as successful. One is that the value from what they call TCO (total cost of ownership) will be much smaller; the more important one is that not only does deployment require skills (they can get services and our own help), but the skills required for ongoing operations may not be something they can afford. We will have to wait and see. This is an area we are still learning about, and it will be interesting to see how it unfolds. While VMware has 300,000 customers, we believe the top 10,000 are where deploying private clouds using VCF is very meaningful and can yield significant value.

We are now observing whether the next 20,000 to 30,000 mid-sized companies see it the same way. Stay tuned. I will let you know.

Analyst Karl Ackerman:
Very clear. Thank you.

Operator:
Please hold for the next question. The next question comes from C.J. Muse of Cantor Fitzgerald. Your line is open.

Analyst C.J. Muse:
Yes, good afternoon. Thank you for taking my question. I would like to focus on gross margin. I understand the guidance implies a sequential decline of 70 basis points, particularly due to the sequential decline in software revenue and the increased contributions from wireless and XPUs. However, to reach that 77%-plus, I either need to model the semiconductor gross margin as flat (which I initially thought would be lower) or a software gross margin of 95%, up 200 basis points. Could you help me better understand the moving parts that allow for only a 70 basis point decline?

Chief Financial Officer and Chief Accounting Officer Kirsten Spears:
Yes. I mean, XPUs will increase, and wireless will also grow. As I mentioned on the call, our software revenue will also increase slightly, as we indicated, yes. Wireless is typically our heaviest quarter of the year, right? So you have wireless and XPUs, which usually carry a lower gross margin, and then our software revenue is rising.
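The mix arithmetic behind that exchange can be sketched in a few lines: blended gross margin is just a revenue-weighted average of segment margins, so a quarter tilted toward lower-margin XPU and wireless hardware drags the blend down even while software margin rises. The segment figures below are purely illustrative assumptions, not Broadcom's disclosed splits.

```python
# Blended gross margin as a revenue-weighted mix of segments.
# All revenue and margin figures below are hypothetical, for illustration only.

def blended_gross_margin(segments):
    """segments: list of (revenue, gross_margin) tuples."""
    total_revenue = sum(rev for rev, _ in segments)
    total_profit = sum(rev * gm for rev, gm in segments)
    return total_profit / total_revenue

# Hypothetical quarter: hardware at a lower margin, software above 90%.
q4_mix = [
    (11.0, 0.67),  # semiconductor revenue ($B) and assumed gross margin
    (6.4, 0.93),   # software revenue ($B) and assumed gross margin
]
print(f"{blended_gross_margin(q4_mix):.1%}")  # prints 76.6%
```

Shifting even one billion dollars of the mix from the 93% bucket to the 67% bucket moves the blend by roughly 150 basis points, which is why a richer XPU/wireless quarter shows up directly in the consolidated margin guide.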

Operator:
Please hold for the next question. The next question is from Joseph Moore at Morgan Stanley. Your line is open.

Analyst Joseph Moore:
Okay. Thank you. Regarding the fourth client, I believe you previously mentioned that potential clients 4 and 5 are more like hyperscale enterprises, while 6 and 7 resemble LLM manufacturers themselves. Could you provide us with some information, if possible, to help us categorize them? If it’s inconvenient, that’s fine. Then regarding that $10 billion order, could you provide us with a timeframe? Thank you.

Hock E. Tan, President and Chief Executive Officer:
Okay. Yes, no, ultimately, all seven companies are involved in LLMs, but not all currently have what we refer to as a substantial platform. However, it is conceivable that they will all eventually possess or create a platform. Thus, it is difficult to distinguish between the two. But returning to the delivery of that $10 billion, it will likely occur in the second half of our fiscal year 2026. I would say more precisely, it is likely to be in the third quarter of our fiscal year 2026.

Analyst Joseph Moore:
Okay, and does it start in the third quarter, or over what period does the $10 billion order ship?

Hock E. Tan, President and Chief Executive Officer:
Our delivery will conclude at the end of the third quarter.

Analyst Joseph Moore:
Okay. Okay. Thank you.

Operator:
Please hold for the next question. The next question comes from Joshua Buchalter of TD Cowen. Your line is open.

Analyst Joshua Buchalter:
Hey everyone. Thank you for taking my question, and congratulations on the performance. I would like you to comment on the momentum of your first scale-up Ethernet solution and how it compares to the UALink and PCIe solutions. How significant is it to have the lower-latency Tomahawk Ultra product? Considering your AI networking business, what do you think the scale-up Ethernet opportunity will be in the coming year? Thank you.

Hock E. Tan, President and Chief Executive Officer:
Well, that's a good question. We are thinking about this ourselves. First of all, our Ethernet solutions are quite distinct from any AI accelerators anyone else is making. They are separate; we treat them as independent, even if you are right that the network is the computer. We have always believed that Ethernet is open: anyone should be able to have a choice, and we have separated it from our XPUs. But the fact is that for customers using our XPUs, we have developed and optimized our network switches and the other connectivity components in any cluster to work closely with them.

In fact, all these XPUs have developed interfaces for Ethernet. So to some extent, for our customers using XPUs, we have very openly made Ethernet the preferred network protocol. It may not be our Ethernet switches; they could come from any other company making Ethernet. It just happens that we are the leaders in this space, so we are getting the orders.

But apart from that, particularly with regard to closed systems involving GPUs, we see less of this, except in hyperscale enterprises that are able to very cleanly separate the GPU cluster architecture from the network, especially in scale-out scenarios. In that case, we are selling large amounts of Ethernet switches for scale-out to those hyperscale enterprises. We suspect that as this evolves into scale-across, there will be even more Ethernet that is decoupled from where the GPUs sit. As for XPUs, it is certain that it is all Ethernet.

Analyst Joshua Buchalter:
Thank you.

Operator:
Please hold for the next question. The next question comes from Christopher Rolland of Susquehanna. Your line is open.

Analyst Christopher Rolland:
Hi, thank you for the question, and congratulations on the contract extension, Hock. My question is about competition, both in networking and in ASICs. You addressed some of this in the last question, but how do you view competition in the ASIC space, particularly from U.S. or Asian suppliers? Or do you think it is diminishing? On the networking side, do you see UALink or PCIe having the opportunity to displace SUE (Scale-Up Ethernet) when it is expected to ramp in 2027? Thank you.

Hock E. Tan, President and Chief Executive Officer:
Thank you for acknowledging SUE (Scale-Up Ethernet). Thank you; I didn't expect that, and I appreciate it. Well, you know I have my biases, to be honest, but it is so obvious that I can't help it, because Ethernet is tried and true. Ethernet is familiar to the engineers and architects designing AI data centers and AI infrastructure for all these hyperscale enterprises. It makes logical sense for them to use it, and they are using it and standardizing on it. As for developing separate, individualized protocols, to be honest I can't imagine why they would bother. Ethernet is right there; it is widely adopted and has proven its scalability. The only issue people talk about may be latency, especially in scale-up, and that's where NVLink comes into play.

Even so, as I pointed out, it's not difficult for us, and we are not the only ones who can do it. In the Ethernet switch space, there are quite a number of other companies that can do it. You just need to tune the switches to achieve very low latency: better than NVLink, better than InfiniBand, easily under 250 nanoseconds. That's what we do. So it's not that hard. Maybe that's why I say this: we have been doing it for the past 25 years, and Ethernet has been around all along. The technology is there, and there is no need to create some new protocol that now has to win acceptance. Ethernet is the direction we are heading, and the competition is fierce, as it is an open standard. So I believe Ethernet is the direction, and certainly when developing accelerators (XPUs) for our customers, all these accelerators are made Ethernet-compatible with the customers' consent, rather than using some peculiar interface that must constantly keep up with increasing bandwidth.

Moreover, I assure you that we have competition, which is one of the reasons hyperscale enterprises favor Ethernet. It is not just us: if for some reason they do not prefer us, they can find alternatives, and we are open to that. That is always a good approach. It is an open standard, and there are other participants in that market, that ecosystem.

Moving on to XPU competition. Yes, you hear about it, we hear about competition, and so on. This is a field where we always see competition, and the only way we can secure our position is by investing more and innovating more in this game than anyone else. We are fortunate to be the first company to create this silicon ASIC XPU model. We are also fortunate to be one of the largest semiconductor IP developers out there, developing the best packaging and designing very low-power building blocks such as serializers/deserializers (SerDes). So we just need to keep investing, and we have indeed done so to outpace competitors in this field. I believe we are doing quite well in this regard at the moment.

Analyst Christopher Rolland:
Very clear. Thank you, Hock.

Hock E. Tan, President and Chief Executive Officer:
Of course.

Operator:
Thank you. We have time for one last question, and the final question comes from Harsh Kumar of Piper Sandler. Your line is open.

Analyst Harsh Kumar:
Hey everyone. Thank you for squeezing me in. Hock, congratulations on all the exciting AI indicators, and thank you for everything you do for Broadcom, and for staying. My question is this: you have three to four existing clients that are ramping up. As the data centers for AI clusters grow larger, differentiation, efficiency, and so on make sense, so the rationale for XPU is valid. Why shouldn't I think that, in the long run, XPU share at these three or four existing clients will be greater than GPU share?

Hock E. Tan, President and Chief Executive Officer:
Yes, it will. That is a logical conclusion. Yes, Harsh, you are right. We are gradually seeing this. As I said, it is a journey, a journey over many years, because it is multi-generational, as these accelerators (XPU) are not static. We are creating multiple versions for each customer, at least two versions, two generations of products. With each new generation of products, they are increasing their consumption and usage of XPU, as they gain confidence, and with improvements in the models, they deploy even more.

So it is a logical trend that among our few customers XPU will continue to grow, and as they deploy successfully and their software stabilizes, the software specifications and libraries on these chips will also stabilize and prove themselves. They will have the confidence to keep devoting an increasingly higher percentage of their compute footprint to their own XPUs, that is certain, and we have seen this as well. That is why I say we are gradually gaining share.

Analyst Harsh Kumar:
Thank you, Hock.

Operator:
Thank you. Now I would like to hand the call back to the head of investor relations, Ji Yoo, for any closing remarks.

Head of Investor Relations Ji Yoo:
Thank you, Shree. This quarter, Broadcom will participate in the Goldman Sachs Communacopia and Technology Conference in San Francisco on Tuesday, September 9, and the JPMorgan U.S. All-Star Conference in London on Tuesday, September 16.

Broadcom currently plans to report its fourth quarter and full-year earnings for fiscal year 2025 after the market closes on Thursday, December 11, 2025. The public webcast of Broadcom’s earnings conference call will take place at 2:00 PM Pacific Time.

This concludes our earnings conference call for today. Thank you all for your participation. Shree, you may now end the meeting.

Operator:
Today’s program is now concluded. Thank you all for your participation. You may now disconnect.







OpenAI forecasts $115 billion business spend on AI rollout by 2029

OpenAI has raised its cash burn forecast for this year through 2029 to a total of $115 billion. The new figure is also $80 billion higher than the company previously projected.

According to a report by The Information, the surge in cash burn for OpenAI comes at a time when it’s ramping up spending to power the artificial intelligence behind its popular ChatGPT chatbot. The tech firm has also become one of the world’s biggest renters of cloud services.

OpenAI plans to develop its own chips and data center facilities

The source revealed that the AI company expects to burn over $8 billion this year. Early in the year, OpenAI had forecast that it would burn only around $1.5 billion.

According to the report, OpenAI doubled its cash burn expectations for 2026 to more than $17 billion, surpassing its previous forecast of $10 billion. The firm also projects a $35 billion cash burn in 2027 and $45 billion in 2028. 

The FT also disclosed on Thursday that the Silicon Valley startup plans to develop its own data center server chips and facilities to power its technology. According to the report, the initiative aims to rein in the tech company's surging operational costs.

The firm relies on substantial computing power to train and run its systems. The company’s CEO, Sam Altman, has also advocated the need for increased computing power to accommodate the growing demand for AI products such as ChatGPT.

Deloitte’s 2025 AI infrastructure Survey revealed that the energy demands of AI are straining traditional power grids. According to the study, 79% of executives anticipate increased power demand through the next decade, with grid stress emerging as a top challenge.

The source added that U.S. semiconductor giant Broadcom will partner with OpenAI to produce the first set of chips and start shipping them next year. OpenAI also allegedly plans to use the chips internally rather than selling them to external clients.

Broadcom’s CEO, Hock Tan, hinted the company had partnered with an undisclosed customer that committed to $10 billion in orders. During a call with analysts, he revealed the firm had secured a fourth customer to boost its custom AI chip division. Tan stated the collaboration with OpenAI has enhanced its growth outlook for fiscal 2026 by generating immediate and substantial demand. 

OpenAI partners with Broadcom to produce chips

OpenAI also partnered with Broadcom and Taiwan Semiconductor Manufacturing Co. (TSMC) nearly a year ago to develop its first in-house chip. The firm was also planning to add AMD chips alongside Nvidia chips to meet its surging infrastructure demands.

OpenAI revealed plans in February to reduce its reliance on Nvidia's chips. The firm said it would finalize the design of the new chip in the coming months and then send it to TSMC for fabrication. OpenAI's initiative also builds on its ambitious plans to scale up its semiconductor production at the Taiwanese company next year.

According to the report, OpenAI hopes to use the new chips to strengthen its negotiating leverage with other chip suppliers, including Nvidia. The company’s in-house team, led by Richard Ho, will design the chip to produce advanced processors with broader capabilities with each new iteration.

OpenAI collaborated with Oracle in July to launch a 4.5-gigawatt data center. The initiative also complements the firm’s $500 billion Stargate project, including investments from Japanese firm SoftBank Group. The tech giant has also collaborated with Google Cloud to supply computing capacity.


Need to Rank in AI Overviews? This SEO Agency Specializes in Them

Search is changing. AI-generated answers now appear in nearly half of all queries, creating new conditions for brand visibility. This affects how people find and trust brands online. Agencies with a narrow focus and unique capabilities have stepped in to meet this need. Among them, Growing Search manages both classical and AI-driven search with all work in-house. Their approach uses internal tools to track, analyze, and improve citations and sentiment across search platforms powered by artificial intelligence, such as ChatGPT and Perplexity. This article examines the tactics, technology, and reported impact of Growing Search in the context of this new search environment.

 

AI Overviews: A New Standard for Brand Visibility

Recent data shows that AI-generated summaries now appear in over 42% of all search results as of 2025. These AI Overviews usually display above regular listings and paid ads. The impact is straightforward. When a brand appears in these boxes, it commands user attention at the first step of a search. The positioning affects both user awareness and perceived authority.

Users often interact with AI-generated content before considering the rest of the results page. As a result, the context and credibility of source content have become essential. Brands must be discoverable and present information in a way that artificial intelligence can interpret as both trustworthy and relevant.

Unique Challenges in AI Search

AI search systems, including those behind Gemini, ChatGPT, and Perplexity, work differently from traditional engines. They scan many sources but look for signals beyond repeating keywords. They prefer content that is context-aware, semantically precise, and written by authorities in the subject. This approach changes what is needed to earn citations.

Old methods, which relied heavily on repeating high-traffic keywords, now hold less value. Instead, there are new requirements:

  •         Content must match the detailed intent of the user’s query, not only broad or superficial keyword strings.
  •         The writing must signal expertise and context that AI can detect and validate.
  •         Brand mentions and topic clusters, called entities, must be visible and consistent to help artificial intelligence select credible sources.

Citations in AI answers have grown more valuable. When users see these automated answers, the sources cited enjoy increased trust, even if most people do not click through to the original page. This influence shapes user decisions and can drive tangible results, such as more inquiries or conversions.

How Growing Search Approaches AI-Driven SEO

Growing Search focuses on both established and AI-powered search platforms. They manage audits, keyword research, content structuring, and link building. Their added strength is tracking brand mentions and sentiment within artificial intelligence tools, not only on classic search engines.

With everything handled in-house, the agency ensures accuracy and keeps processes efficient. They do not depend on third parties to handle data or make changes. This direct control is reported to result in faster responses and stronger data privacy.

Proprietary Tools: StakeView and BrandLens

Growing Search’s approach centers on two internal analytics platforms: StakeView and BrandLens.

StakeView gives brands ongoing analysis of their organic market share against competitors. It shows results for both standard search engines and AI answer engines. This lets clients see shifts in their presence and make quick decisions if needed.

BrandLens tracks sentiment and citation occurrences for a brand in AI-powered platforms. The software measures both the volume of mentions and the tone. It shows whether a brand is being named as an authority, described in a neutral voice, or mentioned with negative intent. This feedback is vital because search engines and users both react to subtle shifts in brand reputation triggered by AI answer summaries.

Industry commentary points to this kind of tracking as a necessity. As AI Overviews become more common, exposure in these features offers advantages even for those not at the top of traditional ranking pages. If a brand signals high expertise and authority, it may be named by the artificial intelligence, even without ranking high in the organic results.

In-House Analytics and Direct Control

Running all operations and tooling internally gives Growing Search certain benefits. The company can update its analytics to adapt to changes in AI algorithms quickly. When AI search behavior shifts, there are no delays caused by waiting for outside vendors. This keeps its clients aligned with the latest ranking practices.

By owning all data pipelines and analytics platforms, Growing Search also reduces risk related to data privacy and quality. Insights from StakeView and BrandLens can be passed quickly to the consulting or editorial teams.

Recent industry studies confirm that agencies using custom tools report better tracking of AI citations and sentiment. This allows campaigns to shift as needed, sometimes before competitor actions or algorithmic updates would otherwise affect visibility.

Performance in Traditional and AI Search

Classical search engines still send many users to brand websites, but new data shows accelerated gains for those featured in AI answers. Quickly summarized AI Overviews cater to users looking for direct, authoritative responses. Brands mentioned or cited in these summaries attract more inquiries and an improved reputation.

Examples from ongoing research:

  •         Sites displayed in AI Overviews see upwards of 30% more brand mentions and a reported 20% increase in positive sentiment from users.
  •         Market share in these answer engines may be higher for brands with well-signaled trust markers, even over established competitors with better traditional ranks.
  •         Tools that track live citations and sentiment enable brands to respond to shifts within days or even hours, rather than weeks, as was often the case with older systems.

Why Sentiment and Citations Now Matter Most

AI-driven search engines have updated how they measure and rank authority. Recognition of entities and positive context is now a ranking factor. More weight is put on the sentiment AI models detect within content, and the strength of authority signals a brand projects.

Unlike older analytic tools that only tracked clicks or position in rank, BrandLens records both citation frequency and how the brand is discussed in AI answers. This approach tracks fine details. For example, a drop in positive mentions can be seen quickly. The client can respond immediately by adjusting how their brand or content appears. This rapid response helps manage risk during sensitive launches, events, or crises.

Evidence and Reported Results

Some results published by brands using Growing Search’s proprietary toolset show the effect of this detail-oriented approach:

  •         Brand mentions in AI Overviews can rise by 18% to 35% within six months once targeted content and authority adjustments are implemented.
  •         Early alerts show competitor names or products entering AI answer boxes before they reach traditional search rankings. This helps preemptively adjust marketing efforts.
  •         When sentiment in AI answers shifts negative or neutral, corrective steps can be applied and tracked for impact, supporting brand reputation at critical times.

Several reports from the first part of 2025 mark this approach as outperforming typical search optimization alone. Agencies not focused on tracking live AI mentions and sentiment are reported to miss emerging opportunities and suffer a loss of market share, especially now that over four in ten initial search interactions occur within AI-powered boxes.

What Brands Should Do Now

Brands that want reliable performance have clear steps, as seen from both case data and expert commentary:

  •         Use in-house analytics to monitor live share across regular and AI-driven search. Delayed or sampled data is less useful once AI results are updated frequently.
  •         Review and adjust internal content and authority markers. Ensure that expert signals and entities are consistently projected.
  •         Invest in tools that provide both quantitative insights (such as mention count) and qualitative feedback (such as sentiment) so that reputational risk can be managed directly.
  •         Focus on direct execution and internal expertise. Owning the data and workflow allows for prompt action as algorithms and platforms update their requirements.

Summary of Industry Findings

Agencies working with both search systems and artificial intelligence tools, supported by exclusive in-house analytics, are now driving the most measurable gains for brands. Platforms like StakeView and BrandLens provide timely, specific feedback. This helps manage brand presence in new AI Overviews as well as classic organic search. In-house execution remains essential for keeping pace with ongoing shifts in how answers are created and displayed.

Growing Search is an example of this approach. All work, from research to technical implementation, is handled by its own teams. The result is direct feedback, faster corrective cycles, and verifiable improvements in visibility and reputation.

As artificial intelligence systems mediate an increasing share of user discovery, brands need to focus on facts, measurement, and timely action. Agencies prepared for this with the right expertise and technology will keep their clients positioned at key points in the search journey.



Source link

Continue Reading

Trending