

OpenAI Weighs Letting Other Companies Tap Its Data Centers



OpenAI could someday let other businesses tap into the data centers it builds for artificial intelligence (AI).

That’s according to a report Wednesday (Aug. 21) from Bloomberg News, citing an interview with OpenAI Chief Financial Officer Sarah Friar.

Such a system would be loosely based on Amazon’s practice of renting spare cloud computing capacity to other businesses, Friar said, adding that OpenAI is not “actively looking” at a similar arrangement now as it focuses on boosting computing capacity for its own operations.

“I do think about it as a business down the line, for sure,” Friar added.

After years of building expertise in designing and establishing data centers, the company now sees a way to profit from that skill and rely less on third-party vendors.

“If all we do is buy from others, all we’re doing is giving them our IP because they’re learning how to build AI infrastructure,” Friar said.

After years of turning to partners like Microsoft to fund its data center projects, the company is now seeing banks and private equity groups “come to the table” with debt financing to bolster its infrastructure work, Friar said.

“That’s the next path we’re going down,” Friar said. Beyond that, she added, the company is “trying to be thoughtful” about whether there are “other interesting, novel ways we could do that beyond debt.”

OpenAI is unprofitable, which hinders its ability to build data centers without outside investment, though the company has enjoyed revenue growth on the strength of demand for ChatGPT. Friar said the company generated $1 billion in revenue in July, the first time it has done so.

Also Wednesday, Friar told CNBC that she believed the AI boom was just beginning, after news reports suggested the industry was in a bubble.

“It’s more like the railroads or the buildout of electricity than anything I’ve seen,” Friar said. “The internet, it turns out in hindsight, was actually a relatively capex-light buildout. I think we are just getting started.”

Her comments came after a drop in tech stocks, driven in part by concerns that the AI sector is overhyped. Declines in high-profile tech stocks like Nvidia, Arm and Palantir were fueled by a new study from researchers at MIT, which found that most organizations are getting “zero returns” on their investments in the generative AI space.

OpenAI CEO Sam Altman said last Friday that the AI market is in a bubble, while adding that he thought the industry was nonetheless still strong.




1 Brilliant Artificial Intelligence (AI) Stock Down 30% From Its All-Time High That’s a No-Brainer Buy



ASML is one of the world’s most critical companies.

Few companies’ products are as critical to the modern world’s technological infrastructure as those made by ASML. Without the chipmaking equipment the Netherlands-based manufacturer provides, much of the world’s most innovative technology wouldn’t be possible. That makes it one of the most important companies in the world, even if many people have never heard of it.

Over the long term, ASML has been a profitable investment, but the stock has struggled recently — it’s down by more than 30% from the all-time high it touched in July 2024. I believe this pullback presents an excellent opportunity to buy shares of this key supporting player for the AI sector and other advanced technologies.  


ASML has been a victim of government policies around the globe

ASML makes lithography machines, which trace out the incredibly fine patterns of the circuits on silicon chips. Its top-of-the-line extreme ultraviolet (EUV) lithography machines are the only ones capable of printing the newest, most powerful, and most feature-dense chips; no other company has managed to build an EUV machine thus far. The technology is also highly regulated: because Western nations don’t want it reaching China, the Dutch and U.S. governments have placed strict restrictions on the types of machines ASML can export there. In fact, even tighter regulations put in place last year prevented ASML from servicing some machines that it had previously been allowed to sell to Chinese companies.

As a result of these export bans, ASML’s sales to one of the world’s largest economies have been curtailed. This led to investors bidding the stock down in 2024 — a drop it still hasn’t recovered from.

2025 has been a relatively strong year for ASML’s business, but tariffs have made it challenging to forecast where matters are headed. Management has been cautious with its guidance for the year as it is unsure of how tariffs will affect the business. In its Q2 report, management stated that tariffs had had a less significant impact in the quarter than initially projected. As a result, ASML generated 7.7 billion euros in sales, which was at the high end of its 7.2 billion to 7.7 billion euro guidance range. For Q3, the company says it expects sales of between 7.4 billion and 7.9 billion euros, but if tariffs have a significantly negative impact on the economic picture, it could come up short.

Given all the planned spending on new chip production capacity to meet AI-related demand, investors would be wise to assume that ASML will benefit. However, the company is staying conservative in its guidance even as it prepares for growth. This conservative stance has caused the market to remain fairly bearish on ASML’s outlook even as all signs point toward a strong 2026.

This makes ASML a buying opportunity at its current stock price.

ASML’s valuation hasn’t been this low since 2023

Relative to the past five years, ASML trades at historically low trailing and forward price-to-earnings (P/E) ratios.

ASML P/E ratio data by YCharts.
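For readers who want the mechanics behind the metric, here is a minimal sketch of how trailing and forward P/E are computed. The numbers are hypothetical, chosen for illustration; they are not ASML’s actual share price or earnings.

```python
# Trailing vs. forward price-to-earnings (P/E), with hypothetical
# numbers for illustration only (not ASML's actual figures).

share_price = 700.00    # hypothetical share price, in dollars
trailing_eps = 20.00    # hypothetical earnings per share, past 12 months
forward_eps = 25.00     # hypothetical analyst EPS estimate, next 12 months

trailing_pe = share_price / trailing_eps  # price relative to realized earnings
forward_pe = share_price / forward_eps    # price relative to expected earnings

print(f"Trailing P/E: {trailing_pe:.1f}")  # Trailing P/E: 35.0
print(f"Forward P/E:  {forward_pe:.1f}")   # Forward P/E:  28.0
```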

With expectations for ASML at low levels, investors shouldn’t be surprised if its valuation rises sometime over the next year, particularly if management’s commentary becomes more bullish as demand increases in line with chipmakers’ efforts to expand their production capacity.

This could lift ASML back into its more typical valuation range, with a P/E in the mid-30s, which is perfectly acceptable given its growth rate, especially considering that it has no direct competition.

ASML is a great stock to buy now and hold for several years or longer, allowing you to reap the benefits of chipmakers increasing their production capacity. Just because the market isn’t that bullish on ASML now, that doesn’t mean it won’t be in the future. This rare moment offers an ideal opportunity to load up on shares of a stock that I believe is one of the best values in the market right now.




AI’s not ‘reasoning’ at all – how this team debunked the industry hype





ZDNET’s key takeaways

  • We don’t entirely know how AI works, so we ascribe magical powers to it.
  • Claims that Gen AI can reason are a “brittle mirage.”
  • We should always be specific about what AI is doing and avoid hyperbole.

Ever since artificial intelligence programs began impressing the general public, AI scholars have been making claims for the technology’s deeper significance, even asserting the prospect of human-like understanding. 

Scholars wax philosophical because even the scientists who created AI models such as OpenAI’s GPT-5 don’t really understand how the programs work — not entirely. 


AI’s ‘black box’ and the hype machine

AI programs such as LLMs are infamously “black boxes.” They achieve a lot that is impressive, but for the most part, we cannot observe all that they are doing when they take an input, such as a prompt you type, and they produce an output, such as the college term paper you requested or the suggestion for your new novel.

To fill that gap, scientists have applied colloquial terms such as “reasoning” to describe the way the programs perform. In the process, they have either implied or outright asserted that the programs can “think,” “reason,” and “know” in the way that humans do.

In the past two years, the rhetoric has overtaken the science as AI executives have used hyperbole to twist what were simple engineering achievements. 


OpenAI’s press release last September announcing its o1 reasoning model stated: “Similar to how a human may think for a long time before responding to a difficult question, o1 uses a chain of thought when attempting to solve a problem,” so that “o1 learns to hone its chain of thought and refine the strategies it uses.”

It was a short step from those anthropomorphizing assertions to all sorts of wild claims, such as OpenAI CEO Sam Altman’s comment, in June, that “We are past the event horizon; the takeoff has started. Humanity is close to building digital superintelligence.”

(Disclosure: Ziff Davis, ZDNET’s parent company, filed an April 2025 lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

The backlash from AI researchers

There is a backlash building, however, from AI scientists who are debunking the assumptions of human-like intelligence via rigorous technical scrutiny. 

In a paper published last month on the arXiv pre-print server and not yet reviewed by peers, the authors — Chengshuai Zhao and colleagues at Arizona State University — took apart the reasoning claims through a simple experiment. What they concluded is that “chain-of-thought reasoning is a brittle mirage,” and it is “not a mechanism for genuine logical inference but rather a sophisticated form of structured pattern matching.” 


The term “chain of thought” (CoT) is commonly used to describe the verbose stream of output that you see when a large reasoning model, such as OpenAI’s o1 or DeepSeek’s R1, shows you how it works through a problem before giving the final answer.

That stream of statements isn’t as deep or meaningful as it seems, write Zhao and team. “The empirical successes of CoT reasoning lead to the perception that large language models (LLMs) engage in deliberate inferential processes,” they write. 

But, “An expanding body of analyses reveals that LLMs tend to rely on surface-level semantics and clues rather than logical procedures,” they explain. “LLMs construct superficial chains of logic based on learned token associations, often failing on tasks that deviate from commonsense heuristics or familiar templates.”

The term “chains of tokens” is a common way to refer to a series of elements input to an LLM, such as words or characters. 

Testing what LLMs actually do

To test the hypothesis that LLMs are merely pattern-matching rather than truly reasoning, Zhao and team took GPT-2, OpenAI’s older, open-source LLM from 2019, and trained it from scratch on a controlled corpus, an approach they call “data alchemy.”

The “data alchemy” training setup. Image: Arizona State University.

The model was trained from the beginning to manipulate only the 26 letters of the English alphabet, “A, B, C,” and so on. That simplified corpus lets Zhao and team test the LLM with a set of very simple tasks, all of which involve manipulating sequences of letters, such as shifting every letter of a word a certain number of places so that “APPLE” becomes “EAPPL.”


Using this limited vocabulary and task set, Zhao and team vary which tasks the language model is exposed to in its training data versus which tasks it sees only when the finished model is tested, such as “shift each element by 13 places.” It’s a test of whether the language model can reason its way through tasks it has never seen before.
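To make the setup concrete, here is a minimal sketch of this kind of train/test split. The function names and the exact choice of which shifts are held out are assumptions for illustration; the paper’s actual “data alchemy” pipeline is more involved.

```python
import random
import string

ALPHABET = string.ascii_uppercase  # the 26-letter vocabulary

def cyclic_shift(word: str, places: int) -> str:
    """Rotate a word's letters right by `places`, so that
    cyclic_shift("APPLE", 1) == "EAPPL"."""
    places %= len(word)
    return word[-places:] + word[:-places] if places else word

def make_example(places: int, length: int = 5) -> tuple[str, str]:
    """Build one (input, target) pair for the shift-by-`places` task."""
    word = "".join(random.choices(ALPHABET, k=length))
    return word, cyclic_shift(word, places)

# Hypothetical split: the model trains on some shift amounts but is
# tested on a shift it has never seen ("shift by 13 places"), probing
# whether it generalizes or merely pattern-matches its training data.
train_shifts = [1, 2, 3, 4]   # seen during training (assumed)
held_out_shifts = [13]        # seen only at test time (assumed)

train_set = [make_example(s) for s in train_shifts for _ in range(1_000)]
test_set = [make_example(s) for s in held_out_shifts for _ in range(100)]

print(cyclic_shift("APPLE", 1))  # EAPPL
```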

They found that when tasks were not in the training data, the language model failed to perform them correctly using a chain of thought. The model tried to apply the tasks that were in its training data, and its “reasoning” sounded good, but the answers it generated were wrong.

As Zhao and team put it, “LLMs try to generalize the reasoning paths based on the most similar ones […] seen during training, which leads to correct reasoning paths, yet incorrect answers.”

Specificity to counter the hype

The authors draw some lessons. 

First: “Guard against over-reliance and false confidence,” they advise, because “the ability of LLMs to produce ‘fluent nonsense’ — plausible but logically flawed reasoning chains — can be more deceptive and damaging than an outright incorrect answer, as it projects a false aura of dependability.”

Also, try out tasks that are explicitly not likely to have been contained in the training data so that the AI model will be stress-tested. 


What’s important about Zhao and team’s approach is that it cuts through the hyperbole and takes us back to the basics of understanding what exactly AI is doing. 

When the original research on chain-of-thought, “Chain-of-Thought Prompting Elicits Reasoning in Large Language Models,” was performed by Jason Wei and colleagues on the Google Brain team in 2022 (research that has since been cited more than 10,000 times), the authors made no claims about actual reasoning.

Wei and team noticed that prompting an LLM to list the steps in a problem, such as an arithmetic word problem (“If there are 10 cookies in the jar, and Sally takes out one, how many are left in the jar?”) tended to lead to more correct solutions, on average. 
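In practice, the technique amounts to seeding the prompt with a worked example whose answer spells out its intermediate steps. Here is a minimal sketch of the difference; the exemplar text is illustrative, not taken from Wei and team’s paper.

```python
# Standard prompting: the few-shot exemplar shows only the final answer.
standard_prompt = (
    "Q: If there are 10 cookies in the jar, and Sally takes out one, "
    "how many are left in the jar?\n"
    "A: The answer is 9.\n\n"
    "Q: A tray holds 12 muffins and 5 are eaten. How many remain?\n"
    "A:"
)

# Chain-of-thought prompting: the exemplar's answer walks through its
# intermediate steps, nudging the model to do the same for the new question.
cot_prompt = (
    "Q: If there are 10 cookies in the jar, and Sally takes out one, "
    "how many are left in the jar?\n"
    "A: The jar starts with 10 cookies. Sally takes out 1. "
    "10 - 1 = 9. The answer is 9.\n\n"
    "Q: A tray holds 12 muffins and 5 are eaten. How many remain?\n"
    "A:"
)
```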

An example of chain-of-thought prompting. Image: Google Brain.

They were careful not to assert human-like abilities. “Although chain of thought emulates the thought processes of human reasoners, this does not answer whether the neural network is actually ‘reasoning,’ which we leave as an open question,” they wrote at the time. 


Since then, Altman’s claims and various press releases from AI promoters have increasingly emphasized the human-like nature of reasoning using casual and sloppy rhetoric that doesn’t respect Wei and team’s purely technical description. 

Zhao and team’s work is a reminder that we should be specific, not superstitious, about what the machine is really doing, and avoid hyperbolic claims. 






‘Existential crisis’: how Google’s shift to AI has upended the online news model



When the chief executive of the Financial Times suggested at a media conference this summer that rival publishers might consider a “Nato for news” alliance to strengthen negotiations with artificial intelligence companies, there was a ripple of chuckles from attendees.

Yet Jon Slade’s revelation that his website had seen a “pretty sudden and sustained” decline of 25% to 30% in traffic to its articles from readers arriving via internet search engines quickly made clear the serious nature of the threat the AI revolution poses.

Queries typed into sites such as Google, which accounts for more than 90% of the search market, have been central to online journalism since its inception, with news providers optimising headlines and content to ensure a top ranking and revenue-raising clicks.

But now Google’s AI Overviews, which sit at the top of the results page and summarise answers, often negating the need to follow links to content, as well as its recently launched AI Mode tab, which answers queries in a chatbot format, have prompted fears of a “Google zero” future in which traffic referrals dry up.

“This is the single biggest change to search I have seen in decades,” says one senior editorial tech executive. “Google has always felt like it would always be there for publishers. Now the one constant in digital publishing is undergoing a transformation that may completely change the landscape.”

Last week, the owner of the Daily Mail revealed in its submission to the Competition and Markets Authority’s consultation on Google’s search services that AI Overviews have fuelled a drop in click-through traffic to its sites by as much as 89%.

DMG Media and other leading news organisations, including Guardian Media Group and the magazine trade body the Periodical Publishers Association (PPA), have urged the competition watchdog to make Google more transparent and provide traffic statistics from AI Overview and AI Mode to publishers as part of its investigation into the tech firm’s search dominance.

Publishers – already under financial pressure from soaring costs, falling advertising revenues, the decline of print and the wider trend of readers turning away from news – argue that they are effectively being forced by Google to either accept deals, including on how content is used in AI Overview and AI Mode, or “drop out of all search results”, according to several sources.

On top of the threat to funding, there are concerns about AI’s impact on accuracy. While Google has improved the quality of its overviews since earlier iterations advised users to eat rocks and add glue to pizza, problems with “hallucinations” – where AI presents incorrect or fabricated information as fact – remain, as do issues with in-built bias, when a computer rather than a human decides how to summarise sources.


In January, Apple promised to update an AI feature that issued untrue summaries of BBC news alerts, stamped with the corporation’s logo, on its latest iPhones; alerts incorrectly claimed that the man accused of killing a US insurance boss had shot himself and that tennis star Rafael Nadal had come out as gay.

In a blogpost last month, Liz Reid, Google’s head of search, said the introduction of AI in search was “driving more queries and quality clicks”.

“This data is in contrast to third-party reports that inaccurately suggest dramatic declines in aggregate traffic,” she said. “[These reports] are often based on flawed methodologies, isolated examples, or traffic changes that occurred prior to the rollout of AI features in search.”

However, she also acknowledged that while overall traffic to websites is “relatively stable”, the “vast” web means user trends are shifting traffic to different sites, “resulting in decreased traffic to some sites and increased traffic to others”.

In recent years, Google Discover, which feeds users articles and videos tailored to them based on their past online activity, has replaced search as the main source of click-throughs to content.

However, David Buttle, founder of the consultancy DJB Strategies, says the service, which is also tied to publishers’ overall search deals, does not deliver the quality traffic that most publishers need to drive their long-term strategies.

“Google Discover is of zero product importance to Google at all,” he says. “It allows Google to funnel more traffic to publishers as traffic from search declines … Publishers have no choice but to agree or lose their organic search. It also tends to reward clickbaity type content. It pulls in the opposite direction to the kind of relationship publishers want.”

Meanwhile, publishers are fighting a wider battle with AI companies seeking to plunder their content to train their large language models.

The creative industry is intensively lobbying the government to ensure that proposed legislation does not allow AI firms to use copyright-protected work without permission, a move that would stop the “value being scraped” out of the £125bn sector.


The Make It Fair campaign in February focused on the threat to the creative industries from generative AI. Photograph: Geoffrey Swaine/Rex

Some publishers have struck bilateral licensing deals with AI companies – such as the FT, the German media group Axel Springer, the Guardian and the Nordic publisher Schibsted with the ChatGPT maker OpenAI – while others such as the BBC have taken action against AI companies alleging copyright theft.

“It is a two-pronged attack on publishers, a sort of pincer movement,” says Chris Duncan, a former News UK and Bauer Media senior executive who now runs a media consultancy, Seedelta. “Content is disappearing into AI products without serious remuneration, while AI summaries are being integrated into products so there is no need to click through, effectively taking money from both ends. It is an existential crisis.”

While publishers are pursuing action on multiple fronts – from dealmaking and legal action to regulatory lobbying – they are also implementing AI tools into newsrooms and creating their own query-answering tools. The Washington Post and the FT have launched their own AI-powered chatbots, Climate Answers and Ask FT, that source results only from their own content.
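Neither outlet has published its implementation, but a common pattern for such tools is retrieval-augmented generation: search the publisher’s own archive, then have a language model answer strictly from the retrieved articles. A minimal sketch of that pattern follows; every function name here (search_archive, call_llm) is a placeholder assumption, not the actual Ask FT or Climate Answers internals.

```python
# Hypothetical sketch of a publisher chatbot that answers only from the
# publisher's own archive. The helper functions are placeholders.

def search_archive(query: str, k: int = 5) -> list[dict]:
    """Placeholder: return the k most relevant archive articles,
    e.g. from a vector index over the publisher's own content."""
    raise NotImplementedError("wire up to your own search backend")

def call_llm(prompt: str) -> str:
    """Placeholder: call whichever language model the publisher licenses."""
    raise NotImplementedError("wire up to your own model")

def answer_from_archive(query: str) -> str:
    articles = search_archive(query)
    context = "\n\n".join(f"[{a['headline']}] {a['body']}" for a in articles)
    prompt = (
        "Answer the question using ONLY the articles below. If they do "
        "not contain the answer, say you cannot answer.\n\n"
        f"{context}\n\nQuestion: {query}\nAnswer:"
    )
    return call_llm(prompt)
```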

Christoph Zimmer, chief product officer at Germany’s Der Spiegel, says that while its traffic is currently stable he expects referrals from all platforms to decline.

“This is a continuation of a longstanding trend,” he says. “However, it hits even harder for brands that have not focused on building direct relationships and subscriptions in recent years, and have instead relied on platform reach and sometimes generic content.

“What has always been true remains true – a focus on quality and distinct content, and having a human in charge rather than just in the loop.”

One publishing industry executive says the battle to strike deals to help train AI models to aggregate and summarise stories is rapidly being superseded by advances that are seeing models interpret live news.

“The first focus has been on licensing deals for training AI, to ‘speak English’, but that is becoming less important over time,” says the executive. “It is becoming about delivering the news, and for that you need accurate live sources. That is a potentially really lucrative market which publishers are thinking about negotiating next.”

Saj Merali, chief executive of the PPA, says a balance needs to be struck between tech-driven changes in consumers’ digital habits and fair value for trusted news.

“What doesn’t seem to be at the heart of this is what consumers need,” she says. “AI needs trustworthy content. There is a shift in how consumers want to see information, but they have to have faith in what they are reading.

“The industry has been very resilient through quite major digital and technological changes, but it is really important we make sure there is a route to sustain models. At the moment the AI and tech community are showing no signs of supporting publisher revenue.”


