

A.I. is making your writing worse—but not in the way you think.




The other week, I was reading an email I’d written when a strange notion occurred to me. Upon seeing a small typo, I hesitated for a moment before correcting it. Would it perhaps be better, an unsettling new voice suddenly whispered, to leave it in?

This is a thought that would’ve appalled me a year ago. As a professional writer, I have long prided myself on impeccable grammar, judiciously wielded punctuation, and (at times indulgent) verbosity. But in the age of A.I. paranoia—when the comment sections of social media posts and online articles are littered with accusations decrying the dehumanizing warp of ChatGPT—writing that appears too polished, too bedecked with literary devices, or that betrays a dubious affinity for the word delve now arouses suspicion.

In personal communications, I began to experience a newfound self-consciousness. Was that em dash really necessary? Was the voice perhaps a little impersonal? It went the other way too; any writing I encountered digitally was now subject to the same interrogation. Was there something about it that felt just a little bit uncanny? A little bit … ChatGPT?

And I’m not alone. A collective paranoia has people purging so-called A.I. tells from their prose, even if they penned it entirely with their human brains. Along with the now-infamous em dash, many are renouncing words for which ChatGPT is known to have a mysterious penchant, such as delve, nestled, boast, and meticulous. The structural device “It’s not just X, it’s Y”—now considered a flashing A.I. warning sign—is being surrendered en masse.

Describing his oversensitive “AI radar,” University of Illinois English professor John Gallagher said he had found himself convinced an academic article had been penned by A.I.—before realizing that it was published in 2019, predating the ChatGPT boom. Across different media, senses sharpened by TikToks detailing the latest “tells,” A.I. inquisitors are ready to pounce. “Writing online in 2025 feels like performing keyhole surgery while people scream ‘ROBOT! ROBOT! ROBOT!’ into your ear,” writer and coder Jack McNamara wrote recently of the phenomenon on Medium.

A growing number of people are forgoing formal writing conventions altogether for an unadulterated stream of consciousness—anything to preserve and insist upon their humanity. But as A.I. continues to improve, are we destined to lose this linguistic arms race? And are we at risk of sacrificing good writing along the way?

An unlikely consequence of all this has been the elevation of the once embarrassing typo. Until recently considered a sign of carelessness or even stupidity, the error is now seen by some to be the indelible fingerprint of a human wordsmith. Writer and entrepreneur Thomas Smith told me that although he refuses to sacrifice the em dash, these days he is inclined to leave typos in his Medium posts because he believes they’re a reassuring sign of human authorship. His audience seems to agree. Readers used to email Smith to point out minor typos. They’ve begun to add “But maybe you want to keep it in,” he says with a laugh.

Content strategist Larissa McCarty, whose role involves ghostwriting on LinkedIn, has undergone a similar shift in thinking. She told me she’s now heartened to encounter a light speckling of human error in copy and would go so far as to advise professionals to leave small mistakes in public posts to emphasize their authenticity.

Smith, who is also a part-time copywriter, says brands have requested he jettison all em dashes from his work, due to fears that anything that “looks like A.I.” could result in a downgrade from Google’s opaque SEO ranking system. Meanwhile, lists of “ChatGPT words to avoid” circulate in web publishing communities, and advice on how not to sound like a chatbot abounds on LinkedIn, TikTok, and Reddit.

Worryingly, this can have the effect of undermining otherwise strong writing. Some of the supposed tells are simply well-established writing conventions that ChatGPT was trained to ape: things like “lists of three examples” or the use of transition words like however.

“I know that there are techniques and methods to make your writing more engaging, but that’s also what ChatGPT uses,” said McCarty. “When I’m typing something up, I’m like, Oh, that’s a good idea for how to word that. But then, in the back of my mind, I’m like, It’s almost too good.” She now avoids writing her own metaphors, for example, because she thinks that this is something ChatGPT excels at.

An inherent suspicion of good writing is probably anathema to producing good writing. Although reflecting more deeply on how we write isn’t necessarily bad, this A.I.-fueled self-censorship has the potential to be corrosive. As with much of the ChatGPT fallout, students were among the first to encounter this A.I. paradox, in which polished, proficient prose is demanded but can also raise suspicion, whether it was produced with the help of a chatbot or not.

A recent academic paper from a Hult International Business School researcher on this inherent tension noted that Montclair State University had instructed faculty to view good grammar as suspect. “A.I.-written essays tend to be atypically correct in grammar, usage, and editing,” the school had advised staff.

This all stems from a broader stigma around the use of A.I. For all the industry hype whirling around the technology, a growing number of academic papers highlight the phenomenon of “A.I. shaming.”

A 2025 Duke University study found that professionals believed colleagues would consider them lazier and less competent if they used A.I., making them unlikely to disclose its use. The study also found that this anticipated social penalty was real.

This is because, in contrast to older workplace productivity tools like Excel, generative A.I. isn’t seen as requiring specialized skills, Jessica Reif, the lead author of the study and a Ph.D. candidate at Duke’s Fuqua School of Business, told me.

But even as we resist it, there are signs that our efforts to outrun ChatGPT’s incursion are doomed. While we might pluck “ChatGPT words” from our prose, there is evidence that A.I.’s empty lexicon is lodging itself somewhere more intimate: in our minds. A 2025 study from researchers at the Max Planck Institute for Human Development determined that podcasters and YouTubers have lately been parroting A.I.’s favorite words, including delve. This trend took off after the launch of ChatGPT and holds even for spontaneous, unscripted conversations.

What’s more, our tool bag of A.I. tells is likely to be only “transiently useful,” says Daphne Ippolito, an assistant computer science professor at Carnegie Mellon University. “Companies are constantly revising the recipes for their training data, so all of these trends are going to change,” she said.

Even as A.I. promises to improve, not everyone is fearful of the future. Smith has enjoyed seeing fellow writers embrace more personalized, stream-of-consciousness-style prose in recent months, with some even joking they can now get away without rigorous editing.

The imperatives of SEO are what helped popularize a slick and impersonal style in the first place, Smith points out. “Certainly, algorithms always elevated that kind of stuff.” He thinks the future will be about “trying to share ideas rather than worrying about form.”

And rather than undiscerningly scouring ChatGPT tics from our prose, we can use them as cues to introspect. Gallagher, the English professor, wrote that he has tried to reduce his reliance on lists, not just to avoid sounding like ChatGPT “but also to be diligent about my word choice.”

As for me, I couldn’t quite bring myself to leave a typo in that email. But the next time it happens organically, I don’t think it’ll bother me as much as it once would have.







1 Brilliant Artificial Intelligence (AI) Stock Down 30% From Its All-Time High That’s a No-Brainer Buy



ASML is one of the world’s most critical companies.

Few companies’ products are as critical to the modern world’s technological infrastructure as those made by ASML. Without the chipmaking equipment the Netherlands-based manufacturer provides, much of the world’s most innovative technology wouldn’t be possible. That makes it one of the most important companies in the world, even if many people have never heard of it.

Over the long term, ASML has been a profitable investment, but the stock has struggled recently — it’s down by more than 30% from the all-time high it touched in July 2024. I believe this pullback presents an excellent opportunity to buy shares of this key supporting player for the AI sector and other advanced technologies.  


ASML has been a victim of government policies around the globe

ASML makes lithography machines, which trace out the incredibly fine patterns of the circuits on silicon chips. Its top-of-the-line extreme ultraviolet (EUV) lithography machines are the only ones capable of printing the newest, most powerful, and most feature-dense chips. No other companies have been able to make EUV machines thus far. They are also highly regulated, as Western nations don’t want this technology going to China, so the Dutch and U.S. governments have put strict restrictions on the types of machines ASML can export to China or its allies. In fact, even tighter new regulations were put in place last year that prevented ASML from servicing some machines that it previously was allowed to sell to Chinese companies.

As a result of these export bans, ASML’s sales to one of the world’s largest economies have been curtailed. This led to investors bidding the stock down in 2024 — a drop it still hasn’t recovered from.

2025 has been a relatively strong year for ASML’s business, but tariffs have made it challenging to forecast where matters are headed. Management has been cautious with its guidance for the year as it is unsure of how tariffs will affect the business. In its Q2 report, management stated that tariffs had had a less significant impact in the quarter than initially projected. As a result, ASML generated 7.7 billion euros in sales, which was at the high end of its 7.2 billion to 7.7 billion euro guidance range. For Q3, the company says it expects sales of between 7.4 billion and 7.9 billion euros, but if tariffs have a significantly negative impact on the economic picture, it could come up short.

Given all the planned spending on new chip production capacity to meet AI-related demand, investors would be wise to assume that ASML will benefit. However, the company is staying conservative in its guidance even as it prepares for growth. This conservative stance has caused the market to remain fairly bearish on ASML’s outlook even as all signs point toward a strong 2026.

This makes ASML a buying opportunity at its current stock price.

ASML’s valuation hasn’t been this low since 2023

Relative to the past five years, ASML trades at historically low trailing and forward price-to-earnings (P/E) ratios.

Chart: ASML P/E ratio. Data by YCharts.

With expectations for ASML at low levels, investors shouldn’t be surprised if its valuation rises sometime over the next year, particularly if management’s commentary becomes more bullish as demand increases in line with chipmakers’ efforts to expand their production capacity.

This could lift ASML back into its more normal valuation range, with a P/E in the mid-30s, a level that is reasonable given its growth rate and the fact that it has no direct competition.

ASML is a great stock to buy now and hold for several years or longer, allowing you to reap the benefits of chipmakers increasing their production capacity. Just because the market isn’t that bullish on ASML now, that doesn’t mean it won’t be in the future. This rare moment offers an ideal opportunity to load up on shares of a stock that I believe is one of the best values in the market right now.






AI’s not ‘reasoning’ at all – how this team debunked the industry hype





ZDNET’s key takeaways

  • We don’t entirely know how AI works, so we ascribe magical powers to it.
  • Claims that Gen AI can reason are a “brittle mirage.”
  • We should always be specific about what AI is doing and avoid hyperbole.

Ever since artificial intelligence programs began impressing the general public, AI scholars have been making claims for the technology’s deeper significance, even asserting the prospect of human-like understanding. 

Scholars wax philosophical because even the scientists who created AI models such as OpenAI’s GPT-5 don’t really understand how the programs work — not entirely. 

Also: OpenAI’s Altman sees ‘superintelligence’ just around the corner – but he’s short on details

AI’s ‘black box’ and the hype machine

AI programs such as LLMs are infamously “black boxes.” They achieve a lot that is impressive, but for the most part we cannot observe all that they are doing when they take an input, such as a prompt you type, and produce an output, such as the college term paper you requested or the suggestion for your new novel.

In the breach, scientists have applied colloquial terms such as “reasoning” to describe the way the programs perform. In the process, they have either implied or outright asserted that the programs can “think,” “reason,” and “know” in the way that humans do. 

In the past two years, the rhetoric has overtaken the science as AI executives have used hyperbole to twist what were simple engineering achievements. 

Also: What is OpenAI’s GPT-5? Here’s everything you need to know about the company’s latest model

OpenAI’s press release last September announcing its o1 reasoning model stated: “Similar to how a human may think for a long time before responding to a difficult question, o1 uses a chain of thought when attempting to solve a problem,” so that “o1 learns to hone its chain of thought and refine the strategies it uses.”

It was a short step from those anthropomorphizing assertions to all sorts of wild claims, such as OpenAI CEO Sam Altman’s comment, in June, that “We are past the event horizon; the takeoff has started. Humanity is close to building digital superintelligence.”

(Disclosure: Ziff Davis, ZDNET’s parent company, filed an April 2025 lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

The backlash of AI research

There is a backlash building, however, from AI scientists who are debunking the assumptions of human-like intelligence via rigorous technical scrutiny. 

In a paper published last month on the arXiv pre-print server and not yet reviewed by peers, the authors — Chengshuai Zhao and colleagues at Arizona State University — took apart the reasoning claims through a simple experiment. What they concluded is that “chain-of-thought reasoning is a brittle mirage,” and it is “not a mechanism for genuine logical inference but rather a sophisticated form of structured pattern matching.” 

Also: Sam Altman says the Singularity is imminent – here’s why

The term “chain of thought” (CoT) is commonly used to describe the verbose stream of output that you see when a large reasoning model, such as OpenAI’s o1 or DeepSeek-R1, shows you how it works through a problem before giving the final answer.

That stream of statements isn’t as deep or meaningful as it seems, write Zhao and team. “The empirical successes of CoT reasoning lead to the perception that large language models (LLMs) engage in deliberate inferential processes,” they write. 

But, “An expanding body of analyses reveals that LLMs tend to rely on surface-level semantics and clues rather than logical procedures,” they explain. “LLMs construct superficial chains of logic based on learned token associations, often failing on tasks that deviate from commonsense heuristics or familiar templates.”

The term “chains of tokens” is a common way to refer to a series of elements input to an LLM, such as words or characters. 

Testing what LLMs actually do

To test the hypothesis that LLMs are merely pattern-matching rather than really reasoning, they trained OpenAI’s older, open-source GPT-2 model, released in 2019, entirely from scratch, using an approach they call “data alchemy.”


The model was trained from the beginning to manipulate only the 26 letters of the English alphabet, “A, B, C, … etc.” That simplified corpus lets Zhao and team test the LLM with a set of very simple tasks, all of which involve manipulating sequences of the letters: shifting every letter a certain number of places, for example, so that “APPLE” becomes “EAPPL.”

Also: OpenAI CEO sees uphill struggle to GPT-5, potential for new kind of consumer hardware

Using the limited set of tokens and the limited tasks, Zhao and team vary which tasks the language model is exposed to in its training data versus which tasks are seen only when the finished model is tested, such as “Shift each element by 13 places.” It’s a test of whether the language model can reason its way to performing a task even when confronted with new, never-before-seen instructions.
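To make the setup concrete, here is a minimal Python sketch of the kind of sequence-shift task the article describes. It is not the Arizona State team's code; the rotation convention is inferred from the “APPLE” to “EAPPL” example, and the split between training and held-out tasks shown in the comments is purely illustrative.

```python
# Illustrative sketch only; not the Arizona State team's actual code.
# The rotation convention is assumed from the article's "APPLE" -> "EAPPL"
# example, i.e. a cyclic shift of every element one position to the right.

def shift_sequence(letters: str, places: int) -> str:
    """Cyclically shift the elements of a letter sequence by `places` positions."""
    places %= len(letters)
    return letters[-places:] + letters[:-places] if places else letters

# A task of the kind the model might see during training:
assert shift_sequence("APPLE", 1) == "EAPPL"

# A held-out task seen only at test time, e.g. "shift each element by 13 places":
print(shift_sequence("ABCDEFGHIJKLMNOPQRSTUVWXYZ", 13))
# -> NOPQRSTUVWXYZABCDEFGHIJKLM
```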

They found that when the tasks were not in the training data, the language model failed to complete them correctly using a chain of thought. The model fell back on tasks that were in its training data, and its “reasoning” sounded plausible, but the answers it generated were wrong.

As Zhao and team put it, “LLMs try to generalize the reasoning paths based on the most similar ones […] seen during training, which leads to correct reasoning paths, yet incorrect answers.”

Specificity to counter the hype

The authors draw some lessons. 

First: “Guard against over-reliance and false confidence,” they advise, because “the ability of LLMs to produce ‘fluent nonsense’ — plausible but logically flawed reasoning chains — can be more deceptive and damaging than an outright incorrect answer, as it projects a false aura of dependability.”

Second, they advise trying out tasks that are unlikely to have been included in the training data, so that the AI model is genuinely stress-tested.

Also: Why GPT-5’s rocky rollout is the reality check we needed on superintelligence hype

What’s important about Zhao and team’s approach is that it cuts through the hyperbole and takes us back to the basics of understanding what exactly AI is doing. 

When the original research on chain of thought, “Chain-of-Thought Prompting Elicits Reasoning in Large Language Models,” was performed by Jason Wei and colleagues at Google Brain in 2022 — research that has since been cited more than 10,000 times — the authors made no claims about actual reasoning.

Wei and team noticed that prompting an LLM to list the steps in a problem, such as an arithmetic word problem (“If there are 10 cookies in the jar, and Sally takes out one, how many are left in the jar?”) tended to lead to more correct solutions, on average. 
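For readers who haven’t seen the technique, here is a rough sketch of what such a chain-of-thought prompt can look like. The exemplar wording and the follow-up question are invented for illustration, not taken from Wei and team’s paper; the core idea is simply to show the model a worked, step-by-step answer before posing a new question.

```python
# Illustrative chain-of-thought prompt; the wording is invented for this
# example and is not drawn from Wei et al.'s paper. The exemplar spells out
# intermediate steps so the model imitates that step-by-step format.

exemplar = (
    "Q: If there are 10 cookies in the jar, and Sally takes out one, "
    "how many are left in the jar?\n"
    "A: The jar starts with 10 cookies. Sally takes out 1 cookie. "
    "10 - 1 = 9. The answer is 9.\n"
)

new_question = (
    "Q: A shelf holds 12 books and 5 of them are borrowed. How many remain?\n"
    "A:"
)

# The full prompt that would be sent to the language model:
prompt = exemplar + "\n" + new_question
print(prompt)
```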


They were careful not to assert human-like abilities. “Although chain of thought emulates the thought processes of human reasoners, this does not answer whether the neural network is actually ‘reasoning,’ which we leave as an open question,” they wrote at the time. 

Also: Will AI think like humans? We’re not even close – and we’re asking the wrong question

Since then, Altman’s claims and various press releases from AI promoters have increasingly emphasized the human-like nature of reasoning using casual and sloppy rhetoric that doesn’t respect Wei and team’s purely technical description. 

Zhao and team’s work is a reminder that we should be specific, not superstitious, about what the machine is really doing, and avoid hyperbolic claims. 








‘Existential crisis’: how Google’s shift to AI has upended the online news model



When the chief executive of the Financial Times suggested at a media conference this summer that rival publishers might consider a “Nato for news” alliance to strengthen negotiations with artificial intelligence companies, there was a ripple of chuckles from attendees.

Yet Jon Slade’s revelation that his website had seen a “pretty sudden and sustained” decline of 25% to 30% in traffic to its articles from readers arriving via internet search engines quickly made clear the serious nature of the threat the AI revolution poses.

Queries typed into sites such as Google, which accounts for more than 90% of the search market, have been central to online journalism since its inception, with news providers optimising headlines and content to ensure a top ranking and revenue-raising clicks.

But now Google’s AI Overviews, which sit at the top of the results page and summarise responses, often negating the need to follow links to content, as well as its recently launched AI Mode tab, which answers queries in a chatbot format, have prompted fears of a “Google zero” future in which traffic referrals dry up.

“This is the single biggest change to search I have seen in decades,” says one senior editorial tech executive. “Google has always felt like it would always be there for publishers. Now the one constant in digital publishing is undergoing a transformation that may completely change the landscape.”

Last week, the owner of the Daily Mail revealed in its submission to the Competition and Markets Authority’s consultation on Google’s search services that AI Overviews have fuelled a drop in click-through traffic to its sites by as much as 89%.

DMG Media and other leading news organisations, including Guardian Media Group and the magazine trade body the Periodical Publishers Association (PPA), have urged the competition watchdog to make Google more transparent and provide traffic statistics from AI Overview and AI Mode to publishers as part of its investigation into the tech firm’s search dominance.

Publishers – already under financial pressure from soaring costs, falling advertising revenues, the decline of print and the wider trend of readers turning away from news – argue that they are effectively being forced by Google to either accept deals, including on how content is used in AI Overview and AI Mode, or “drop out of all search results”, according to several sources.

On top of the threat to funding, there are concerns about AI’s impact on accuracy. While Google has improved the quality of its overviews since earlier iterations advised users to eat rocks and add glue to pizza, problems with “hallucinations” – where AI presents incorrect or fabricated information as fact – remain, as do issues with in-built bias, when a computer rather than a human decides how to summarise sources.


In January, Apple promised to update an AI feature that issued untrue summaries of BBC news alerts, stamped with the corporation’s logo, on its latest iPhones; alerts incorrectly claimed that the man accused of killing a US insurance boss had shot himself and that tennis star Rafael Nadal had come out as gay.

In a blogpost last month, Liz Reid, Google’s head of search, said the introduction of AI in search was “driving more queries and quality clicks”.

“This data is in contrast to third-party reports that inaccurately suggest dramatic declines in aggregate traffic,” she said. “[These reports] are often based on flawed methodologies, isolated examples, or traffic changes that occurred prior to the rollout of AI features in search.”

However, she also acknowledged that while overall traffic to all websites is “relatively stable”, the “vast” web means that user trends are shifting traffic to different sites, “resulting in decreased traffic to some sites and increased traffic to others”.

In recent years, Google Discover, which feeds users articles and videos tailored to them based on their past online activity, has replaced search as the main source of click-throughs to content.

However, David Buttle, founder of the consultancy DJB Strategies, says the service, which is also tied to publishers’ overall search deals, does not deliver the quality traffic that most publishers need to drive their long-term strategies.

“Google Discover is of zero product importance to Google at all,” he says. “It allows Google to funnel more traffic to publishers as traffic from search declines … Publishers have no choice but to agree or lose their organic search. It also tends to reward clickbaity type content. It pulls in the opposite direction to the kind of relationship publishers want.”

Meanwhile, publishers are fighting a wider battle with AI companies seeking to plunder their content to train their large language models.

The creative industry is intensively lobbying the government to ensure that proposed legislation does not allow AI firms to use copyright-protected work without permission, a move that would stop the “value being scraped” out of the £125bn sector.


The Make It Fair campaign in February focused on the threat to the creative industries from generative AI.

Some publishers have struck bilateral licensing deals with AI companies – such as the FT, the German media group Axel Springer, the Guardian and the Nordic publisher Schibsted with the ChatGPT maker OpenAI – while others such as the BBC have taken action against AI companies alleging copyright theft.

“It is a two-pronged attack on publishers, a sort of pincer movement,” says Chris Duncan, a former News UK and Bauer Media senior executive who now runs a media consultancy, Seedelta. “Content is disappearing into AI products without serious remuneration, while AI summaries are being integrated into products so there is no need to click through, effectively taking money from both ends. It is an existential crisis.”

While publishers are pursuing action on multiple fronts – from dealmaking and legal action to regulatory lobbying – they are also implementing AI tools into newsrooms and creating their own query-answering tools. The Washington Post and the FT have launched their own AI-powered chatbots, Climate Answers and Ask FT, that source results only from their own content.

Christoph Zimmer, chief product officer at Germany’s Der Spiegel, says that while its traffic is currently stable, he expects referrals from all platforms to decline.

“This is a continuation of a longstanding trend,” he says. “However, this affects brands that have not focused on building direct relationships and subscriptions in recent years even more strongly. Instead, they have relied on reach on platforms and sometimes generic content.

“What has always been true remains true – a focus on quality and distinct content, and having a human in charge rather than just in the loop.”

One publishing industry executive says the battle to strike deals to help train AI models to aggregate and summarise stories is rapidly being superseded by advances that are seeing models interpret live news.

“The first focus has been on licensing deals for training AI, to ‘speak English’, but that is becoming less important over time,” says the executive. “It is becoming about delivering the news, and for that you need accurate live sources. That is a potentially really lucrative market which publishers are thinking about negotiating next.”

Saj Merali, chief executive of the PPA, says a fair balance needs to be struck between a tech-driven change in consumers’ digital habits and the fair value of trusted news.

“What doesn’t seem to be at the heart of this is what consumers need,” she says. “AI needs trustworthy content. There is a shift in how consumers want to see information, but they have to have faith in what they are reading.

“The industry has been very resilient through quite major digital and technological changes, but it is really important we make sure there is a route to sustain models. At the moment the AI and tech community are showing no signs of supporting publisher revenue.”




