CEOs Admit Millions Of White-Collar Jobs Will Be Replaced By Artificial Intelligence



LONDON – The worst-kept secret in the world of artificial intelligence is that, yes, AI is coming for people’s jobs.

Warnings have sounded over the last year that coders, writers and digital designers are at risk from new generative AI models like ChatGPT, Copilot and a slew of AI-powered productivity tools, and those warnings will likely grow more common as entrepreneurs and deep-pocketed investors continue to pour money into the technology.

Now, middle managers may be on the chopping block, according to recent reports, and some CEOs are warning that millions of white-collar workers may be facing job oblivion sooner rather than later.

Middle managers — often the butt of cubicle humor, but an inevitable stop on the career ladder for aspiring executives — have been disappearing for the last half decade.

 

New Analysis: Middle Managers Are a Top Target

According to a new analysis from Gusto, which handles payroll for small and medium-sized companies, middle managers now oversee double the number of workers they did just five years ago.

In the world of Big Tech, the trend toward fewer managers has been called the “Great Flattening,” according to Axios. While it’s unclear if AI products are actually replacing these managers, there are indications that the reductions provide savings that companies can then pour into AI tools and products.

Earlier this year, Microsoft announced that it would lay off 9,000 employees — including managers — as it ramps up its AI strategy and development goals.

And Microsoft isn’t the only company cutting down on managers — Amazon released a memo last year announcing it planned to reduce its number of managers, and Google said it planned to cut vice president and manager roles by 10 percent last year, according to Business Insider. Meta has been working on reducing its managers since its 2023 “year of efficiency.”

AI Tools Will Likely Help Drive Further Flattening Efforts

According to an Axios report, managers have been increasingly turning to AI to help automate their tasks. This frees up their time and signals to CEOs that fewer managers are needed to oversee their workers.

The report, citing a recent study from Resume Builder, found that managers are using AI tools to make decisions about hiring, firing, promotions, and raises.

Despite the productivity gains AI tools promise, Gusto warned that — at least for now — industries that employed more human managers had better productivity, according to its analysis.

But that may be a temporary hiccup as businesses adjust to the new AI-tinged world of work.

Ford CEO: AI To Eliminate Half Of White-Collar Jobs In US

Ford’s CEO, Jim Farley, warned during the Aspen Ideas Festival last week that AI will eliminate half of the white-collar jobs in the U.S.

He’s not the only CEO predicting an apocalypse for office workers; last month, Amazon CEO Andy Jassy said that the shipping giant would shrink its corporate work force over the next few years as a direct result of AI tech adoption.

“We will need fewer people doing some of the jobs that are being done today, and more people doing other types of jobs,” Jassy wrote in a memo sent to employees last month. “It’s hard to know exactly where this nets out over time, but in the next few years, we expect that this will reduce our total corporate workforce as we get efficiency gains from using AI extensively across the company.”

AI To Eliminate Half Of Entry-Level White-Collar Jobs

Dario Amodei, the CEO of AI startup Anthropic, said in May that AI tech could destroy half of all entry-level white-collar jobs and increase the unemployment rate to as high as 20 percent in the next five years. As of June, the jobless rate was 4.1 percent.

Entry-level and middle-manager positions in white-collar fields are often stepping stones toward higher wages and better job security.

Aneesh Raman, the chief economic opportunity officer at LinkedIn, published a New York Times op-ed in May warning that AI is threatening to break the “bottom rung of the career ladder.”

“In tech, advanced coding tools are creeping into the tasks of writing simple code and debugging — the ways junior developers gain experience. In law firms, junior paralegals and first-year associates who once cut their teeth on document review are handing weeks of work over to A.I. tools to complete in a matter of hours,” he wrote. “And across retailers, A.I. chatbots and automated customer service tools are taking on duties once assigned to young associates.”

AI To Make It More Difficult To Land Entry-Level Jobs

Making it more difficult for workers to enter the job market and rise into management positions can, according to Raman, “slow down workers’ careers for decades.”

Citing data from the Center for American Progress, Raman noted that young adults who experience six months of unemployment at age 22 are likely to earn $22,000 less than their employed peers over the following decade.

The view that AI will eat up opportunities for younger workers is not uncontested. In June, Brad Lightcap, the chief operating officer of OpenAI, told the New York Times that younger workers were more likely to adapt to AI and benefit from it, and that the technology instead might be a hurdle for “a class of worker that I think is more tenured, is more oriented toward a routine in a certain way of doing things.”

In other words, older workers.

Danielle Li, an economist at MIT who studies the use of AI in the workplace, shared the view that more experienced workers were more likely to face hardships due to AI, but not for the same reasons as Lightcap. She told the New York Times that AI’s democratizing of specialized skills may make it easier for companies to lay off or stop hiring workers who’ve spent their careers specializing.

For example, she foresees a world where, thanks to AI tools, someone employed as a software engineer may no longer need a background in coding to hold that job, or law school to effectively write a legal brief.

Read more at The Independent 

 





Hackers exploit hidden prompts in AI images, researchers warn



Cybersecurity firm Trail of Bits has revealed a technique that embeds malicious prompts into images processed by large language models (LLMs). The method exploits how AI platforms compress and downscale images for efficiency. While the original files appear harmless, the resizing process introduces visual artifacts that expose concealed instructions, which the model interprets as legitimate user input.

In tests, the researchers demonstrated that such manipulated images could direct AI systems to perform unauthorized actions. One example showed Google Calendar data being siphoned to an external email address without the user’s knowledge. Platforms affected in the trials included Google’s Gemini CLI, Vertex AI Studio, Google Assistant on Android, and Gemini’s web interface.


The approach builds on earlier academic work from TU Braunschweig in Germany, which identified image scaling as a potential attack surface in machine learning. Trail of Bits expanded on this research, creating “Anamorpher,” an open-source tool that generates malicious images using interpolation techniques such as nearest neighbor, bilinear, and bicubic resampling.
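To make the mechanics more concrete, here is a minimal sketch (not taken from Anamorpher or the Trail of Bits write-up) that downscales an uploaded image with Pillow's three common resampling filters. The file name and target size are placeholder assumptions; the point is that each filter produces different pixels, which is why a crafted image has to be tuned to the specific resampler a platform uses.

```python
# Minimal sketch: compare how Pillow's resampling filters downscale the
# same image. "uploaded.png" and the 256x256 target are placeholders.
# Requires Pillow 9.1+ for the Image.Resampling enum.
from PIL import Image

TARGET = (256, 256)  # assumed downscale size; real platforms vary

img = Image.open("uploaded.png").convert("RGB")
for name, resample in [
    ("nearest", Image.Resampling.NEAREST),
    ("bilinear", Image.Resampling.BILINEAR),
    ("bicubic", Image.Resampling.BICUBIC),
]:
    small = img.resize(TARGET, resample=resample)
    small.save(f"downscaled_{name}.png")
    # Comparing these outputs side by side shows how strongly the filter
    # choice changes the pixels a multimodal model would actually "see".
```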

From the user’s perspective, nothing unusual occurs when such an image is uploaded. Yet behind the scenes, the AI system executes hidden commands alongside normal prompts, raising serious concerns about data security and identity theft. Because multimodal models often integrate with calendars, messaging, and workflow tools, the risks extend into sensitive personal and professional domains.


Traditional defenses such as firewalls cannot easily detect this type of manipulation. The researchers instead recommend layered defenses: previewing downscaled images, restricting input dimensions, and requiring explicit confirmation for sensitive operations.
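The sketch below shows one way an application could apply those recommendations, assuming it uses Pillow, knows the size its model pipeline downscales to, and can ask the user to confirm. The constants and function name are illustrative, not an established API.

```python
# Hedged sketch of the mitigations described above: cap input dimensions,
# reproduce the pipeline's downscale, and require explicit user approval.
from PIL import Image

MAX_DIM = 1024               # assumed cap on accepted input dimensions
PIPELINE_SIZE = (512, 512)   # assumed size the model pipeline downscales to

def prepare_image_for_model(path: str) -> Image.Image:
    img = Image.open(path).convert("RGB")

    # Restrict input dimensions so oversized, artifact-prone uploads are rejected.
    if img.width > MAX_DIM or img.height > MAX_DIM:
        raise ValueError(f"Image exceeds {MAX_DIM}px limit: {img.size}")

    # Downscale exactly as the pipeline would, then show that preview so
    # what the model sees is what the user actually approved.
    preview = img.resize(PIPELINE_SIZE, resample=Image.Resampling.BICUBIC)
    preview.show()  # in a real app this would be an in-UI confirmation step
    if input("Send this downscaled preview to the model? [y/N] ").lower() != "y":
        raise RuntimeError("User rejected the downscaled preview")
    return preview
```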

“The strongest defense is to implement secure design patterns and systematic safeguards that limit prompt injection, including multimodal attacks,” the Trail of Bits team concluded.






When AI Freezes Over



A phrase I’ve often clung to regarding artificial intelligence is one that is also cloaked in a bit of techno-mystery. And I bet you’ve heard it as part of the lexicon of technology and imagination: “emergent abilities.” It’s common to hear that large language models (LLMs) have these curious “emergent” behaviors that are often coupled with linguistic partners like scaling and complexity. And yes, I’m guilty too.

In AI research, this phrase first took off after a 2022 paper that described how abilities seem to appear suddenly as models scale: tasks that a small model fails at completely, a larger model suddenly handles with ease. One day a model can’t solve math problems, the next day it can. It’s an irresistible story: machines having their own little Archimedean “eureka!” moments. It’s almost as if “intelligence” has suddenly switched on.

But I’m not buying into the sensation, at least not yet. A newer 2025 study suggests we should be more careful. Instead of magical leaps, what we’re seeing looks a lot more like the physics of phase changes.

Ice, Water, and Math

Think about water. At one temperature it’s liquid, at another it’s ice. The molecules don’t become something new—they’re always two hydrogens and an oxygen—but the way they organize shifts dramatically. At the freezing point, hydrogen bonds “loosely set” into a lattice, driven by those fleeting electrical charges on the hydrogen atoms. The result is ice, the same ingredients reorganized into a solid that’s curiously less dense than liquid water. And, yes, there’s even a touch of magic in the science as ice floats. But that magic melts when you learn about Van der Waals forces.

The same kind of shift shows up in LLMs and is often mislabeled as “emergence.” In small models, the easiest strategy is positional, where computation leans on word order and simple statistical shortcuts. It’s an easy trick that works just enough to reduce error. But scale things up by using more parameters and data, and the system reorganizes. The 2025 study by Cui shows that, at a critical threshold, the model shifts into semantic mode and relies on the geometry of meaning in its high-dimensional vector space. It isn’t magic, it’s optimization. Just as water molecules align into a lattice, the model settles into a more stable solution in its mathematical landscape.

The Mirage of “Emergence”

That 2022 paper called these shifts emergent abilities. And yes, tasks like arithmetic or multi-step reasoning can look as though they “switch on.” But the model hasn’t suddenly “understood” arithmetic. What’s happening is that semantic generalization finally outperforms positional shortcuts once scale crosses a threshold. Yes, it’s a mouthful. But the computation is shifting from leaning on simple word position in a prompt (like “the cat in the _____”) to a complex, high-dimensional space where semantic associations across thousands of dimensions give the computation its power.

And those sudden jumps? They’re often illusions. On simple pass/fail tests, a model can look stuck at zero until it finally tips over the line and then it seems to leap forward. In reality, it was improving step by step all along. The so-called “light-bulb moment” is really just a quirk of how we measure progress. No emergence, just math.
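A toy calculation (my own illustration, not from either paper) shows how this measurement artifact arises: if per-token accuracy improves smoothly with scale but the benchmark only counts an answer when every token is right, the reported score sits near zero for a long time and then appears to jump.

```python
# Toy illustration of the thresholded-metric effect: smooth per-token
# improvement looks like a sudden "emergent" leap under exact-match scoring.
import math

ANSWER_LENGTH = 10  # answer counts as correct only if all 10 tokens are right

for scale in range(1, 11):
    # Smoothly improving per-token accuracy (a made-up logistic curve).
    per_token = 1 / (1 + math.exp(-(scale - 6)))
    # Probability the whole answer is exactly right.
    exact_match = per_token ** ANSWER_LENGTH
    print(f"scale={scale:2d}  per-token={per_token:.2f}  exact-match={exact_match:.3f}")
```

The per-token curve rises gradually, while the exact-match column stays near zero and then climbs steeply, even though nothing discontinuous happened underneath.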

Why “Emergence” Is So Seductive

Why does the language of “emergence” stick? Because it borrows from biology and philosophy. Life “emerges” from chemistry as consciousness “emerges” from neurons. It makes LLMs sound like they’re undergoing cognitive leaps. Some argue emergence is a hallmark of complex systems, and there’s truth to that. So, to a degree, it does capture the idea of surprising shifts.

But we need to be careful. What’s happening here is still math, not mind. Calling it emergence risks sliding into anthropomorphism, where sudden performance shifts are mistaken for genuine understanding. And it happens all the time.

A Useful Imitation

The 2022 paper gave us the language of “emergence.” The 2025 paper shows that what looks like emergence is really closer to a high-complexity phase change. It’s the same math and the same machinery. At small scales, positional tricks (word sequence) dominate. At large scales, semantic structures (multidimensional linguistic analysis) win out.

No insight, no spark of consciousness. It’s just a system reorganizing under new constraints. And this supports my larger thesis: What we’re witnessing isn’t intelligence at all, but anti-intelligence, a powerful, useful imitation that mimics the surface of cognition without the interior substance that only a human mind offers.


So the next time you hear about an LLM with “emergent ability,” don’t imagine Archimedes leaping from his bath. Picture water freezing. The same molecules, new structure. The same math, new mode. What looks like insight is just another phase of anti-intelligence that is complex, fascinating, even beautiful in its way, but not to be mistaken for a mind.






MIT Researchers Develop AI Tool to Improve Flu Vaccine Strain Selection



Insider Brief

  • MIT researchers have developed VaxSeer, an AI system that predicts which influenza strains will dominate and which vaccines will offer the best protection, aiming to reduce guesswork in seasonal flu vaccine selection.
  • Using deep learning on decades of viral sequences and lab data, VaxSeer outperformed the World Health Organization’s strain choices in 9 of 10 seasons for H3N2 and 6 of 10 for H1N1 in retrospective tests.
  • Published in Nature Medicine, the study suggests VaxSeer could improve vaccine effectiveness and may eventually be applied to other rapidly evolving health threats such as antibiotic resistance or drug-resistant cancers.

MIT researchers have unveiled an artificial intelligence tool designed to improve how seasonal influenza vaccines are chosen, potentially reducing the guesswork that often leaves health officials a step behind the fast-mutating virus.

The study, published in Nature Medicine, was authored by lead researcher Wenxian Shi along with Regina Barzilay, Jeremy Wohlwend, and Menghua Wu. It was supported in part by the U.S. Defense Threat Reduction Agency and MIT’s Jameel Clinic.

According to MIT, the system, called VaxSeer, was developed by scientists at MIT’s Computer Science and Artificial Intelligence Laboratory and the MIT Jameel Clinic for Machine Learning in Health. It uses deep learning models trained on decades of viral sequences and lab results to forecast which flu strains are most likely to dominate and how well candidate vaccines will work against them. Unlike traditional approaches that evaluate single mutations in isolation, VaxSeer’s large protein language model can capture the combined effects of multiple mutations and model shifting viral dominance more accurately.

“VaxSeer adopts a large protein language model to learn the relationship between dominance and the combinatorial effects of mutations,” Shi noted. “Unlike existing protein language models that assume a static distribution of viral variants, we model dynamic dominance shifts, making it better suited for rapidly evolving viruses like influenza.”

In retrospective tests covering ten years of flu seasons, VaxSeer’s strain recommendations outperformed those of the World Health Organization in nine of ten cases for H3N2 influenza, and in six of ten cases for H1N1, researchers said. In one notable example, the system correctly identified a strain for 2016 that the WHO did not adopt until the following year. Its predictions also showed strong correlation with vaccine effectiveness estimates reported by U.S., Canadian, and European surveillance networks.

The tool works in two parts: one model predicts which viral strains are most likely to spread, while another evaluates how effectively antibodies from vaccines can neutralize them in common hemagglutination inhibition assays. These predictions are then combined into a coverage score, which estimates the likely effectiveness of a candidate vaccine months before flu season begins.
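The article does not give VaxSeer’s exact formula, but the description suggests something like a dominance-weighted effectiveness estimate. The sketch below is an assumption-laden illustration of that idea, with made-up strain names and numbers.

```python
# Hedged sketch of combining the two predictions into a coverage score.
# The weighted-sum formula and all values here are illustrative assumptions,
# not VaxSeer's actual method.
predicted_dominance = {          # model 1: probability each strain dominates
    "strain_A": 0.55,
    "strain_B": 0.30,
    "strain_C": 0.15,
}
predicted_effectiveness = {      # model 2: how well the candidate vaccine's
    "strain_A": 0.80,            # antibodies neutralize each strain (0 to 1)
    "strain_B": 0.40,
    "strain_C": 0.10,
}

# Coverage score: expected protection across the predicted strain mix.
coverage = sum(
    predicted_dominance[s] * predicted_effectiveness[s]
    for s in predicted_dominance
)
print(f"coverage score for this candidate vaccine: {coverage:.2f}")
```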

“Given the speed of viral evolution, current therapeutic development often lags behind. VaxSeer is our attempt to catch up,” Barzilay noted.




