

Meta is trying to win the AI race with money — but not everyone can be bought

Month after month, message after message, the AI engineer was hearing from Meta recruiters. The recruiters were pestering him to leave his employer and switch over to support the company’s AI efforts, and they were offering a sizable salary package to do so. But he wasn’t so sure.

The engineer, who works for a startup that was acquired by a leading AI company and requested anonymity from The Verge, said he had heard from friends that Meta expected significant personal sacrifices in exchange for its high salaries, whether in employees’ value systems around AI or in their work-life balance. Engineers there, he heard, were working around the clock to catch up with rival companies like OpenAI, Anthropic, Google, and Microsoft.

With so many firms desperately scrambling for AI talent, Meta was offering between $1–1.4 million in total annual compensation (which is typically measured as a combination of salary, annual bonus, and amortized stock value) for many AI roles. But, he suspected, its offers might be less generous than they sounded — tied heavily to subjective performance metrics that could be weaponized against employees. And just as importantly, he wasn’t willing to give up a semblance of work-life balance and a healthy work environment to make a few hundred thousand dollars more. He didn’t pursue the opportunity.

In recent months, Meta has launched an AI hiring spree after making its largest-ever external investment: a $14.3 billion acquisition of a 49 percent stake in Scale AI, an industry giant that provides training data to fuel the technology of companies like OpenAI, Google, Microsoft, and Meta. As part of the deal, Meta spun up a brand-new superintelligence lab led by Scale AI CEO Alexandr Wang — and to staff the lab, it started poaching.

Meta has reportedly poached as many as 10 of OpenAI’s top researchers and model developers, with some pay packages reportedly adding up to $300 million over four years, including equity. (Meta disputes this figure.) It’s also approached a slew of other top AI talent across the industry. Ruoming Pang, who heads up Apple’s foundation AI models team, reportedly departed for Meta, and at least two Anthropic employees and two DeepMind employees have reportedly joined the team as well. The goal is to secure Meta’s spot in the race to achieve artificial general intelligence, or AGI: a hypothetical AI system that equals or surpasses human cognitive abilities, and the moving target that almost every AI company is currently chasing at breakneck speed. Meta’s primary weapon is vast amounts of money. But some sources across the AI industry question whether that will be enough.

Meta, to date, has not been the most exciting destination for budding AI engineers. CEO Mark Zuckerberg has been trying to make up for lost ground in the AI race, having spent years and significant resources over-indexing on the metaverse while competitors like Google, Microsoft, and Amazon invested billions in AI startups and signed cloud contracts and other deals. The company’s Llama AI models often rank low on publicly maintained performance leaderboards; at time of writing, Meta’s first appearance on one such leaderboard, Chatbot Arena, was at No. 30. In May, it reportedly delayed the launch of its new flagship AI model as developers struggled to deliver performance upgrades, and executives have been public on earnings calls about the need to invest aggressively and shore up against competition. The Scale AI deal, and Meta’s subsequent sky-high budget for hiring AI talent, are Zuckerberg’s Hail Mary: paying a premium for some of the brightest minds in the AI world to safeguard Meta’s future.

But although Meta is plying AI workers with staggering salaries, a mountain of money can’t buy everyone. Anthropic and DeepMind have reportedly had far fewer defections to Meta than OpenAI has, and that’s been an ongoing trend. The reason, to those inside the field, is obvious: the AI world is filled with true believers, and even the biggest companies need more than a cash offer to get many of them on their side.

Industry insiders emphasized to The Verge that in a sector where almost any company will offer job security and a good salary, experienced AI engineers and researchers want to work somewhere that aligns with their values, whether their top priority is AI safety and the risks the tech poses for humanity’s future, the ethical considerations of AI’s impact on society today, or accelerating and advancing the tech faster than anyone else. Some of the engineers, researchers, and scientists Meta has approached have turned down its advances, industry sources tell The Verge.

Competition for AI researchers is stiff, and building loyalty is vital. “At this point, at least a few hundred top researchers and engineers in the field are what’s sometimes called ‘post-money’ — they could retire, and you’re only going to attract or retain them if they believe in your vision, leadership style, etc.,” one AI industry source says.

But especially at OpenAI, Meta seems to have found the dollar value of company loyalty — and exceeded it. OpenAI has been uniquely affected by Meta’s mission to poach leading AI talent. As many as 10 of its top researchers and model developers have reportedly joined Meta, with some receiving large signing bonuses and equity. While its size and talent make it an inevitable prime target, the company is also vulnerable due to a controversial restructuring from a nonprofit to a for-profit venture and the departures of high-profile executives who went on to start competing AI companies. It underwent a huge upheaval during Sam Altman’s November 2023 ouster by OpenAI’s board and his subsequent Uno Reverse-style rehiring, which saw most of the board members who opposed him resign. Employees have also shared concerns about non-disparagement agreements and policies that raised questions about whether they would be able to access their equity, eroding some trust in leadership even after the policies were walked back.

“A lot of the people working on this are genuinely convinced they’re building transformative technology that will reshape the world,” one source familiar with the situation tells The Verge. “For the people that were that mission-driven, there’s already been so much organizational turbulence” — he mentioned Altman’s firing and rehiring, OpenAI employees defecting to Anthropic, and governance changes — “that people are less anchored to the institution itself, so it’s easier to poach [from OpenAI] than the other labs.”

In lieu of an official comment, OpenAI directed The Verge to a blog post from its global affairs team, which states, “Some eye-popping offers are being extended these days to a handful of terrifically talented researchers, including to folks at OpenAI. Some of these offers are coming with deadlines of just a few hours – literally ‘exploding offers’ – or with restrictions on whether or how they can be discussed.” The blog post goes on to say the company plans to cultivate talent across not just research, but also product, engineering, infrastructure, scaling, and safety. On Wednesday, the company announced it had hired four engineers away from companies like Tesla, xAI, and Meta.

Now, OpenAI’s best defense against the losses is its own financial leverage. People who joined OpenAI early on, or even before the end of 2023, have seen significant stock appreciation — the unit price jumped from $67 in a May 2023 tender offer to $210 at the end of 2024, according to a source familiar with the situation. And toward the end of 2023, around the time of the OpenAI board roller coaster, there was a window in which OpenAI rushed to hire people from other companies who would sign on at the $67-per-unit figure, since a near-immediate 2.5x multiplier was expected, the source says.

With so many companies competing to hire AI talent, there have been high-level departures at many different companies. But even before this most recent hiring frenzy, OpenAI’s employees were being lured elsewhere at a higher-than-average rate.

A 2025 SignalFire report analyzed retention patterns in AI and found that Anthropic was best at keeping people around: 80 percent of employees hired at least two years ago were still at the company at the end of their second year. DeepMind came next at 78 percent, while OpenAI’s retention rate was markedly lower, at 67 percent — comparable to Meta’s 64 percent. The report, which came out in May, before Meta’s Scale AI deal, also found that engineers were “8 times more likely to leave OpenAI for Anthropic than the reverse,” and 11 times more likely to defect from DeepMind to Anthropic than the reverse.

Anthropic was founded by ex-OpenAI research executives with the goal of carefully developing AI technology, describing itself as an “AI safety and research company that’s working to build reliable, interpretable, and steerable AI systems.” The company’s “mission-driven safety focus” is a convincing recruitment pitch and a key reason for its low turnover, one source familiar with the situation tells The Verge.

“I’ve seen, amongst people who are a little bit more seasoned in industry … they have watched tech as an industry change, and they’ve watched the people in the industry become very different, and the priorities of the companies become very different, and it’s not something that they’re happy with,” says Rumman Chowdhury, a longtime leader in the field of responsible AI at companies like Accenture and Twitter, who now heads up the AI nonprofit Humane Intelligence. When she’s hiring AI engineers, she says, they often say that salary hikes are less important to them than to “not be contributing to a worse world.”

AI engineers and researchers can afford that idealism, and concerns about safety and the rush to commercialize the technology have dogged most leading AI companies. At OpenAI, for instance, months of controversy and public pressure about its upcoming transition into a for-profit entity led to the company changing its plans, ceding some control to its nonprofit arm even after the restructuring. The decision followed a public letter written by ex-employees and civic leaders to the California and Delaware attorneys general, with one former employee writing, “OpenAI may one day build technology that could get us all killed.”

There are questions, too, about the pace at which Meta is moving and its research priorities. Meta’s Scale AI investment came on the heels of Joelle Pineau’s departure as Meta VP and head of its Fundamental AI Research (FAIR) division, a unit Meta folded into its larger AI efforts after previously describing it as “one of the only groups in the world with all the prerequisites for delivering true breakthroughs with some of the brightest minds in the industry.” Some saw FAIR’s restructuring as a sign that Meta was prioritizing products over research, an industry-wide concern for some AI safety experts.

For Meta, however, there are also pragmatic questions about its future in AI. In conversations with The Verge, industry insiders questioned whether Wang is the right choice to head the new lab, since Scale AI does not build frontier models and Wang himself doesn’t have an AI research background. And even Meta executives admit that catching up will be a challenge. During its most recent earnings call in April, Zuckerberg — who called one of Meta’s focuses for 2025 “making Meta AI the leading personal AI” — touched on the competition he was up against. “The pace of progress across the industry and the opportunities ahead for us are staggering. I want to make sure that we’re working aggressively and efficiently, and I also want to make sure that we are building out the leading infrastructure and teams we need to achieve our goals,” he said.

Just two months later, that team became Meta’s superintelligence lab. But the AI engineer who steered clear doesn’t regret his decision.

“You’re expected to give pretty much your whole self to Meta AI,” he says. The money simply wasn’t good enough for that.





Endangered languages AI tools developed by UH researchers


University of Hawaiʻi at Mānoa researchers have made a significant advance in studying how artificial intelligence (AI) understands endangered languages. This research could help communities document and maintain their languages, support language learning and make technology more accessible to speakers of minority languages.

The paper, by Kaiying Lin, a PhD graduate in linguistics from UH Mānoa, and Haopeng Zhang, an assistant professor in the Department of Information and Computer Sciences, introduces the first benchmark for evaluating large language models (AI systems that process and generate text) on low-resource Austronesian languages. The study focuses on three Formosan languages (Indigenous languages of Taiwan) that are at risk of disappearing—Atayal, Amis, and Paiwan.

Using a new benchmark called FORMOSANBENCH, Lin and Zhang tested AI systems on tasks such as machine translation, automatic speech recognition, and text summarization. The findings revealed a large gap between AI performance on widely spoken languages such as English and on these smaller, endangered languages. Even when AI models were given examples or fine-tuned with extra data, they struggled to perform well.
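For readers curious what such an evaluation looks like in practice, below is a minimal sketch of a FORMOSANBENCH-style machine-translation test in Python. It is illustrative only: the JSONL file name, its field names, and the `translate()` helper are hypothetical placeholders rather than the team’s released code, and BLEU and chrF (computed with the sacrebleu library) are standard low-resource translation metrics, not necessarily the exact ones reported in the paper.

```python
import json

import sacrebleu  # pip install sacrebleu


def translate(model, sentences, src_lang="en", tgt_lang="ami"):
    """Placeholder: call whatever LLM is being benchmarked here."""
    return [
        model.generate(f"Translate from {src_lang} to {tgt_lang}: {s}")
        for s in sentences
    ]


def evaluate_mt(model, test_file="formosan_amis_test.jsonl"):
    # Hypothetical test set: one JSON object per line with "source" and "reference".
    sources, references = [], []
    with open(test_file, encoding="utf-8") as f:
        for line in f:
            example = json.loads(line)
            sources.append(example["source"])
            references.append(example["reference"])

    hypotheses = translate(model, sources)

    # Corpus-level BLEU and chrF quantify how far the model's output is from
    # human references; comparing scores across languages exposes the gap
    # between high-resource and low-resource performance the study describes.
    bleu = sacrebleu.corpus_bleu(hypotheses, [references])
    chrf = sacrebleu.corpus_chrf(hypotheses, [references])
    return {"BLEU": bleu.score, "chrF": chrf.score}
```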

“These results show that current AI systems are not yet capable of supporting low-resource languages,” Lin said.

Zhang added, “By highlighting these gaps, we hope to guide future development toward more inclusive technology that can help preserve endangered languages.”

The research team has made all datasets and code publicly available to encourage further work in this area. The preprint of the study is available online, and the study has been accepted to the 2025 Conference on Empirical Methods in Natural Language Processing (EMNLP) in Suzhou, China, an internationally recognized premier AI conference.

The Department of Information and Computer Sciences is housed in UH Mānoa’s College of Natural Sciences, and the Department of Linguistics is housed in UH Mānoa’s College of Arts, Languages & Letters.





OpenAI reorganizes research team behind ChatGPT’s personality

OpenAI is reorganizing its Model Behavior team, a small but influential group of researchers who shape how the company’s AI models interact with people, TechCrunch has learned.

In an August memo to staff seen by TechCrunch, OpenAI’s chief research officer Mark Chen said the Model Behavior team — which consists of roughly 14 researchers — would be joining the Post Training team, a larger research group responsible for improving the company’s AI models after their initial pre-training.

As part of the changes, the Model Behavior team will now report to OpenAI’s Post Training lead Max Schwarzer. An OpenAI spokesperson confirmed these changes to TechCrunch.

The Model Behavior team’s founding leader, Joanne Jang, is also moving on to start a new project at the company. In an interview with TechCrunch, Jang says she’s building out a new research team called OAI Labs, which will be responsible for “inventing and prototyping new interfaces for how people collaborate with AI.”

The Model Behavior team has become one of OpenAI’s key research groups, responsible for shaping the personality of the company’s AI models and for reducing sycophancy — which occurs when AI models simply agree with and reinforce user beliefs, even unhealthy ones, rather than offering balanced responses. The team has also worked on navigating political bias in model responses and helped OpenAI define its stance on AI consciousness.

In the memo to staff, Chen said that now is the time to bring the work of OpenAI’s Model Behavior team closer to core model development. By doing so, the company is signaling that the “personality” of its AI is now considered a critical factor in how the technology evolves.

In recent months, OpenAI has faced increased scrutiny over the behavior of its AI models. Users strongly objected to personality changes made to GPT-5, which the company said exhibited lower rates of sycophancy but seemed colder to some users. This led OpenAI to restore access to some of its legacy models, such as GPT-4o, and to release an update to make the newer GPT-5 responses feel “warmer and friendlier” without increasing sycophancy.


OpenAI and all AI model developers have to walk a fine line to make their AI chatbots friendly to talk to but not sycophantic. In August, the parents of a 16-year-old boy sued OpenAI over ChatGPT’s alleged role in their son’s suicide. The boy, Adam Raine, confided some of his suicidal thoughts and plans to ChatGPT (specifically a version powered by GPT-4o), according to court documents, in the months leading up to his death. The lawsuit alleges that GPT-4o failed to push back on his suicidal ideations.

The Model Behavior team has worked on every OpenAI model since GPT-4, including GPT-4o, GPT-4.5, and GPT-5. Before starting the unit, Jang previously worked on projects such as Dall-E 2, OpenAI’s early image-generation tool.

Jang announced in a post on X last week that she’s leaving the team to “begin something new at OpenAI.” The former head of Model Behavior has been with OpenAI for nearly four years.

Jang told TechCrunch she will serve as the general manager of OAI Labs, which will report to Chen for now. However, it’s early days, and it’s not clear yet what those novel interfaces will be, she said.

“I’m really excited to explore patterns that move us beyond the chat paradigm, which is currently associated more with companionship, or even agents, where there’s an emphasis on autonomy,” said Jang. “I’ve been thinking of [AI systems] as instruments for thinking, making, playing, doing, learning, and connecting.”

When asked whether OAI Labs will collaborate on these novel interfaces with former Apple design chief Jony Ive — who’s now working with OpenAI on a family of AI hardware devices — Jang said she’s open to lots of ideas. However, she said she’ll likely start with research areas she’s more familiar with.

This story was updated to include a link to Jang’s post announcing her new position, which was released after this story was published. We also clarified which models OpenAI’s Model Behavior team worked on.







Researchers can accurately tell someone’s age using AI and just a bit of DNA


At the Hebrew University of Jerusalem, scientists created a new way to tell someone’s age using just a bit of DNA. This method uses a blood sample and a small part of your genetic code to give highly accurate results. It doesn’t rely on external features or medical history like other age tests often do. Even better, it stays accurate no matter your sex, weight, or smoking status.

Bracha Ochana and Daniel Nudelman led the team, guided by Professors Kaplan, Dor, and Shemer. They developed a tool called MAgeNet that uses artificial intelligence to study DNA methylation patterns. DNA methylation is a process that adds chemical tags to DNA as the body ages. By training deep learning networks on these patterns, they predicted age with just a 1.36-year error in people under 50.

How DNA Stores the Marks of Time

Time leaves invisible fingerprints on your cells. One of the most telling signs of age in your body is DNA methylation—the addition of methyl groups (CH₃) to your DNA. These chemical tags don’t change your genetic code, but they do affect how your genes behave. And over time, these tags build up in ways that mirror the passage of years.

Figure: 450K/EPIC age-associated DNA methylation sites are often surrounded by additional CpGs correlated with age. (CREDIT: Cell Reports)

What makes the new method so effective is its focus. Instead of analyzing thousands of areas in the genome, MAgeNet zeroes in on just two short genomic regions. This tight focus, combined with high-resolution scanning at the single-molecule level, allows the AI to read the methylation patterns like a molecular clock. Professor Kaplan explains it simply: “The passage of time leaves measurable marks on our DNA. Our model decodes those marks with astonishing precision.”
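To make the idea concrete, here is a minimal sketch, in PyTorch, of how a deep network can map fragment-level methylation readouts from two targeted genomic regions to an age estimate. The binary per-fragment encoding, layer sizes, and mean pooling are illustrative assumptions; this is not the published MAgeNet architecture, only the general shape of the "methylation pattern in, age out" problem.

```python
import torch
import torch.nn as nn


class FragmentAgePredictor(nn.Module):
    """Toy age regressor over fragment-level methylation patterns (not MAgeNet)."""

    def __init__(self, n_cpg_sites: int = 40, hidden: int = 64):
        super().__init__()
        # Encode each sequenced fragment's CpG states (1 = methylated,
        # 0 = unmethylated) into a small embedding.
        self.fragment_encoder = nn.Sequential(
            nn.Linear(n_cpg_sites, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
        )
        # Regress age from the per-sample average of fragment embeddings.
        self.age_head = nn.Linear(hidden, 1)

    def forward(self, fragments: torch.Tensor) -> torch.Tensor:
        # fragments: (batch, n_fragments, n_cpg_sites) binary tensor
        encoded = self.fragment_encoder(fragments)   # (batch, n_fragments, hidden)
        pooled = encoded.mean(dim=1)                 # pool over fragments per sample
        return self.age_head(pooled).squeeze(-1)     # predicted age in years


# Toy usage: 8 blood samples, 500 sequenced fragments each, 40 CpG sites per fragment.
model = FragmentAgePredictor()
fragments = torch.randint(0, 2, (8, 500, 40)).float()
true_ages = torch.rand(8) * 50                       # ages in the 0-50 range
loss = nn.L1Loss()(model(fragments), true_ages)      # mean absolute error, in years
loss.backward()
```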

Small Sample, Big Insights

The study, recently published in Cell Reports, used blood samples from more than 300 healthy individuals. It also included data from a 10-year follow-up of the Jerusalem Perinatal Study, which tracks health information across lifetimes. That long-term data, led by Professor Hagit Hochner from the Faculty of Medicine, helped the team confirm that MAgeNet works not just in the short term but also across decades.

Importantly, the model’s accuracy held up no matter the person’s sex, body mass index, or smoking history—factors that often throw off similar tests. That consistency means the tool could be widely used in both clinical and non-clinical settings.



From Medicine to Crime Scenes

The medical uses are easy to imagine. Knowing someone’s true biological age can help doctors make better decisions about care, especially when signs of aging don’t match the number of candles on a birthday cake. Personalized treatment plans could become more effective if based on what’s happening at the cellular level, not just what appears on a chart.

But this breakthrough also has major potential in the world of forensic science. Law enforcement teams could one day use this method to estimate the age of a suspect based solely on a few cells left behind. That’s a big step forward from current forensic DNA tools, which are good at identifying a person but struggle with age.

“This gives us a new window into how aging works at the cellular level,” says Professor Dor. “It’s a powerful example of what happens when biology meets AI.”

Figure: A schematic view of targeted PCR sequencing following bisulfite conversion, facilitating concurrent mapping of multiple neighboring CpG sites at a depth >5,000×. (CREDIT: Cell Reports)

Ticking Clocks Inside Our Cells

As they worked with the data, the researchers noticed something else: DNA doesn’t just age randomly. Some changes happen in bursts. Others follow slow, steady patterns—almost like ticking clocks inside each cell. These new observations may help explain why people age differently, even when they’re the same age chronologically.

“It’s not just about knowing your age,” adds Professor Shemer. “It’s about understanding how your cells keep track of time, molecule by molecule.”

This could also impact the growing field of longevity research. Scientists are increasingly interested in how biological aging differs from the simple count of years lived. The ability to measure age so precisely from such a small DNA sample may become a key tool in developing future anti-aging therapies or drugs that slow down cellular wear and tear.

Figure: A deep neural network for age prediction from fragment-level targeted DNA methylation data. (CREDIT: Cell Reports)

Why This Research Changes Everything

The method created by the Hebrew University team marks a turning point in how we think about aging, identity, and health. In the past, DNA told us who we are. Now it can tell us how old we truly are—and possibly how long we’ll stay healthy. The implications stretch from hospital rooms to courtrooms.

As the world faces rising healthcare demands from aging populations, tools like MAgeNet offer a smarter, faster way to assess risk, track longevity, and understand what aging really means. It’s no longer just a number on your ID.

Thanks to AI and a deep dive into the chemistry of life, age has become something you can measure with stunning accuracy, from the inside out.




