Connect with us

AI Research

Meta AI chief Yann LeCun slams Elon Musk’s plan to replace researchers with engineers, says it will kill innovation

Published

on


Just a few days ago, Elon Musk announced that his AI startup, xAI, would eliminate the job title of “researcher.” Instead, there would now be “only engineers” at the company. He even called the term researcher a relic from academia. “This false nomenclature of ‘researcher’ and ‘engineer’, which is a thinly-masked way of describing a two-tier engineering system, is being deleted from @xAI today,” Musk wrote on X (formerly Twitter).

Musk’s announcement came after a post from xAI employee Aditya Gupta, who shared a job listing for “researchers and engineers for scaling up our RL environments with user feedback and preference in the loop.” However, Musk responded directly, declaring the terminology. Now Musk’s stance on removing the term “researchers” has not been settling well with many and has spanked a controversy in the industry about the importance of distinguishing researchers. Meta’s chief AI scientist, Yann LeCun, has even warned that Musk’s decision to remove the term “researchers” could harm long-term innovation.

Sharing the screenshot of Musk’s post, LeCun wrote a lengthy post on LinkedIn and said that if there is no distinction between the two titles, the company might risk killing innovation. “If you make no distinction between the two activities, if you don’t evaluate researchers and engineers with different criteria, you run the risk of killing breakthrough innovation,” LeCun wrote.

LeCun believes that both research and engineering serve very different purposes in the tech ecosystem. Research, he explained, is focused on discovering new principles and advancing knowledge, often years before commercial applications emerge. It’s judged by intellectual impact, peer recognition, and long-term influence. Engineering, by contrast, according to LeCun, is about building functional products with short-term impact.

“True breakthroughs require teams with a long horizon and minimal constraints from product development and management,” wrote LeCun. He also pointed to research labs such as Bell Labs, IBM Research, and Xerox PARC, which produced some of the most transformative technologies of the last century, as examples of why maintaining separate research divisions matters.

Meanwhile, Musk is not the only one blurring the lines between research and engineering in Silicon Valley. OpenAI and Anthropic have also dropped traditional job titles, and are now calling for its engineers and researchers “Members of Technical Staff” to highlight the hybrid roles of these employees. These companies argue the distinction is less relevant in the era of large-scale AI models.

However, contrary to them, Meta’s LeCun believes the change could backfire. He warned that without protected research roles, companies risk focusing on incremental improvements rather than fundamental breakthroughs.

– Ends

Published By:

Divya Bhati

Published On:

Aug 1, 2025



Source link

AI Research

Inside Austin’s Gauntlet AI, the Elite Bootcamp Forging “AI First” Builders

Published

on


In the brave new world of artificial intelligence, talent is the new gold, and companies are in a frantic race to find it. While universities work to churn out computer science graduates, a new kind of school has emerged in Austin to meet the insatiable demand: Gauntlet AI.

Gauntlet AI bills itself as an elite training program. It’s a high-stakes, high-reward process designed to forge “AI-first” engineers and builders in a matter of weeks.

“We’re closer to Navy SEAL bootcamp training than a school,” said Ash Tilawat, Head of Product and Learning. “We take the smartest people in the world. We bring them into the same place for a 1000 hours over ten weeks and we make them go all in with building with AI.”

Austen Allred, the co-founder and CEO of Gauntlet AI, says when they claim to be looking for the smartest engineers in the world, it’s no exaggeration. The selection process is intensely rigorous.

“We accept around 2 percent of the applicants,” Allred explained. “We accept 98th percentile and above of raw intelligence, 95th percentile of coding ability, and then you start on The Gauntlet.”

ALSO| The 60-Second Guardian: Can a Swarm of Drones Stop a School Shooter?

The price of admission isn’t paid in dollars—there are no tuition fees. Instead, the cost is a student’s absolute, undivided attention.

“It is pretty grueling, but it’s invigorating and I love doing this,” said Nataly Smith, one of the “Gauntlet Challengers.”

Smith, whose passions lie in biotech and space, recently channeled her love for bioscience to complete one of the program’s challenges. Her team was tasked with building a project called “Geno.”

“It’s a tool where a person can upload their genomic data and get a statistical analysis of how likely they are to have different kinds of cancers,” Smith described.

Incredibly, her team built the AI-powered tool in just one week.

The ultimate prize waiting at the end of the grueling 10-week gauntlet is a guaranteed job offer with a starting salary of at least $200,000 a year. And hiring partners are already lining up to recruit challengers like Nataly.

“We very intentionally chose to partner with everything from seed-stage startups all the way to publicly traded companies,” said Brett Johnson, Gauntlet’s COO. “So Carvana is a hiring partner. Here in Austin, we have folks like Function Health. We have the Trilogy organization; we have Capital Factory just around the corner. We’re big into the Austin tech community and looking to double down on that.”

In a world desperate for skilled engineers, Gauntlet AI isn’t just training people; it’s manufacturing the very talent pipeline it believes will power the next wave of technological innovation.



Source link

Continue Reading

AI Research

Endangered languages AI tools developed by UH researchers

Published

on

By


Reading time: < 1 minute

University of Hawaiʻi at Mānoa researchers have made a significant advance in studying how artificial intelligence (AI) understands endangered languages. This research could help communities document and maintain their languages, support language learning and make technology more accessible to speakers of minority languages.

The paper by Kaiying Lin, a PhD graduate in linguistics from UH Mānoa, and Department of Information and Computer Sciences Assistant Professor Haopeng Zhang, introduces the first benchmark for evaluating large language models (AI systems that process and generate text) on low-resource Austronesian languages. The study focuses on three Formosan (Indigenous peoples and languages of Taiwan) languages spoken in Taiwan—Atayal, Amis and Paiwan—that are at risk of disappearing.

Using a new benchmark called FORMOSANBENCH, Lin and Zhang tested AI systems on tasks such as machine translation, automatic speech recognition and text summarization. The findings revealed a large gap between AI performance in widely spoken languages such as English, and these smaller, endangered languages. Even when AI models were given examples or fine-tuned with extra data, they struggled to perform well.

“These results show that current AI systems are not yet capable of supporting low-resource languages,” Lin said.

Zhang added, “By highlighting these gaps, we hope to guide future development toward more inclusive technology that can help preserve endangered languages.”

The research team has made all datasets and code publicly available to encourage further work in this area. The preprint of the study is available online, and the study has been accepted into the 2025 Conference on Empirical Methods in Natural Language Processing in Suzhou, China, an internationally recognized premier AI conference.

The Department of Information and Computer Sciences is housed in UH Mānoa’s College of Natural Sciences, and the Department of Linguistics is housed in UH Mānoa’s College of Arts, Languages & Letters.



Source link

Continue Reading

AI Research

OpenAI reorganizes research team behind ChatGPT’s personality

Published

on


OpenAI is reorganizing its Model Behavior team, a small but influential group of researchers who shape how the company’s AI models interact with people, TechCrunch has learned.

In an August memo to staff seen by TechCrunch, OpenAI’s chief research officer Mark Chen said the Model Behavior team — which consists of roughly 14 researchers — would be joining the Post Training team, a larger research group responsible for improving the company’s AI models after their initial pre-training.

As part of the changes, the Model Behavior team will now report to OpenAI’s Post Training lead Max Schwarzer. An OpenAI spokesperson confirmed these changes to TechCrunch.

The Model Behavior team’s founding leader, Joanne Jang, is also moving on to start a new project at the company. In an interview with TechCrunch, Jang says she’s building out a new research team called OAI Labs, which will be responsible for “inventing and prototyping new interfaces for how people collaborate with AI.”

The Model Behavior team has become one of OpenAI’s key research groups, responsible for shaping the personality of the company’s AI models and for reducing sycophancy — which occurs when AI models simply agree with and reinforce user beliefs, even unhealthy ones, rather than offering balanced responses. The team has also worked on navigating political bias in model responses and helped OpenAI define its stance on AI consciousness.

In the memo to staff, Chen said that now is the time to bring the work of OpenAI’s Model Behavior team closer to core model development. By doing so, the company is signaling that the “personality” of its AI is now considered a critical factor in how the technology evolves.

In recent months, OpenAI has faced increased scrutiny over the behavior of its AI models. Users strongly objected to personality changes made to GPT-5, which the company said exhibited lower rates of sycophancy but seemed colder to some users. This led OpenAI to restore access to some of its legacy models, such as GPT-4o, and to release an update to make the newer GPT-5 responses feel “warmer and friendlier” without increasing sycophancy.

Techcrunch event

San Francisco
|
October 27-29, 2025

OpenAI and all AI model developers have to walk a fine line to make their AI chatbots friendly to talk to but not sycophantic. In August, the parents of a 16-year-old boy sued OpenAI over ChatGPT’s alleged role in their son’s suicide. The boy, Adam Raine, confided some of his suicidal thoughts and plans to ChatGPT (specifically a version powered by GPT-4o), according to court documents, in the months leading up to his death. The lawsuit alleges that GPT-4o failed to push back on his suicidal ideations.

The Model Behavior team has worked on every OpenAI model since GPT-4, including GPT-4o, GPT-4.5, and GPT-5. Before starting the unit, Jang previously worked on projects such as Dall-E 2, OpenAI’s early image-generation tool.

Jang announced in a post on X last week that she’s leaving the team to “begin something new at OpenAI.” The former head of Model Behavior has been with OpenAI for nearly four years.

Jang told TechCrunch she will serve as the general manager of OAI Labs, which will report to Chen for now. However, it’s early days, and it’s not clear yet what those novel interfaces will be, she said.

“I’m really excited to explore patterns that move us beyond the chat paradigm, which is currently associated more with companionship, or even agents, where there’s an emphasis on autonomy,” said Jang. “I’ve been thinking of [AI systems] as instruments for thinking, making, playing, doing, learning, and connecting.”

When asked whether OAI Labs will collaborate on these novel interfaces with former Apple design chief Jony Ive — who’s now working with OpenAI on a family of AI hardware devices — Jang said she’s open to lots of ideas. However, she said she’ll likely start with research areas she’s more familiar with.

This story was updated to include a link to Jang’s post announcing her new position, which was released after this story published. We also clarify the models that OpenAI’s Model Behavior team worked on.





Source link

Continue Reading

Trending