
AI companies are throwing million-dollar paychecks at AI PhDs



Larry Birnbaum, a professor of computer science at Northwestern University, was recruiting a promising PhD student to become a graduate researcher. Simultaneously, Google was wooing the student. And when he visited the tech giant’s campus in Mountain View, Calif., the company slated him to chat with its cofounder Sergey Brin and CEO Sundar Pichai, who are collectively worth about $140 billion and command over 183,000 employees.

“How are we going to compete with that?” Birnbaum asks, noting that PhDs in corporate research roles can make as much as five times professorial salaries, which average $155,000 annually. “That’s the environment that every chair of computer science has to cope with right now.”

Though Birnbaum says these recruitment scenarios have been “happening for a while,” the phenomenon has reportedly worsened as salaries across the industry have been skyrocketing. The trend recently became headline news after reports surfaced of Meta offering some highly experienced AI researchers seven- and eight-figure pay packages. Those offers—coupled with the strong demand for leaders to propel AI applications—may be helping to pull up the salary levels of even newly minted PhDs. Even though some of these graduates have no professional experience, they are being offered the kind of comma-filled compensation traditionally reserved for director- and executive-level talent.

Some academics fear a ‘brain drain’

Engineering professors and department chairs at Johns Hopkins, University of Chicago, Northwestern, and New York University interviewed by Fortune are divided on whether these lucrative offers lead to a “brain drain” from academic labs.

The brain drain camp believes this phenomenon depletes the ranks of academic AI departments, which still do important research and also are responsible for training the next generation of PhD students. At the private labs, the AI researchers help juice Big Tech’s bottom line while providing, in these critics’ view, no public benefit. The unconcerned argue that academia is a thriving component of this booming labor market. 

Anasse Bari, a professor of computer science and director of the predictive analytics and AI research lab at New York University, says that the corporate opportunities available to AI-focused academics are “significantly” affecting academia. “My general theory is that if we want a responsible future for AI, we must first invest in a solid AI education that upholds these values, cultivating thoughtful AI practitioners, researchers, and educators who will carry this mission forward,” he wrote to Fortune via email, emphasizing that despite receiving “many” offers for industry-side work, his NYU commitments take precedence.

In the days before ChatGPT, top AI researchers were in high demand, just as today. But many of the top corporate AI labs, such as OpenAI, Google DeepMind, and Meta’s FAIR (Fundamental AI Research), would allow established academics to keep their university appointments, at least part-time. This would allow them to continue to teach and train graduate students, while also conducting research for the tech companies.

While some professors say that there’s been no change in how frequently corporate labs and universities are able to reach these dual corporate-academic appointments, others disagree. NYU’s Bari says this model has declined owing to “intense talent competition, with companies offering millions of dollars for full-time commitment which outpaces university resources and shifts focus to proprietary innovation.”

All the academics Fortune interviewed for this story remain committed to their faculty appointments. But Henry Hoffman, who chairs the University of Chicago’s Department of Computer Science, has watched his PhD students get courted by tech companies since he began his professorship in 2013.

“The biggest thing to me is the salaries,” he says. He mentions a star student with zero professional experience who recently dropped out of the UChicago PhD program to accept a “high six-figure” offer from ByteDance. “When students can get the kind of job they want [as students], there’s no reason to force them to keep going.”

While PhDs thrive, undergrad computer science students struggle

The job market for computer science and engineering PhDs who study AI sits in stark contrast to the one faced by undergraduates in the field. This degree-level polarization exists because many of those with bachelor’s degrees in computer science would traditionally find jobs as coders. But LLMs are now writing large portions of code at many companies, including Microsoft and Salesforce. Meanwhile, most AI-relevant PhD students have their pick of frothy jobs—in academia, tech, and finance. These graduates are courted by the private sector because their training propels AI and machine learning applications, which, in turn, can increase revenue opportunities for model makers.

There were 4,854 people who graduated with AI-relevant PhDs in mathematics and computer science across U.S. universities, according to 2022 data. This number has increased significantly—by about 20%—since 2014. These PhDs’ postgraduate employment rate is higher than that of graduates with bachelor’s degrees in similar fields. And in 2023, 70% of AI-relevant PhDs took private-sector jobs postgrad, a huge increase from two decades ago, when just 20% of these grads accepted corporate work, per MIT.

Make no mistake: PhDs in AI, computer science, applied mathematics, and related fields have always had lucrative opportunities available after graduation. Until now, one of the most financially rewarding paths was quantitative research at hedge funds: All-in compensation for PhDs fresh out of school can climb to $1 million–plus in these roles. It’s a compelling pitch, especially for students who’ve spent up to seven years living off meager stipends of about $40,000 a year.

The all-but-assured path to prosperity has made relevant PhD programs in computer science and math extremely popular. AI and machine learning are the most popular disciplines among engineering PhDs, according to a 2023 Computing Research Association survey. UChicago computer science department chair Hoffman says that PhD admissions applications have surged by about 12% in the past few years alone, pressuring him and his colleagues to hire new faculty to increase enrollment and meet the demand.

Applications to AI PhD programs are on the rise

Though Trump’s federal funding cuts to universities have significant impacts on research in many departments, they may be less pertinent to those working on AI-related projects. This is partially because some of this research is funded by corporations. Google, for example, is collaborating with the University of Chicago to research trustworthy AI.

That dichotomy probably underlies Johns Hopkins University’s decision to open its Data Science and AI Institute: a $2 billion, five-year effort to enroll 750 PhD students in engineering disciplines and hire over 100 new tenure-track faculty members, making it one of the largest PhD programs in the country.

“Despite the dreary mood elsewhere, the AI and data science area at Hopkins is rosy,” says Anton Dahbura, the executive director of Johns Hopkins’ Information Security Institute and codirector of the Institute for Assured Autonomy, likely referring to his university’s cut of 2,000 workers after it lost $800 million in federal funding earlier this year. Dahbura supports this argument by noting that Hopkins received “hundreds” of applications for professor positions in its Data Science and AI Institute. 

For some, the reasons to remain in academia are ethical.

Luís Amaral, a computer science professor at Northwestern, is “really concerned” that AI companies have overhyped the capabilities of their large language models and that their strategies will have catastrophic societal consequences, including environmental destruction. He says of OpenAI leadership, “If I’m a smart person, I actually know how bad the team was.”

Because most corporate labs are largely focused on LLM- and transformer-based approaches, if these methods ultimately fall short of the hype, there could be a reckoning for the industry. “Academic labs are among the few places actively exploring alternative AI architectures beyond LLMs and transformers,” says NYU’s Bari, who is researching creative applications for AI using a model based on birds’ intelligence. “In this corporate-dominated landscape, academia’s role as a hub for nonmainstream experimentation has likely become more important.”




I asked ChatGPT to help me pack for my vacation – try this awesome AI prompt that makes planning your travel checklist stress-free



It’s that time of year again, when those of us in the northern hemisphere pack our sunscreen and get ready to venture to hotter climates in search of some much-needed Vitamin D.

Every year, I book a vacation, and every year I get stressed as the big day gets closer, usually forgetting to pack something essential, like a charger for my Nintendo Switch 2 or, dare I say it, my passport.




Sakana AI: Think LLM dream teams, not single models



Enterprises may want to start thinking of large language models (LLMs) as ensemble casts that can combine knowledge and reasoning to complete tasks, according to Japanese AI lab Sakana AI.

In a research paper, Sakana AI outlined a method called Multi-LLM AB-MCTS (Adaptive Branching Monte Carlo Tree Search), which uses a collection of LLMs that cooperate, perform trial and error, and leverage one another’s strengths to solve complex problems.

In a post, Sakana AI said:

“Frontier AI models like ChatGPT, Gemini, Grok, and DeepSeek are evolving at a breathtaking pace amidst fierce competition. However, no matter how advanced they become, each model retains its own individuality stemming from its unique training data and methods. We see these biases and varied aptitudes not as limitations, but as precious resources for creating collective intelligence. Just as a dream team of diverse human experts tackles complex problems, AIs should also collaborate by bringing their unique strengths to the table.”

Sakana AI said AB-MCTS is a method for inference-time scaling that enables frontier AIs to cooperate and revisit problems and solutions. Sakana AI released the algorithm as an open-source framework called TreeQuest, which has a flexible API that lets users apply AB-MCTS to tasks using multiple LLMs and custom scoring.
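To make the adaptive-branching idea concrete, here is a minimal, self-contained Python sketch. It is not TreeQuest’s actual API: the node structure, the UCB-style model selection, and the stand-in generator and scoring functions are hypothetical placeholders, meant only to illustrate how several LLMs can take turns refining one another’s candidate answers under a custom score.

```python
# Hypothetical sketch of multi-LLM, score-guided answer refinement.
# This is NOT the TreeQuest API; all names are illustrative stand-ins.
import math
import random
from dataclasses import dataclass, field

@dataclass
class Node:
    answer: str                                  # candidate solution text
    score: float                                 # value from the custom scoring function
    children: list = field(default_factory=list)

def all_nodes(node):
    """Collect every node in the tree (root plus all descendants)."""
    out = [node]
    for child in node.children:
        out.extend(all_nodes(child))
    return out

def ucb(total, uses, step, c=1.4):
    """Upper-confidence bound: balance exploiting strong models vs. exploring others."""
    if uses == 0:
        return float("inf")
    return total / uses + c * math.sqrt(math.log(step) / uses)

def search(task, generators, score_fn, steps=20):
    """generators: {name: fn(task, hint) -> answer}; score_fn maps an answer to [0, 1]."""
    root = Node(answer="", score=0.0)
    stats = {name: [0.0, 0] for name in generators}   # name -> [total score, uses]
    best = root
    for step in range(1, steps + 1):
        # Adaptively pick which model to call next, based on how well it has scored so far.
        name = max(generators, key=lambda n: ucb(stats[n][0], stats[n][1], step))
        # Refine the most promising existing answer; it becomes the "hint", so one model
        # can build on another model's partial (or even wrong) attempt.
        parent = max(all_nodes(root), key=lambda n: n.score)
        answer = generators[name](task, hint=parent.answer)
        child = Node(answer=answer, score=score_fn(answer))
        parent.children.append(child)
        stats[name][0] += child.score
        stats[name][1] += 1
        if child.score > best.score:
            best = child
    return best

# Toy usage: the "models" and the scorer are stand-ins; in practice you would swap in
# real LLM API calls and a task-specific evaluator (unit tests, a verifier model, etc.).
gens = {
    "model_a": lambda task, hint: f"model_a's answer to {task!r} (building on {hint!r})",
    "model_b": lambda task, hint: f"model_b's answer to {task!r} (building on {hint!r})",
}
best = search("solve the puzzle", gens, score_fn=lambda answer: random.random())
print(best.score, best.answer)
```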

What’s interesting is that Sakana AI gets out of that zero-sum LLM argument. The companies behind LLM training would like you to think there’s one model to rule them all. And you’d do the same if you were spending so much on training models and wanted to lock in customers for scale and returns.

Sakana AI’s deceptively simple solution can only come from a company that’s not trying to play LLM leapfrog every few minutes. The power of AI is in the ability to maximize the potential of each LLM. Sakana AI said:

“We saw examples where problems that were unsolvable by any single LLM were solved by combining multiple LLMs. This went beyond simply assigning the best LLM to each problem. In (an) example, even though the solution initially generated by o4-mini was incorrect, DeepSeek-R1-0528 and Gemini-2.5-Pro were able to use it as a hint to arrive at the correct solution in the next step. This demonstrates that Multi-LLM AB-MCTS can flexibly combine frontier models to solve previously unsolvable problems, pushing the limits of what is achievable by using LLMs as a collective intelligence.”

A few thoughts:

  • Sakana AI’s research, and its move to emphasize collective intelligence over a single LLM and stack, is critical for enterprises that need to create architectures that don’t lock them into one provider.
  • AB-MCTS could play into what agentic AI needs to become to be effective and complement emerging standards such as Model Context Protocol (MCP) and Agent2Agent.
  • If combining multiple models to solve problems becomes frictionless, the costs will plunge. Will you need to pay up for OpenAI when you can leverage LLMs like DeepSeek combined with Gemini and a few others? 
  • Enterprises may want to start thinking about how to build decision engines instead of an overall AI stack. 
  • We could see a scenario where a collective of LLMs achieves superintelligence before any one model or provider. If that scenario plays out, can LLM giants maintain valuations?
  • The value in AI may not be in the infrastructure or foundational models in the long run, but the architecture and approaches.





Positive attitudes toward AI linked to problematic social media use



People who have a more favorable view of artificial intelligence tend to spend more time on social media and may be more likely to show signs of problematic use, according to new research published in Addictive Behaviors Reports.

The new study was designed to explore a question that, until now, had been largely overlooked in the field of behavioral research. While many factors have been identified as risk factors for problematic social media use—including personality traits, emotional regulation difficulties, and prior mental health issues—no research had yet explored whether a person’s attitude toward artificial intelligence might also be linked to unhealthy social media habits.

The researchers suspected there might be a connection, since social media platforms are deeply intertwined with AI systems that drive personalized recommendations, targeted advertising, and content curation.

“For several years, I have been interested in understanding how AI shapes societies and individuals. We also recently came up with a framework called IMPACT to provide a theoretical framework to understand this. IMPACT stand for the Interplay of Modality, Person, Area, Country/Culture and Transparency variables, all of relevance to understand what kind of view people form regarding AI technologies,” said study author Christian Montag, a distinguished professor of cognitive and brain sciences at the Institute of Collaborative Innovation at University of Macau.

Artificial intelligence plays a behind-the-scenes role in nearly every major social media platform. Algorithms learn from users’ behavior and preferences in order to maximize engagement, often by showing content that is likely to capture attention or stir emotion. These AI-powered systems are designed to increase time spent on the platform, which can benefit advertisers and the companies themselves. But they may also contribute to addictive behaviors by making it harder for users to disengage.

Drawing from established models in psychology, the researchers proposed that attitudes toward AI might influence how people interact with social media platforms. In this case, people who trust AI and believe in its benefits might be more inclined to embrace AI-powered platforms like social media—and potentially use them to excess.

To investigate these ideas, the researchers analyzed survey data from over 1,000 adults living in Germany. The participants were recruited through an online panel and represented a wide range of ages and education levels. After excluding incomplete or inconsistent responses and removing extreme outliers (such as those who reported using social media for more than 16 hours per day), the final sample included 1,048 people, with roughly equal numbers of men and women.

Participants completed a variety of self-report questionnaires. Attitudes toward artificial intelligence were measured using both multi-item scales and single-item ratings. These included questions such as “I trust artificial intelligence” and “Artificial intelligence will benefit humankind” to assess positive views, and “I fear artificial intelligence” or “Artificial intelligence will destroy humankind” to capture negative perceptions.

To assess social media behavior, participants were asked whether they used platforms like Facebook, Instagram, TikTok, YouTube, or WhatsApp, and how much time they spent on them each day, both for personal and work purposes. Those who reported using social media also completed a measure called the Social Networking Sites–Addiction Test, which includes questions about preoccupation with social media, difficulty cutting back, and using social media to escape from problems.

Overall, 956 participants said they used social media. Within this group, the researchers found that people who had more positive attitudes toward AI also tended to spend more time on social media and reported more problematic usage patterns. This relationship held for both men and women, but it was stronger among men. In contrast, negative attitudes toward AI showed only weak or inconsistent links to social media use, suggesting that it is the enthusiastic embrace of AI—not fear or skepticism—that is more closely associated with excessive use.

“It is interesting to see that the effect is driven by the male sample,” Montag told PsyPost. “On second thought, this is not such a surprise, because in several samples we saw that males reported higher positive AI attitudes than females (on average). So, we must take into account gender for research questions, such as the present one.”

“Further I would have expected that negative AI attitudes would have played a larger role in our work. At least for males we observed that fearing AI went also along with more problematic social media use, but this effect was mild at best (such a link might be explained via negative affect and escapism tendencies). I would not be surprised if such a link becomes more visible in future studies. Let’s keep in mind that AI attitudes might be volatile and change (the same of course is also true for problematic social media use).”

To better understand how these variables were related, the researchers conducted a mediation analysis. This type of analysis can help clarify whether one factor (in this case, time spent on social media) helps explain the connection between two others (positive AI attitudes and problematic use).

The results suggested that people with positive attitudes toward AI tended to spend more time on social media, and that this increased usage was associated with higher scores on the addiction measure. In other words, time spent on social media partly accounted for the link between AI attitudes and problematic behavior.
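For readers unfamiliar with the technique, the gist of such a mediation analysis can be sketched with two ordinary least-squares regressions and an indirect effect. The snippet below uses simulated data and hypothetical effect sizes purely for illustration; it is not the authors’ analysis code, and the variable names are invented.

```python
# Illustrative mediation sketch: AI attitude -> time on social media -> problematic use.
# Simulated data and made-up effect sizes; not the study's actual analysis.
import numpy as np

rng = np.random.default_rng(0)
n = 1048                                                                   # mirrors the study's sample size

ai_attitude = rng.normal(size=n)
time_on_sm  = 0.4 * ai_attitude + rng.normal(size=n)                       # path a
problem_use = 0.5 * time_on_sm + 0.1 * ai_attitude + rng.normal(size=n)    # paths b and c'

def ols_coefs(y, *xs):
    """Least-squares coefficients of y on the given predictors (intercept dropped)."""
    X = np.column_stack([np.ones_like(y), *xs])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

a = ols_coefs(time_on_sm, ai_attitude)[0]                       # attitude -> time on social media
b, c_prime = ols_coefs(problem_use, time_on_sm, ai_attitude)    # time -> problems; direct effect
c = ols_coefs(problem_use, ai_attitude)[0]                      # total effect

print(f"total effect   c  = {c:.3f}")
print(f"direct effect  c' = {c_prime:.3f}")
print(f"indirect a * b    = {a * b:.3f}  # the portion mediated via time on social media")
```

The indirect effect a * b is the piece that corresponds to “time spent on social media partly accounted for the link”; in a cross-sectional design like this one, it describes an association, not a causal pathway.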

“I personally believe that it is important to have a certain degree of positive attitude towards benevolent AI technologies,” Montag said. “AI will profoundly change our personal and business lives, so we should better prepare ourselves for active use of this technology. This said, our work shows that positive attitudes towards AI, which are known to be of relevance to predict AI technology use, might come with costs. This might be in form of over-reliance on such technology, or in our case overusing social media (where AI plays an important role in personalizing content). At least we saw this to be true for male study participants.”

Importantly, the researchers emphasized that their data cannot establish cause and effect. Because the study was cross-sectional—that is, based on a single snapshot in time—it is not possible to say whether positive attitudes toward AI lead to excessive social media use, or whether people who already use social media heavily are more likely to hold favorable views of AI. It’s also possible that a third factor, such as general interest in technology, could underlie both tendencies.

The study’s sample, while diverse in age and gender, skewed older on average, with a mean age of 45. This may limit the generalizability of the findings, especially to younger users, who are often more active on social media and may have different relationships with technology. Future research could benefit from focusing on younger populations or tracking individuals over time to see how their attitudes and behaviors change.

“In sum, our work is exploratory and should be seen as stimulating discussions. For sure, it does not deliver final insights,” Montag said.

Despite these limitations, the findings raise important questions about how people relate to artificial intelligence and how that relationship might influence their behavior. The authors suggest that positive attitudes toward AI are often seen as a good thing—encouraging people to adopt helpful tools and new innovations. But this same openness to AI might also make some individuals more vulnerable to overuse, especially when the technology is embedded in products designed to maximize engagement.

The researchers also point out that people may not always be aware of the role AI plays in their online lives. Unlike using an obvious AI system, such as a chatbot or virtual assistant, browsing a social media feed may not feel like interacting with AI. Yet behind the scenes, algorithms are constantly shaping what users see and how they engage. This invisible influence could contribute to compulsive use without users realizing how much the technology is guiding their behavior.

The authors see their findings as a starting point for further exploration. They suggest that researchers should look into whether positive attitudes toward AI are also linked to other types of problematic online behavior, such as excessive gaming, online shopping, or gambling—especially on platforms that make heavy use of AI. They also advocate for studies that examine whether people’s awareness of AI systems influences how those systems affect them.

“In a broader sense, we want to map out the positive and negative sides of AI technology use,” Montag explained. “I think it is important that we use AI in the future to lead more productive and happier lives (we investigated also AI-well-being in this context recently), but we need to be aware of potential dark sides of AI use.”

“We are happy if people are interested in our work and if they would like to support us by filling out a survey. Here we do a study on primary emotional traits and AI attitudes. Participants also get, as a ‘thank you,’ insights into their personality traits: https://affective-neuroscience-personality-scales.jimdosite.com/take-the-test/.”

The study, “The darker side of positive AI attitudes: Investigating associations with (problematic) social media use,” was authored by Christian Montag and Jon D. Elhai.


