AI Insights

AI ‘detective’ sheds light on how people make decisions

A new study deploys small neural networks to clarify how and why people make the decisions they make.

Researchers have long been interested in how humans and animals make decisions by focusing on trial-and-error behavior informed by recent information.

However, the conventional frameworks for understanding these behaviors may overlook certain realities of decision-making because they assume we make the best decisions after taking into account our past experiences.

The new study deploys AI in innovative ways to better understand this process.

By using tiny artificial neural networks, the researchers’ work illuminates in detail what drives an individual’s actual choices—regardless of whether those choices are optimal or not.

“Instead of assuming how brains should learn in optimizing our decisions, we developed an alternative approach to discover how individual brains actually learn to make decisions,” explains Marcelo Mattar, an assistant professor in New York University’s psychology department and one of the authors of the paper in the journal Nature.

“This approach functions like a detective, uncovering how decisions are actually made by animals and humans. By using tiny neural networks—small enough to be understood but powerful enough to capture complex behavior—we’ve discovered decision-making strategies that scientists have overlooked for decades.”

The study’s authors note that small neural networks—simplified versions of the networks typically used in commercial AI applications—can predict animals’ choices much better than classical cognitive models, which assume optimal behavior, precisely because the small networks can capture suboptimal behavioral patterns. In laboratory tasks, their predictions are also as good as those made by the much larger networks that power commercial AI applications.
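
To make the approach concrete, here is a minimal sketch, assuming a PyTorch implementation, of what such a "tiny network" could look like for a two-choice task: a recurrent network with only a couple of hidden units that predicts the next choice from the previous choice and reward. The architecture details are illustrative, not the paper's exact model.

```python
# Illustrative sketch only: a tiny recurrent network that predicts the next choice
# in a two-option task from the previous choice and reward. The hidden size of 2 is
# what makes it "tiny" and therefore tractable to interpret mathematically.
import torch
import torch.nn as nn

class TinyChoiceNet(nn.Module):
    def __init__(self, hidden_size=2):
        super().__init__()
        # input: one-hot previous choice (2 dims) + previous reward (1 dim)
        self.rnn = nn.GRU(input_size=3, hidden_size=hidden_size, batch_first=True)
        self.readout = nn.Linear(hidden_size, 2)  # logits over the two options

    def forward(self, prev_choice_onehot, prev_reward):
        x = torch.cat([prev_choice_onehot, prev_reward], dim=-1)  # (batch, trials, 3)
        h, _ = self.rnn(x)
        return self.readout(h)  # per-trial choice logits

# Fitting would minimize cross-entropy between predicted and observed choices,
# the same likelihood objective used to fit a classical Q-learning model,
# which makes the two model classes directly comparable on held-out trials.
model = TinyChoiceNet(hidden_size=2)
loss_fn = nn.CrossEntropyLoss()
```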

“An advantage of using very small networks is that they enable us to deploy mathematical tools to easily interpret the reasons, or mechanisms, behind an individual’s choices, which would be more difficult if we had used large neural networks such as the ones used in most AI applications,” adds author Ji-An Li, a doctoral student in the Neurosciences Graduate Program at the University of California, San Diego.

“Large neural networks used in AI are very good at predicting things,” says author Marcus Benna, an assistant professor of neurobiology at UC San Diego’s School of Biological Sciences.

“For example, they can predict which movie you would like to watch next. However, it is very challenging to describe succinctly what strategies these complex machine learning models employ to make their predictions—such as why they think you will like one movie more than another one. By training the simplest versions of these AI models to predict animals’ choices and analyzing their dynamics using methods from physics, we can shed light on their inner workings in more easily understandable terms.”
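
As a rough illustration of that kind of dynamical analysis, assuming the tiny network sketched above, one can hold a single input fixed (say, "chose option A and was rewarded") and iterate the hidden state until it stops changing; the points it settles into describe the learned dynamics. This is a generic sketch, not the study's exact procedure.

```python
# Illustrative sketch, not the study's exact procedure: probe a trained network's
# dynamics by holding one input pattern fixed and iterating the hidden state until
# it converges. The resulting fixed points are states the dynamics settle into.
import torch

def find_fixed_point(cell, fixed_input, h0, steps=500, tol=1e-6):
    h = h0
    for _ in range(steps):
        h_next = cell(fixed_input, h)          # one recurrent update
        if torch.norm(h_next - h) < tol:       # converged: approximate fixed point
            return h_next
        h = h_next
    return h                                    # best estimate if not fully converged

# Example input: "chose option A and was rewarded", held constant across steps.
# In practice the cell's weights would be copied from the trained tiny network.
cell = torch.nn.GRUCell(input_size=3, hidden_size=2)
fixed_input = torch.tensor([[1.0, 0.0, 1.0]])   # one-hot choice A + reward of 1
h_star = find_fixed_point(cell, fixed_input, torch.zeros(1, 2))
```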

Understanding how animals and humans learn from experience to make decisions is not only a primary goal of the sciences but also broadly useful in business, government, and technology. However, existing models of this process often fail to capture realistic behavior because they aim to depict optimal decision-making.

Overall, the model described in the new Nature study matched the decision-making processes of humans, non-human primates, and laboratory rats. Notably, the model predicted decisions that were suboptimal, better reflecting the “real-world” nature of decision-making, in contrast to traditional models, which focus on explaining optimal choices.

Moreover, the model was able to predict decision-making at the individual level, revealing how each participant deploys different strategies in reaching their decisions.

“Just as studying individual differences in physical characteristics has revolutionized medicine, understanding individual differences in decision-making strategies could transform our approach to mental health and cognitive function,” concludes Mattar.

Support for the research came from the National Science Foundation, the Kavli Institute for Brain and Mind, the University of California Office of the President, and UC San Diego’s California Institute for Telecommunications and Information Technology/Qualcomm Institute.

Source: NYU




An AI That Promises to “Solve All Diseases” Is About to Test Its First Human Drugs

Deep inside Alphabet, the parent company of Google, a secretive lab is working on a promise so audacious it sounds like science fiction: to “solve all diseases.” The company, Isomorphic Labs, is now preparing to start its first human clinical trials for cancer drugs designed entirely by artificial intelligence.

In a recent interview with Fortune, Colin Murdoch, President of Isomorphic Labs and Chief Business Officer of Google DeepMind, confirmed the company is on the verge of this monumental step. For anyone who has watched a loved one battle a devastating illness, the hope this offers is immense. But for a public increasingly wary of AI’s power, it raises a chilling question: can we really trust a “black box” algorithm with our lives?

Isomorphic Labs was born from DeepMind’s celebrated AlphaFold breakthrough, the AI system that stunned scientists by predicting the complex 3D shapes of proteins. To understand why this is a big deal, you need to know how drugs are traditionally made. For decades, it’s been a slow, brutal process of trial and error. Scientists spend an average of 10 to 15 years and over a billion dollars to bring a single new drug to market, with most candidates failing along the way.

Isomorphic Labs uses its AI, AlphaFold 3, to radically accelerate this. The AI can predict the complex 3D structures of proteins in the human body with stunning accuracy, allowing scientists to digitally design new drug molecules that are perfectly shaped to fight a specific disease, all before ever entering a physical lab.

The company has already signed multi-billion-dollar deals with pharmaceutical giants Novartis and Eli Lilly, and just raised $600 million in new funding to move its own drug candidates—starting with oncology—into human trials. The promise is a medical utopia. “This funding will further turbocharge the development of our next-generation AI drug design engine, help us advance our own programs into clinical development, and is a significant step forward towards our mission of one day solving all disease with the help of AI,” CEO Sir Demis Hassabis, who won the 2024 Nobel Prize in Chemistry for his pioneering work on AlphaFold 2, said back in March.

But when Big Tech starts designing medicine, who owns your cure? This is where deep-seated fears about AI’s role in our lives come into focus. The biggest concern is the “black box” problem: we know the AI gives an answer, but we don’t always know how. This raises critical questions:

  • Will Alphabet own the next cancer drug like it owns your search results?
  • Will these AI-designed treatments be affordable, or will they be trapped behind sky-high patents accessible only to the wealthy?
  • Will human trial standards keep up with the sheer speed of machine-generated breakthroughs?
  • And who is liable if an AI-designed drug goes wrong? The company that owns the AI? The programmers? The AI itself?

When contacted by Gizmodo, a spokesperson for Isomorphic Labs said they “don’t have anything more to share.”

AI could revolutionize medicine. But if left unchecked, it could also replicate the worst parts of the tech industry: opacity, monopoly, and profit over access. Isomorphic Labs is pushing humanity toward a monumental turning point. If they succeed, they could alleviate more suffering than any other invention in history.

But to do so, they first have to convince a skeptical public that the promise is worth the unprecedented risk.




This Artificial Intelligence (AI) Stock Is Surging After Joining the S&P 500. Can It Continue to Skyrocket?

  • Datadog stock has gone parabolic in the past three months, and it recently shot up following the news of its inclusion in the S&P 500 index.

  • The stock is trading at an expensive valuation right now.

  • Datadog’s lucrative addressable opportunity suggests that it may be able to justify its valuation in the long run.

Shares of Datadog (NASDAQ: DDOG) shot up nearly 15% on July 3 after it was revealed that the provider of cloud-based observability, monitoring, and security solutions will join the S&P 500 index on July 9.

Datadog will be replacing Juniper Networks in the index after the latter was acquired by Hewlett Packard Enterprise. It is easy to see why Datadog’s inclusion in the index has sent its stock soaring. To enter the index, a company must be profitable in its most recent quarter and over its trailing four quarters combined, and it must meet the index’s market-cap and liquidity requirements.

Datadog’s inclusion in the S&P 500 over other contenders is a positive for the stock, as it demonstrates the market’s confidence in the company. It’s also worth noting that, with this latest jump, the stock has shot up a remarkable 76% in the past three months. Does this mean it is too late to buy Datadog stock? Let’s find out.


Datadog’s cloud-based observability platform allows customers to monitor their cloud activity across servers, databases, and applications to detect issues, while its security features scan for vulnerabilities so that they can be fixed quickly. The demand for Datadog’s cloud observability solutions has been rising at an impressive pace, thanks to the secular growth of the cloud market.

Now, the company also provides tools for monitoring large language models (LLMs) and other artificial intelligence (AI) applications. It is targeting end markets currently worth around $80 billion, yet it has generated just $2.8 billion in revenue in the trailing 12 months, which suggests it has a lot of room for long-term growth.

However, investors will now have to pay a rich premium to buy into Datadog’s potential growth. That’s because it is now trading at a whopping 330 times trailing earnings. Though the forward earnings multiple of 82 is significantly lower than the trailing multiple, it is still on the expensive side when compared to the S&P 500 index’s average earnings multiple of 24.

The price-to-sales ratio of 20 is more than six times the index’s average sales multiple. The only way Datadog stock can sustain its impressive momentum is by outpacing Wall Street’s growth expectations. But can the company do that?
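
For readers who want to sanity-check those figures, here is a back-of-the-envelope calculation using only the numbers quoted above; the implied market cap and trailing earnings are derived estimates, not reported values.

```python
# Back-of-the-envelope check using only the figures quoted in this article.
# The implied market cap and trailing earnings are derived, not reported numbers.
trailing_revenue_b = 2.8   # trailing-12-month revenue, in billions of dollars
price_to_sales = 20        # quoted price-to-sales multiple
trailing_pe = 330          # quoted trailing price-to-earnings multiple

implied_market_cap_b = trailing_revenue_b * price_to_sales        # ~ $56 billion
implied_trailing_earnings_b = implied_market_cap_b / trailing_pe  # ~ $0.17 billion

print(f"Implied market cap: ~${implied_market_cap_b:.0f} billion")
print(f"Implied trailing earnings: ~${implied_trailing_earnings_b:.2f} billion")
```

In other words, the quoted multiples imply a roughly $56 billion valuation resting on well under a quarter-billion dollars of trailing earnings, which is why the stock looks expensive by the index’s standards.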




Clarifai AI Runners connect local models to cloud

AI platform company Clarifai has launched AI Runners, an offering designed to give developers and MLOps engineers flexible options for deploying and managing AI models.

Unveiled July 8, AI Runners let users connect models running on local machines or private servers directly to Clarifai’s AI platform via a publicly accessible API, the company said. Noting the rise of agentic AI, Clarifai said AI Runners provide a cost-effective, secure solution for managing the escalating demands of AI workloads, describing them as “essentially ngrok for AI models, letting you build on your current setup and keep your models exactly where you want them, yet still get all the power and robustness of Clarifai’s API for your biggest agentic AI ideas.”

Clarifai said its platform allows developers to run their models or MCP (Model Context Protocol) tools on a local development machine, an on-premises server, or a private cloud cluster. Connection to the Clarifai API can then be made without complex networking, the company said. This means users can keep sensitive data and custom models within their own environment and leverage existing compute infrastructure without vendor lock-in. AI Runners let custom models be served through Clarifai’s publicly accessible API, allowing integration into any application, and users can build multi-step AI workflows by chaining local models with the thousands of models available on the Clarifai platform.
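
To illustrate the general pattern the company is describing (a model that stays on local hardware but is reachable through a cloud platform's public API), here is a purely hypothetical sketch; the class and method names below are placeholders, not Clarifai's actual SDK.

```python
# Purely hypothetical sketch of the pattern described above: a model that stays on
# local hardware but is registered with a cloud platform so requests routed through
# the platform's public API reach it. These class and method names are placeholders,
# NOT Clarifai's actual SDK.
class LocalModel:
    def predict(self, text: str) -> str:
        # Weights and data never leave this machine.
        return "positive" if "good" in text.lower() else "negative"

class HypotheticalRunnerClient:
    """Stand-in for whatever client a platform provides to open an outbound
    connection and stream inference requests to a locally hosted model."""
    def __init__(self, api_key: str, model: LocalModel):
        self.api_key = api_key
        self.model = model

    def serve_forever(self) -> None:
        # A real runner would poll or stream requests from the platform's API
        # and return the local model's responses; no inbound ports are opened.
        pass

runner = HypotheticalRunnerClient(api_key="MY_API_KEY", model=LocalModel())
runner.serve_forever()
```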


