
AI Research

Researchers Create An AI Model That Is Able To Detect Habitable Planets With 99% Accuracy » TwistedSifter


Finding planets that could potentially be home to life isn’t very easy, but researchers from Switzerland have created an AI model that has the potential to make it much more practical. The team wrote a paper, which has been published in the journal Astronomy and Astrophysics, about the new tool.

The new algorithm has already been used to spot 44 star systems that it predicts contain exoplanets similar enough to Earth to potentially support life. None of these exoplanets had been detected before. Astronomers now need to go back and study each system to see whether its planets actually can (or maybe even do) support life.

This new tool is primarily designed to help point astronomers in the right direction in their search for life-supporting planets, and if simulations are correct, it is extremely good at its job.

Researchers ran a number of different simulations and found that the AI correctly identified which systems hosted such planets 99% of the time. Dr. Yann Alibert, co-director of the University of Bern’s Center for Space and Habitability and co-author of the study, was quoted by Forbes as saying:

“It’s one of the few models worldwide with this level of complexity and depth, enabling predictive studies like ours. This is a significant step in the search for planets with conditions favorable to life and, ultimately, for the search for life in the universe.”


The fact that the AI has flagged 44 star systems that warrant further investigation is remarkable on its own. Scientists have so far confirmed only about 5,800 planets outside our solar system, and that figure counts exoplanets of every kind, not just those that might support life. Having an AI that can point astronomers and other researchers in the right direction is invaluable.

The AI was trained on synthetic planetary systems generated with the Bern Model of Planet Formation and Evolution, a model that simulates planetary development backward in time, all the way to the planets’ formation in protoplanetary discs. In a statement, Alibert said:

“The Bern Model is one of the only models worldwide that offers such a wealth of interrelated physical processes and enables a study like the current one to be carried out.”

The team also fed in data from about 1,600 systems that astronomers know contain at least one planet orbiting a G-type, K-type, or M-type star. Once all that data had been processed, the AI flagged the 44 systems it considered most likely to host a life-supporting planet.
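The paper’s actual pipeline is not reproduced here, but the screening idea (train on simulated systems, then rank observed ones) can be sketched in miniature. Everything below is an illustrative stand-in: the features, the label rule, and the random-forest classifier are assumptions, not the Bern team’s method.

```python
# Sketch: train on synthetic systems (stand-ins for Bern Model output),
# then rank "observed" systems by predicted probability of hosting a
# habitable-zone planet. All data here is invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Invented features: e.g. stellar mass, metallicity, innermost-planet period.
X_synthetic = rng.normal(size=(2000, 3))
# Invented label rule, purely so the classifier has something to learn.
y_synthetic = (X_synthetic[:, 0] + 0.5 * X_synthetic[:, 2] > 0).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_synthetic, y_synthetic)

# Score 1,600 "observed" systems and keep the most promising candidates.
X_observed = rng.normal(size=(1600, 3))
proba = clf.predict_proba(X_observed)[:, 1]
candidates = np.argsort(proba)[::-1][:44]  # indices of the top-44 systems
```

The point of the sketch is the shape of the workflow: a model fitted entirely on simulated systems is used only to prioritize which real systems deserve telescope time.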


The researchers admit that the system is not perfect, but it should only improve with time and more data. Besides, perfection isn’t necessary here. Without the AI tool, astronomers have to analyze every star system to determine whether it has planets, and then investigate further to see whether any of those planets could support life. Given that there are trillions of star systems out there, this would be a very lengthy process.

Having an AI narrow that search is invaluable.

If you thought that was interesting, you might like to read about a quantum computer simulation that has “reversed time” and physics may never be the same.




AI Research

Researchers train AI to diagnose heart failure in rural patients using low-tech electrocardiograms



WVU computer scientists are training AI models to diagnose heart failure using data generated by low-tech equipment widely available in rural Appalachian medical practices. Credit: WVU/Micaela Morrissette

Concerned about the ability of artificial intelligence models trained on data from urban demographics to make the right medical diagnoses for rural populations, West Virginia University computer scientists have developed several AI models that can identify signs of heart failure in patients from Appalachia.

Prashnna Gyawali, assistant professor in the Lane Department of Computer Science and Electrical Engineering at the WVU Benjamin M. Statler College of Engineering and Mineral Resources, said heart failure—a chronic, persistent condition in which the heart cannot pump enough blood to meet the body’s need for oxygen—is one of the most pressing national and global health issues, and one that hits rural regions of the U.S. especially hard.

Despite the outsized impact of heart failure on rural populations, AI models are currently being trained to diagnose the disease using data representing patients from urban and suburban areas like Stanford, California, Gyawali said.

“Imagine Jane Doe, a 62-year-old woman living in a rural Appalachian community,” he suggested. “She has limited access to specialty care, relies on a small local clinic, and her lifestyle, diet and health history reflect the realities of her environment: high physical labor, minimal preventive care, and increased exposure to environmental risk factors like coal dust or poor air quality. Jane begins to experience fatigue and shortness of breath—symptoms that could point to heart failure.

“An AI system, trained primarily on data from urban hospitals in more affluent, coastal areas, evaluates Jane’s lab results. But because the system was not trained on patients who share Jane’s socioeconomic and environmental context, it fails to recognize her condition as urgent or abnormal,” Gyawali said. “This is why this work matters. By training AI models on data from West Virginia patients, we aim to ensure people like Jane receive accurate diagnoses, no matter where they live or how their lives differ from national averages.”

The researchers identified the AI models that were most accurate at diagnosing heart failure in an anonymized sample of more than 55,000 patients who received medical care in West Virginia. They also pinpointed the exact parameters for providing the AI models with data that most enhanced diagnostic accuracy. The findings appear in Scientific Reports, a Nature portfolio journal.

Doctoral student Alina Devkota emphasized that the team trained the AI models to work from patients’ electrocardiogram results, rather than the echocardiogram readings typical of patient data from urban areas.

Electrocardiograms rely on round electrodes stuck to the patient’s torso to record electrical signals from the heart. According to Devkota, they don’t require specialized equipment or specialized training to operate, but they still provide valuable insights into heart function.

“One of the criteria for diagnosing heart failure is measuring the ‘ejection fraction,’ or how much blood is pumped out of the heart with every beat, and the gold standard for doing that is echocardiography, which uses ultrasound waves to create images of the heart and the blood flowing through its valves,” she said.

“But echocardiography is expensive, time-consuming and often unavailable to patients in the very rural Appalachian states that have the highest prevalence of heart failure in the nation. West Virginia, for example, ranks first in the U.S. for the prevalence of heart attack, but many West Virginians don’t have local access to high-tech echocardiograms. They do have access to inexpensive electrocardiograms, so we tested whether AI models could use electrocardiogram readings to predict a patient’s ejection fraction.”

Devkota, Gyawali and their colleagues trained several AI models on patient records from 28 hospitals across West Virginia. The AI models used either “deep learning,” which relies on multilayered neural networks, or “non-deep learning,” which relies on simpler algorithms, to analyze the patient records and draw conclusions.

The researchers found the models, particularly one called ResNet, did best at correctly predicting a patient’s ejection fraction based on data from 12-lead electrocardiograms, with the results suggesting that a larger dataset for training would yield even better results. They also found that providing the AI models with specific “leads,” or combinations of data from different electrode pairs, affected how accurate the models’ ejection fraction predictions were.
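The study’s models are not reproduced here, but the idea of feeding a model a chosen subset of the 12 leads before predicting ejection fraction can be sketched on synthetic data. The feature extraction, the particular lead combination, and the random-forest regressor below are all illustrative assumptions standing in for the “non-deep” end of the comparison.

```python
# Toy illustration on synthetic data: predict ejection fraction (EF)
# from a selected subset of 12-lead ECG channels with a non-deep model.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
n_patients, n_leads, n_samples = 200, 12, 250

# Fake ECGs: (patients, leads, time samples).
ecg = rng.normal(size=(n_patients, n_leads, n_samples))
# Invented ground truth: EF in percent, loosely tied to lead 0's variance.
ef = 55 + 5 * ecg[:, 0, :].std(axis=1) + rng.normal(scale=2, size=n_patients)

def lead_features(ecg, leads):
    """Flatten only the selected leads into a per-patient feature vector."""
    return ecg[:, leads, :].reshape(len(ecg), -1)

leads = [0, 1, 6]  # one hypothetical lead combination under study
X = lead_features(ecg, leads)
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, ef)
pred = model.predict(X)
```

Swapping the `leads` list and re-fitting is the experiment the paragraph describes: measuring how each electrode combination changes prediction accuracy.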

Gyawali said that while AI models are not yet used clinically due to reliability concerns, training an AI to successfully estimate ejection fraction from electrocardiogram signals could soon give clinicians an edge in protecting patients’ cardiac health.

“Heart failure affects more than six million Americans today, and factors like our aging population mean the risk is growing rapidly—approximately 1 in 4 people alive today will experience heart failure during their lifetimes. The prevalence is even higher in rural Appalachia, so it’s critical the people here do not continue to be overlooked.”

Additional WVU contributors to the research included Rukesh Prajapati, graduate research assistant; Amr El-Wakeel, assistant professor; Donald Adjeroh, professor and chair for computer science; and Brijesh Patel, assistant professor in the WVU Health Sciences School of Medicine.

More information:
AI analysis for ejection fraction estimation from 12-lead ECG, Scientific Reports (2025). DOI: 10.1038/s41598-025-97113-0

Citation:
Researchers train AI to diagnose heart failure in rural patients using low-tech electrocardiograms (2025, August 31)
retrieved 31 August 2025
from https://medicalxpress.com/news/2025-08-ai-heart-failure-rural-patients.html








AI Research

How Traditional Search Engines Are Evolving



Generative artificial intelligence is not just improving search; it’s revolutionizing the entire concept of information retrieval.

Traditional search engines operated on a simple premise: match keywords to web pages, then rank results. This approach often left users frustrated, forcing them to refine queries multiple times or dig through numerous links to find specific information.

Generative AI has shattered this paradigm. Modern search platforms now interpret natural language queries with unprecedented sophistication. Instead of returning lists of links, they provide direct, contextual answers synthesized from multiple sources. Users can ask follow-up questions, request clarifications, or explore topics within the same conversation.

Consider the difference: searching “climate change Morocco agriculture” traditionally yields thousands of links. An AI-powered search engine provides an immediate, comprehensive overview of climate impacts on Moroccan agriculture, complete with specific data and regional variations – all while citing sources transparently.

The Technology Behind the Magic

Large language models (LLMs) trained on vast datasets enable machines to understand and generate human-like text. When integrated with real-time web crawling, they create “retrieval-augmented generation” (RAG) systems that combine internet knowledge with AI analysis. It’s like having a research assistant that instantly reads thousands of documents and provides tailored summaries.
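The RAG loop described above can be sketched in a few lines. The word-overlap retriever and the placeholder `generate` function below are toy stand-ins of my own: real systems use dense embedding search and an actual LLM call.

```python
# Minimal sketch of retrieval-augmented generation (RAG): retrieve the
# documents most relevant to a query, then condition the answer on them.

DOCS = [
    "Drought has reduced wheat yields in Morocco's agricultural regions.",
    "Transistor scaling follows Moore's law in semiconductor fabs.",
    "Irrigation and climate adaptation shape Moroccan agriculture.",
]

def retrieve(query, docs, k=2):
    """Toy retriever: rank documents by shared lowercase words."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: -len(q & set(d.lower().split())))
    return scored[:k]

def generate(query, context):
    # Placeholder for the LLM step: real RAG inserts `context` into the
    # model's prompt so the answer is grounded in retrieved sources.
    return f"Answer to {query!r} grounded in {len(context)} source(s)."

context = retrieve("climate change Morocco agriculture", DOCS)
answer = generate("climate change Morocco agriculture", context)
```

The two-stage structure is the whole idea: retrieval supplies fresh, citable material, and generation turns it into a direct answer.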

Major Players Reshape the Landscape

Google has integrated AI into its core search through Search Generative Experience (SGE), essentially rebuilding search from the ground up. Microsoft’s ChatGPT integration transformed Bing from an also-ran to a legitimate competitor overnight. Meanwhile, new players like Perplexity AI have emerged as pure “AI answer engines,” bypassing traditional search entirely.

Impact on Users and Businesses

The benefits for users are transformative. Complex research tasks that once required hours now take minutes through conversational interactions. This democratization particularly benefits users in developing regions with limited digital literacy or bandwidth constraints.

For businesses, traditional SEO strategies focused on keywords are becoming obsolete. Success now requires creating authoritative, well-sourced content that AI systems can understand and cite. Companies must focus on becoming trusted information sources rather than gaming search algorithms.

Voice search capabilities have dramatically improved, making information accessible to users with disabilities or those in hands-free situations. Educational applications are equally impressive, with AI search engines serving as sophisticated tutoring systems.

Challenges and Concerns

Significant challenges remain. AI systems can generate confident-sounding but incorrect information – a phenomenon called “hallucination.” Privacy concerns arise as conversational search engines collect more detailed behavioral data than traditional keyword systems.

The concentration of AI capabilities among few major companies raises concerns about information diversity and potential bias. When a handful of AI models influence how billions access information, fairness and accuracy become critical issues.

The Future of Information Access

Emerging trends include multimodal search capabilities interpreting images, videos, and audio alongside text. Real-time integration promises search engines providing up-to-the-minute data on rapidly changing situations. IoT integration will enable contextual search considering your location, time, and current activity.

For the MENA region, including Morocco, this AI revolution presents unique opportunities. Local businesses creating high-quality, authoritative content in Arabic and French can gain unprecedented visibility in AI search results. The technology also addresses linguistic diversity challenges, as AI systems become sophisticated at handling multiple languages and cultural contexts.

As we stand at this inflection point, the blue link era is ending. The age of conversational AI search promises faster, more accurate, and more intuitive access to human knowledge than ever before. For users worldwide, this transformation represents not just technological progress, but a fundamental shift in how we interact with information itself.





AI Research

The Machine Learning Lessons I’ve Learned This Month



Most days in machine learning are the same.

Coding, waiting for results, interpreting them, returning to coding. Plus, some intermediate presentations of one’s progress. But things mostly being the same does not mean that there’s nothing to learn. Quite the contrary! Two to three years ago, I started a daily habit of writing down lessons learned from my ML work. Looking back through some of this month’s entries, I found three practical lessons that stand out:

  1. Keep logging simple
  2. Use an experimental notebook
  3. Keep overnight runs in mind

Keep logging simple

For years, I used Weights & Biases (W&B)* as my go-to experiment logger. In fact, I was once among the top 5% of all active users. The stats in the figure below tell me that, at that time, I had trained close to 25,000 models, used a cumulative 5,000 hours of compute, and run more than 500 hyperparameter searches. I used it for papers, for big projects like weather prediction on large datasets, and for tracking countless small-scale experiments.

My stats from back when I used W&B for experiment logging. Image by the author.

And W&B really is a great tool: if you want beautiful dashboards and are collaborating** with a team, W&B shines. And, until recently, while reconstructing data from trained neural networks, I ran multiple hyperparameter sweeps and W&B’s visualization capabilities were invaluable. I could directly compare reconstructions across runs.

But I realized that for most of my research projects, W&B was overkill. I rarely revisited individual runs, and once a project was done, the logs just sat there; I never touched them again. When I later refactored the data reconstruction project mentioned above, I explicitly removed the W&B integration. Not because anything was wrong with it, but because it wasn’t necessary.

Now, my setup is much simpler. I just log selected metrics to CSV and text files, writing directly to disk. For hyperparameter searches, I rely on Optuna. Not even the distributed version with a central server — just local Optuna, saving study states to a pickle file. If something crashes, I reload and continue. Pragmatic and sufficient (for my use cases).

The key insight here is this: logging is not the work. It’s a support system. Spending most of your time deciding what to log — gradients? weights? distributions? and at which frequency? — can easily distract you from the actual research. For me, simple, local logging covers all my needs with minimal setup effort.
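A minimal local logger in that spirit might look like this; the class name and the metric schema are my own sketch, not from any library.

```python
# A tiny CSV metric logger: one row per step, written straight to disk.
import csv
import os
import tempfile

class CSVLogger:
    def __init__(self, path, fields):
        self.path, self.fields = path, fields
        with open(path, "w", newline="") as f:
            csv.writer(f).writerow(fields)  # header row

    def log(self, **metrics):
        # Append one row; opening per call flushes to disk immediately.
        with open(self.path, "a", newline="") as f:
            csv.writer(f).writerow([metrics[k] for k in self.fields])

path = os.path.join(tempfile.gettempdir(), "metrics.csv")
logger = CSVLogger(path, ["step", "loss"])
for step in range(3):
    logger.log(step=step, loss=1.0 / (step + 1))

with open(path) as f:
    rows = list(csv.reader(f))
```

The resulting file opens in any spreadsheet or loads with one `pandas.read_csv` call, which is all the post-hoc analysis most runs ever get.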

Maintain experimental lab notebooks

In December 1939, William Shockley wrote an idea into his lab notebook: replace vacuum tubes with semiconductors. Less than two decades later, Shockley and two Bell Labs colleagues shared a Nobel Prize for the invention of the modern transistor.

While most of us aren’t writing Nobel-worthy entries into our notebooks, we can still learn from the principle. Granted, in machine learning, our laboratories don’t have the chemicals or test tubes we all envision when we think of a laboratory. Instead, our labs are often our computers; the same device that I use to write these lines has trained countless models over the years. And these labs are inherently portable, especially when we develop remotely on high-performance compute clusters. Even better, thanks to highly skilled administrative staff, these clusters run 24/7, so there’s always time to run an experiment!

But the question is, which experiment? Here, a former colleague introduced me to the idea of maintaining a lab notebook, and lately I’ve returned to it in the simplest form possible. Before starting long-running experiments, I write down:

what I’m testing, and why I’m testing it.

Then, when I come back later — usually the next morning — I can immediately see which results are ready and what I had hoped to learn. It’s simple, but it changes the workflow. Instead of just “rerun until it works,” these dedicated experiments become part of a documented feedback loop. Failures are easier to interpret. Successes are easier to replicate.
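The habit fits in a few lines of code. The helper below is my own sketch, not an established tool: it appends a timestamped what/why entry to a plain-text notebook before a run is launched.

```python
# Append a timestamped "what / why" entry to a plain-text lab notebook
# before launching a long-running experiment.
import os
import tempfile
from datetime import datetime

NOTEBOOK = os.path.join(tempfile.gettempdir(), "lab_notebook.md")

def log_experiment(what, why):
    entry = (
        f"## {datetime.now():%Y-%m-%d %H:%M}\n"
        f"- What: {what}\n"
        f"- Why: {why}\n\n"
    )
    with open(NOTEBOOK, "a") as f:
        f.write(entry)

log_experiment(
    what="Rerun ablation with patched data loader",
    why="Check whether Friday's bug affected the baseline numbers",
)

with open(NOTEBOOK) as f:
    text = f.read()
```

The next morning, the newest entry tells you which results are ready and what question they were supposed to answer.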

Run experiments overnight

That’s a small but painful lesson that I (re-)learned this month.

On a Friday evening, I discovered a bug that might affect my experiment results. I patched it and reran the experiments to validate. By Saturday morning, the runs had finished — but when I inspected the results, I realized I had forgotten to include a key ablation. Which meant … another full day of waiting.

In ML, overnight time is precious. For us programmers, it’s rest. For our experiments, it’s work. If we don’t have an experiment running while we sleep, we’re effectively wasting free compute cycles.

That doesn’t mean you should run experiments just for the sake of it. But whenever there is a meaningful one to launch, the evening is the perfect time to start it. Clusters are often under-utilized, resources become available more quickly, and, most importantly, you will have results to analyse the next morning.

A simple trick is to plan this deliberately. As Cal Newport mentions in his book “Deep Work”, good workdays start the night before. If you know tomorrow’s tasks today, you can set up the right experiments in time.


* That isn’t bashing W&B (it would have been the same with, e.g., MLflow), but rather a request that users evaluate what their project goals are, and then spend the majority of their time pursuing those goals with utmost focus.

** Footnote: collaboration alone is, in my eyes, not enough to warrant such shared dashboards. You need to gain more insight from the shared tools than the time you spend setting them up.



