
AI Unlocks Earth’s Subsurface Mysteries for Smart Energy Applications – USC Viterbi

The Hverir area in Iceland, known for its geothermal landscapes, is a key example of subsurface energy systems that AI research aims to improve, including geothermal energy and CO₂ storage. Photo/iStock.

Environmental scientists have amassed reams of data about the Earth’s surface and the vastness of its atmosphere.

As for the subterranean world?

Not nearly as much.

A new research project co-led by USC Viterbi’s Thomas Lord Department of Computer Science Professor Yan Liu aims to better understand and predict how water, carbon dioxide (CO₂), and energy move underground, which is critical for safe CO₂ storage, water management, and improving sustainable energy recovery.

PI Yan Liu and co-PI Behnam Jafarpour.


For instance, the results of the study could help scientists tackle such critical challenges as the safe underground storage of CO₂, a greenhouse gas that drives shifts in Earth’s energy balance.

CO₂ storage, also known as carbon capture and storage (CCS), is a process in which CO₂ emissions from industrial sources or power plants are captured before they enter the atmosphere and then stored underground in geological formations, such as depleted oil and gas fields or deep saline aquifers.

“CO₂ capture and storage is one of the grand challenges in geoscience, and our work has the potential to offer major breakthrough solutions to the accurate prediction of CO₂ storage,” said Liu, principal investigator of the study.


The research project also could aid in groundwater management and geothermal energy recovery, among other applications, added Liu, also a professor of electrical and computer engineering and biomedical sciences.

For example, she said, geoscientists would be able to better identify suitable storage reservoirs, predict their responses to development and operation strategies, and characterize important rock flow and transport properties.

Leveraging strengths

Liu is teaming up on the study with co-principal investigator Behnam Jafarpour, professor of chemical engineering and materials science, electrical and computer engineering, and civil and environmental engineering.

The research project will employ a machine learning tool to solve some of the mysteries occurring below ground.

The three-year study, “Advancing Subsurface Flow and Transport Modeling with Physics-Informed Causal Deep Learning Models,” is supported by the U.S. National Science Foundation as part of its Collaborations in Artificial Intelligence and Geosciences (CAIG) program.


“Collaboration between geoscientists and computer scientists is essential for advancing subsurface flow and transport modeling by harnessing recent breakthroughs in AI and machine learning,” Jafarpour said. “The key lies in seamlessly integrating reliable domain knowledge and physical principles with AI algorithms to develop innovative technologies that leverage the strengths of both fields.”

A ‘paradigm shift’

Rocks, fractures, and fluids interact in a complex way below Earth’s surface, making it difficult to predict their behavior.

In particular, rock deposits form intricate structures and layers that often exhibit complex fluid flow patterns in subsurface environments. Predicting the dynamics of these emerging flow patterns in complex geologic formations is paramount for managing the development of the underlying resources.

With PINCER (Physics-Informed Causal Deep Learning Models), an AI deep-learning model that combines physical science with data, Liu and Jafarpour hope to better capture and predict subsurface flow and transport dynamics.

The study launched in mid-September 2024 and is expected to run through Aug. 31, 2027.

“PINCER presents a paradigm shift from traditional data-driven approaches or model-based techniques to a hybrid solution that combines the benefits of both methods,” an abstract of the study explains. “(It) advances geoscience research by developing more efficient and robust modeling and prediction of fluid flow and transport processes in subsurface environments.”

A clearer picture

As Liu explained, simulation systems have been used for decades to predict subsurface flow dynamics, “but these models have their limitations,” she said. They rely heavily on highly uncertain inputs and are based on simplified descriptions of the underlying physics.

The new AI tool will build a richer dataset from the small amount of data now available, she said.

With a clearer picture of the underground dynamics, identifying suitable sites for underground CO₂ storage, for example, will become less of a guessing game, thus reducing the risk of accidental leaks due to unanticipated movements of subterranean materials.

Standard AI tools rely heavily on large training datasets and may produce predictions that deviate from the governing principles of subsurface flow systems, according to Jafarpour.

“The hope is that customized solutions like PINCER can help mitigate these limitations by enhancing physical consistency and reducing the data requirements of AI models,” he said.
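The project abstract does not spell out PINCER’s architecture, but the general recipe behind physics-informed deep learning can be illustrated with a short sketch: the training objective adds a penalty for violating a governing equation on top of the usual data-fitting loss, which is what lets a model stay physically consistent while learning from far less data. Everything below (PyTorch, a one-dimensional diffusion equation as the stand-in physics, the network size, the loss weighting) is an illustrative assumption, not PINCER’s actual design.

```python
# Minimal sketch of a physics-informed loss (illustrative only; not PINCER's actual design).
# A small network u(x, t) is fit to sparse observations while also being penalized for
# violating a stand-in governing equation, here 1-D diffusion: du/dt = D * d2u/dx2.
import torch
import torch.nn as nn

D = 0.1  # assumed diffusion coefficient

net = nn.Sequential(
    nn.Linear(2, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 1),
)

def pde_residual(x, t):
    """Residual of the stand-in PDE, evaluated with automatic differentiation."""
    x = x.clone().requires_grad_(True)
    t = t.clone().requires_grad_(True)
    u = net(torch.cat([x, t], dim=1))
    u_t = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x.sum(), x, create_graph=True)[0]
    return u_t - D * u_xx

# Sparse synthetic "observations" and random collocation points (placeholders, not field data).
x_obs, t_obs = torch.rand(20, 1), torch.rand(20, 1)
u_obs = torch.sin(torch.pi * x_obs) * torch.exp(-t_obs)
x_col, t_col = torch.rand(200, 1), torch.rand(200, 1)

optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    optimizer.zero_grad()
    data_loss = ((net(torch.cat([x_obs, t_obs], dim=1)) - u_obs) ** 2).mean()
    physics_loss = (pde_residual(x_col, t_col) ** 2).mean()
    loss = data_loss + physics_loss  # the physics term is what reduces reliance on large datasets
    loss.backward()
    optimizer.step()
```

How the data and physics terms are weighted, and how causal structure is incorporated, are exactly the design questions the project is meant to address; the sketch shows only the basic hybrid idea.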

AI techniques in geosciences

Two other USC studies were funded in the NSF grant package, one involving paleoclimatology and the other earthquake dynamics.

The NSF aims to advance the development and implementation of innovative AI techniques in geosciences to help better understand extreme weather, solar activity, earthquake hazards, and more.

The CAIG grants, announced in August 2024, require the collaboration of geoscientists, computer scientists, mathematicians, and others.

Liu and Jafarpour had received seed funding from the USC Ershaghi Center for Energy Transition to start their collaboration in this important area.

Published on July 9th, 2025

Last updated on July 9th, 2025




AI Research in Healthcare: Transforming Drug Discovery


Artificial intelligence (AI) is transforming the pharmaceutical industry. More and more, AI is being used in drug discovery to predict which drugs might work and speed up the whole development process.

But here’s something you probably didn’t see coming: some of the same AI tools that help find new drug candidates are now being used to catch insurance fraud. It’s an innovative cross-industry application that helps protect the integrity of healthcare systems.

AI’s Core Role in Drug Discovery

The field of drug discovery involves multiple stages, from initial compound screening and preclinical testing to clinical trials and regulatory compliance. These steps are time-consuming, expensive, and often risky. Traditional methods can take over a decade and cost billions, and success rates remain frustratingly low. This is where AI-powered drug discovery comes in.

The technology taps machine learning algorithms, deep learning, and advanced analytics so researchers can process vast amounts of molecular and clinical data. As such, pharmaceutical firms and biotech companies can reduce the cost and time required in traditional drug discovery processes.

AI trends in drug discovery cover a broad range of applications, too. For instance, specialized AI platforms for the life sciences are now used to enhance drug discovery workflows, streamline clinical trial analytics, and accelerate regulatory submissions by automating tasks like report reviews and literature screenings. This type of technology shows how machine learning can automatically sift through hundreds of candidate models and identify the one that best fits the data, a process far more efficient than manual methods.
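The article does not name a specific platform or selection method, but the “sift through hundreds of models” idea corresponds to ordinary cross-validated model search. A minimal sketch with scikit-learn, using synthetic data and an assumed shortlist of candidates rather than any real life-sciences pipeline:

```python
# Minimal sketch of automated model selection via cross-validation
# (illustrative; the platforms mentioned in the article may work quite differently).
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for assay or screening data.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}

# Score every candidate with 5-fold cross-validation and keep the best fit.
scores = {name: cross_val_score(model, X, y, cv=5).mean() for name, model in candidates.items()}
best = max(scores, key=scores.get)
print(f"best model: {best} (mean CV accuracy {scores[best]:.3f})")
```

In practice the candidate list would span many more model families and hyperparameter settings, but the select-by-held-out-fit loop is the same.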

In oncology, for example, AI is behind innovative precision medicine treatments that target specific genetic mutations in cancer patients. Similar approaches are used in studies of:

  • Neurodegenerative diseases
  • Cardiovascular diseases
  • Chronic diseases  
  • Metabolic diseases
  • Infectious diseases

Rapid development is critical in such fields, and AI offers great help in making the process more efficient. These applications will likely extend to emerging diseases as AI continues to evolve. Experts even predict that the AI drug discovery market will grow from around US$1.5 billion in 2023 to roughly US$20.3 billion by 2030. Advanced technologies, increased availability of healthcare data, and substantial investments in healthcare technology are the main drivers of this growth.

From Molecules to Fraud Patterns

So, how do AI-assisted drug discovery tools end up playing a role in insurance fraud detection? It’s all about pattern recognition. The AI-based tools used in drug optimization can analyze chemical structures and molecular libraries to find hidden correlations. In the insurance industry, the same capability can scan through patient populations, treatment claims, and medical records to identify suspicious billing or treatment patterns.

The applications in drug discovery often require processing terabytes of data from research institutions, contract research organizations, and pharmaceutical companies. In fraud detection, the inputs are different: claims data, treatment histories, and reimbursement requests. The analytical methods remain similar, however. Both use unsupervised learning to flag anomalies and predictive analytics to forecast outcomes, whether that’s a promising therapeutic drug or a suspicious claim.
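Neither application is described at the algorithm level in the article, but one common way to “flag anomalies” with unsupervised learning is an isolation forest. A minimal sketch on made-up claims-like features (the feature names, numbers, and contamination rate are assumptions for illustration, not a production system):

```python
# Minimal sketch of unsupervised anomaly flagging on claims-like data (illustrative only).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic claim features: [billed_amount, claims_per_month, days_between_visits]
normal = rng.normal(loc=[200.0, 2.0, 30.0], scale=[50.0, 1.0, 10.0], size=(1000, 3))
suspicious = rng.normal(loc=[2000.0, 15.0, 1.0], scale=[300.0, 3.0, 0.5], size=(10, 3))
claims = np.vstack([normal, suspicious])

detector = IsolationForest(contamination=0.01, random_state=0).fit(claims)
flags = detector.predict(claims)  # -1 marks anomalies, 1 marks inliers
print(f"{(flags == -1).sum()} claims flagged for manual review")
```

The same fit-and-score pattern applies whether the rows are insurance claims or molecular descriptors; only the input features change.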

Practical Applications In and Out of the Lab

Let’s break down how this dual application works in real-world scenarios:

  • In the lab: AI helps identify small-molecule drugs, perform high-throughput screening, and refine clinical trial designs. Using generative models and computational power, scientists can simulate trial outcomes and optimize patient recruitment strategies, leading to better trial results, fewer delays, and stronger assurance of drug safety.
  • In insurance fraud detection: Advanced analytics can detect billing inconsistencies, unusual prescription patterns, or claims that don’t align with approved therapeutic product development pathways. This protects insurance systems from losing funds that could otherwise support genuine patients and innovative therapies.

This shared analytical backbone creates an environment for innovation that benefits both the pharmaceutical sector and healthcare insurers.

Challenges and Future Outlook

The integration of AI in drug discovery and insurance fraud detection is promising, but it comes with challenges. Patient data privacy, for instance, is a major concern for both applications, whether it’s clinical trial information or insurance claims data. The regulatory framework around healthcare data is constantly changing, and companies need to stay compliant across both pharmaceutical and insurance sectors.

On the fraud detection side, AI systems need to catch real fraud without flagging legitimate claims. False positives can delay patient care and create administrative headaches. Fraudsters are also getting more sophisticated, so detection algorithms need constant updates to stay ahead.

Despite these hurdles, the market growth for these integrated solutions is expected to outpace other applications due to their dual benefits. With rising healthcare costs and more complex fraud schemes, insurance companies are under increasing pressure to protect their systems while still covering legitimate treatments.

Looking ahead, AI-driven fraud detection is likely to become more sophisticated as it learns from drug discovery patterns. And as healthcare fraud becomes more complex and treatment options expand, we can expect these cross-industry AI solutions to play an even bigger role in protecting healthcare dollars.

Final Thoughts

The crossover between AI drug discovery tools and insurance fraud detection shows how pattern recognition technology can solve problems across different industries. What started as a way to find new medicines is now helping catch fraudulent claims and protect healthcare dollars.

For patients, this dual approach means both faster access to new treatments and better protection of the insurance systems that help pay for their care. For the industry, it’s about getting more value from AI investments; the same technology that helps develop drugs can also stop fraud from draining resources. It’s a smart example of how one innovation can strengthen healthcare from multiple angles.






LLMs will hallucinate forever – here is what that means for your AI strategy

The AI’s inescapable rulebook

Now let’s apply this to your AI. Its rulebook is the vast dataset it was trained on. It has ingested a significant portion of human knowledge, but that knowledge is itself a finite, inconsistent, and incomplete system. It contains contradictions, falsehoods, and, most importantly, gaps.

An AI, operating purely within its training data, is like a manager who refuses to think outside the company manual. When faced with a query that falls into one of Gödel’s gaps – a question where the answer is true but not provable from its data – the AI does not have the human capacity to say, “I do not know,” or to seek entirely new information. Its core programming is to respond. So, it does what the OpenAI paper describes: it auto-completes, or hallucinates. It creates a plausible-sounding reality based on the patterns in its data.

The AI invents a financial figure because the pattern suggests a number should be there. It cites a non-existent regulatory case because the pattern of legal language is persuasive. It designs a product feature that is physically impossible because the training data contains both engineering truths and science fiction.

The AI’s hallucination is not simply a technical failure; it is a Gödelian inevitability. It is the system’s attempt to be complete, which forces it to become inconsistent, unless the system says, “I don’t know,” in which case the system would be consistent but incomplete. Interestingly, OpenAI’s latest model has a feature billed as an improvement, namely its “abstention rate” (the rate at which the model admits that it cannot provide an answer). This rate has gone from about 1% in previous models to over 50% in GPT-5.
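The article reports the abstention rate only as a headline figure; operationally it is simply the fraction of queries the model declines to answer rather than guess. A toy sketch of a confidence-thresholded abstention policy (the threshold, confidence scores, and example queries are invented for illustration):

```python
# Toy sketch of abstention: answer only when confidence clears a threshold,
# otherwise say "I don't know". All values here are invented for illustration.
THRESHOLD = 0.8  # assumed confidence cutoff

def answer_or_abstain(candidate: str, confidence: float) -> str:
    """Return the candidate answer if confident enough, otherwise abstain."""
    return candidate if confidence >= THRESHOLD else "I don't know"

# (query, candidate answer, model confidence) -- hypothetical examples
queries = [
    ("What is the capital of France?", "Paris", 0.97),
    ("What was Acme Corp's Q3 revenue?", "$12M", 0.35),
]
answers = [answer_or_abstain(ans, conf) for _, ans, conf in queries]
abstention_rate = sum(a == "I don't know" for a in answers) / len(answers)
print(f"abstention rate: {abstention_rate:.0%}")  # 50% on this toy set
```

Raising the threshold trades completeness for consistency, which is the Gödelian trade-off the article describes.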




Artificial intelligence is at the forefront of educational discussions


Artificial intelligence is at the forefront of educational discussions as school leaders, teachers, and business professionals gathered at the Education Leadership Summit in Tulsa to explore AI’s impact on classrooms and its implications for students’ futures.

Source: YouTube


