AI Research

The More People Learn About AI, the Less They Trust It

Researchers have found that trust in artificial intelligence falls as people become more AI literate, a damning revelation that highlights persistent skepticism toward the tech.

AI companies continue to paint the tech as a mesmerizing, revolutionary inflection point for humanity that justifies enormous capital expenditures to run wildly resource-intensive AI models.

But when real-life users become more familiar with the tech — realizing that, at their core, products like ChatGPT are word prediction algorithms rather than human-like sentient entities — it can be a major turnoff, as the Wall Street Journal reports.

As detailed in a study published in the Journal of Marketing earlier this year, an international team of researchers found that AI’s biggest fans tend to be the people with the shallowest familiarity with it.

“Contrary to expectations revealed in four surveys, cross-country data and six additional studies find that people with lower AI literacy are typically more receptive to AI,” they wrote, proposing that “people with lower AI literacy are more likely to perceive AI as magical and experience feelings of awe in the face of AI’s execution of tasks that seem to require uniquely human attributes.”

It’s an especially pertinent topic due to the widespread use of the tech among students, who may lack the literacy to make informed decisions on when or how to use AI — and employ it as a crutch to avoid learning deeper reasoning, writing, and research skills of their own. Of course, those students are likely to become even more reliant on companies like OpenAI as they age and enter the workforce.

“When you don’t really get what’s going on under the hood, AI creating these things seems amazing, and that’s when it can feel magical,” University of Southern California associate professor of marketing Stephanie Tully told the WSJ. “And that feeling can actually increase people’s willingness to use it.”

The findings should serve as a wake-up call for the industry. Those who are more clued in to how the tech works are less likely to use it, flying in the face of the assumption that greater technical knowledge leads to wider adoption.

“In other domains, like wine, the people who know the most about it are wine lovers,” Tully told the WSJ. “With AI, it’s the opposite.”

In an experiment, the researchers gave 234 undergraduate students a questionnaire, asking them whether they would use AI to help with writing four different papers.

Those who scored lower on AI literacy were more willing to use the tech to complete the assignments. That’s despite them being more concerned about AI ethics and its potential to impact humanity negatively.

“Understanding that AI is just pattern-matching can strip away the emotional experience,” coauthor and George Washington University assistant professor of marketing Gil Appel told the WSJ.

The team corroborated their own findings by pointing to several other studies that also showed lower AI literacy was associated with greater willingness to use the tech.

As a result, the researchers argue that users should be educated about how AI works so they can make better-informed decisions.

“With the increase in AI around us, consumers should have a basic level of literacy to be able to understand when AI might have important limitations,” Tully told the WSJ.

More on AI literacy: Hypocrite Teachers Are Telling Students Not to Use AI While Using It to Grade Their Work



AI Research in Healthcare: Transforming Drug Discovery


Artificial intelligence (AI) is transforming the pharmaceutical industry. More and more, AI is being used in drug discovery to predict which drugs might work and speed up the whole development process.

But here’s something you probably didn’t see coming: some of the same AI tools that help find new drug candidates are now being used to catch insurance fraud. It’s an innovative cross-industry application that’s essential in protecting the integrity of healthcare systems.

AI’s Core Role in Drug Discovery

The field of drug discovery involves multiple stages, from initial compound screening and preclinical testing to clinical trials and regulatory compliance. These steps are time-consuming, expensive, and often risky. Traditional methods can take over a decade and cost billions, and success rates remain frustratingly low. This is where AI-powered drug discovery comes in.

The technology taps machine learning algorithms, deep learning, and advanced analytics so researchers can process vast amounts of molecular and clinical data. As such, pharmaceutical firms and biotech companies can reduce the cost and time required in traditional drug discovery processes.

AI trends in drug discovery cover a broad range of applications, too. For instance, specialized AI platforms for the life sciences are now used to enhance drug discovery workflows, streamline clinical trial analytics, and accelerate regulatory submissions by automating tasks like report reviews and literature screenings. This type of technology demonstrates how machine learning can automatically sift through hundreds of models to identify the one that best fits the data, a process that is far more efficient than manual methods.
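The idea of automatically sifting through candidate models can be illustrated with a minimal sketch. Everything here is hypothetical for illustration (the toy validation data and the three candidate "models" are invented, not any specific platform's API): fit or define several candidates, score each on held-out data, and keep the one with the lowest error.

```python
# Minimal model-selection sketch: score candidate models on held-out
# data and keep the one with the lowest validation error.
# All data and "models" here are hypothetical illustrations.

def mean_squared_error(y_true, y_pred):
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

# Held-out validation data (toy numbers, roughly y = 2x).
x_val = [1.0, 2.0, 3.0, 4.0]
y_val = [2.1, 3.9, 6.2, 7.8]

# Candidate models: each maps x -> predicted y.
candidates = {
    "constant": lambda x: 5.0,
    "linear_2x": lambda x: 2.0 * x,
    "linear_3x": lambda x: 3.0 * x,
}

def select_best(candidates, x_val, y_val):
    """Score every candidate on the validation set and return the
    name of the best one along with all scores."""
    scores = {
        name: mean_squared_error(y_val, [model(x) for x in x_val])
        for name, model in candidates.items()
    }
    return min(scores, key=scores.get), scores

best, scores = select_best(candidates, x_val, y_val)
print(best)  # "linear_2x" fits y ≈ 2x best
```

Real platforms apply the same loop at scale, swapping in hundreds of model families and cross-validated scoring, but the selection logic is the same comparison over held-out error.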

In the oncology segment, for example, it’s responsible for innovative precision medicine treatments that target specific genetic mutations in cancer patients. Similar approaches are used in studies for:

  • Neurodegenerative diseases
  • Cardiovascular diseases
  • Chronic diseases  
  • Metabolic diseases
  • Infectious disease segments

Rapid development is critical in such fields, and AI offers great help in making the process more efficient. These applications will likely extend to emerging diseases as AI continues to evolve. Experts even predict that the AI drug discovery market will grow from around US$1.5 billion in 2023 to about US$20.3 billion by 2030. Advanced technologies, increased availability of healthcare data, and substantial investments in healthcare technology are the main drivers for its growth.

From Molecules to Fraud Patterns

So, how do AI-assisted drug discovery tools end up playing a role in insurance fraud detection? It’s all about pattern recognition. The AI-based tools used in drug optimization can analyze chemical structures and molecular libraries to find hidden correlations. In the insurance industry, the same capability can scan through patient populations, treatment claims, and medical records to identify suspicious billing or treatment patterns.

The applications in drug discovery often require processing terabytes of data from research institutions, contract research organizations, and pharmaceutical sectors. In fraud detection, the inputs are different—claims data, treatment histories, and reimbursement requests. The analytical methods remain similar, however. Both use unsupervised learning to flag anomalies and predictive analytics to forecast outcomes, whether that’s a promising therapeutic drug or a suspicious claim.
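As a hedged sketch of that shared analytical approach (the claims data below is invented, and a production fraud system would use far richer features and models): a simple unsupervised anomaly check flags claims whose billed amounts fall far outside the statistical norm for their procedure.

```python
# Unsupervised anomaly flagging via z-scores: claims far from the
# mean billed amount for their procedure code are marked suspicious.
# The claims below are invented for illustration only.
from statistics import mean, stdev

claims = [
    {"id": f"C{i}", "procedure": "MRI", "amount": a}
    for i, a in enumerate(
        [1200.0, 1150.0, 1300.0, 1250.0, 1180.0,
         1220.0, 1280.0, 1210.0, 9800.0],  # last one is an outlier
        start=1,
    )
]

def flag_anomalies(claims, threshold=2.0):
    """Return ids of claims whose amount lies more than `threshold`
    standard deviations from the mean for that procedure."""
    flagged = []
    by_proc = {}
    for c in claims:
        by_proc.setdefault(c["procedure"], []).append(c)
    for group in by_proc.values():
        amounts = [c["amount"] for c in group]
        if len(amounts) < 3:
            continue  # too few samples to estimate spread
        mu, sigma = mean(amounts), stdev(amounts)
        for c in group:
            if sigma > 0 and abs(c["amount"] - mu) / sigma > threshold:
                flagged.append(c["id"])
    return flagged

print(flag_anomalies(claims))  # ['C9']
```

The same flag-the-outlier logic, applied to molecular property distributions instead of billing amounts, is what lets drug discovery pipelines surface unusual compounds, which is exactly the capability overlap the article describes.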

Practical Applications In and Out of the Lab

Let’s break down how this dual application works in real-world scenarios:

  • In the lab: AI helps identify small-molecule drugs, perform high-throughput screening, and refine clinical trial designs. Using generative models and computational power, scientists can simulate trial outcomes and optimize patient recruitment strategies, leading to better trial results, fewer delays, and improved drug safety.
  • In insurance fraud detection: Advanced analytics can detect billing inconsistencies, unusual prescription patterns, or claims that don’t align with approved therapeutic product development pathways. It protects insurance systems from losing funds that could otherwise support genuine patients and innovative therapies.

This shared analytical backbone creates an environment for innovation that benefits both the pharmaceutical sector and healthcare insurers.

Challenges and Future Outlook

The integration of AI in drug discovery and insurance fraud detection is promising, but it comes with challenges. Patient data privacy, for instance, is a major concern for both applications, whether it’s clinical trial information or insurance claims data. The regulatory framework around healthcare data is constantly changing, and companies need to stay compliant across both pharmaceutical and insurance sectors.

On the fraud detection side, AI systems need to balance catching real fraud without flagging legitimate claims. False positives can delay patient care and create administrative headaches. Also, fraudsters are getting more sophisticated, so detection algorithms need constant updates to stay ahead.

Despite these hurdles, the market growth for these integrated solutions is expected to outpace other applications due to their dual benefits. With rising healthcare costs and more complex fraud schemes, insurance companies are under increasing pressure to protect their systems while still covering legitimate treatments.

Looking ahead, AI-driven fraud detection is likely to become more sophisticated as it learns from drug discovery patterns. And as healthcare fraud becomes more complex and treatment options expand, we can expect these cross-industry AI solutions to play an even bigger role in protecting healthcare dollars.

Final Thoughts

The crossover between AI drug discovery tools and insurance fraud detection shows how pattern recognition technology can solve problems across different industries. What started as a way to find new medicines is now helping catch fraudulent claims and protect healthcare dollars.

For patients, this dual approach means both faster access to new treatments and better protection of the insurance systems that help pay for their care. For the industry, it’s about getting more value from AI investments; the same technology that helps develop drugs can also stop fraud from draining resources. It’s a smart example of how one innovation can strengthen healthcare from multiple angles.






Research Tip Sheet: AI and Heart Failure Plus Recent Headlines

LOS ANGELES (Sept. 12, 2025) — An artificial intelligence (AI) program created by Cedars-Sinai may reduce hospitalizations in people diagnosed with heart failure, a new study reports.

The study, published in JACC: Heart Failure, included 50 people who had been diagnosed with a condition called heart failure with reduced ejection fraction, in which the heart’s main pumping chamber, the left ventricle, becomes too weak to circulate blood throughout the body.

For three months, patients used a smartphone app to transmit home blood pressure readings to their cardiologists. The blood pressure readings were analyzed by an AI program that generated prescribing recommendations to the cardiologists, such as whether a new drug should be added or a dosage changed. The software, named HF-AI (for heart failure AI), was trained using data from Cedars-Sinai patients with heart failure between 2020 and 2022 and incorporates national and international heart failure guidelines.

Cardiologists accepted HF-AI medication and dose recommendations 90.8% of the time. This meant they more than doubled their use of guideline-directed heart failure medications. The program also dramatically decreased hospitalizations. Among the 50 enrolled patients, 23 were hospitalized in the six months before enrolling in the trial. In the six months after the intervention, only six were hospitalized, a 74% reduction.
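The reported 74% figure follows directly from the hospitalization counts given in the study summary; a quick check of the arithmetic:

```python
# Verifying the reported hospitalization reduction from the counts
# in the study summary: 23 hospitalized before vs. 6 after.
before, after = 23, 6
reduction = (before - after) / before
print(f"{reduction:.0%}")  # prints "74%"
```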

Investigators plan to use and study the program with more Cedars-Sinai patients.

“People with heart failure are among our most fragile patients, with extremely high risk of hospitalization and death,” said first author and co-inventor Raj Khandwalla, MD, division chief of Cardiology at Cedars-Sinai Medical Group and director of Digital Therapeutics at the Smidt Heart Institute. “By translating home blood pressure data into treatment advice, HF-AI lets us fine-tune medications sooner and keep more patients out of the hospital.”

This study was funded by Cedars-Sinai Technology Ventures.

“This research is a testament to the mission of Cedars-Sinai Technology Ventures to invest in innovative technology and improve clinical outcomes for patients,” said James Laur, JD, chief intellectual property officer for Technology Ventures.

Other Cedars-Sinai authors of the study include Alex Shvartser, MS; Raymond J. Zimmer, MD; Merije Chukumerije, MD; Michael Share, MD; Ronit Zadikany, MD; Michael Farkouh, MD; Yaron Elad, MD; and Michelle Maya Kittleson, MD, PhD.

Gregg Fonarow, MD, of UCLA Medical Center also authored the study.

Declaration of interests: The paper describes software that is the subject of U.S. Provisional Patent Application number 63/314,207, filed by Cedars-Sinai Medical Center on February 25, 2022. Dr. Fonarow has done consulting for Abbott, Amgen, AstraZeneca, Bayer, Boehringer Ingelheim, Cytokinetics, Eli Lilly, Johnson and Johnson, Medtronic, Merck, Novartis, and Pfizer. All other authors have reported that they have no relationships relevant to the contents of this paper to disclose.

In Case You Missed It

Recent headlines from the Cedars-Sinai Newsroom

Cedars-Sinai Health Sciences University is advancing groundbreaking research and educating future leaders in medicine, biomedical sciences and allied health sciences. Learn more about the university.






LLMs will hallucinate forever – here is what that means for your AI strategy

The AI’s inescapable rulebook

Now let’s apply this to your AI. Its rulebook is the vast dataset it was trained on. It has ingested a significant portion of human knowledge, but that knowledge is itself a finite, inconsistent, and incomplete system. It contains contradictions, falsehoods, and, most importantly, gaps.

An AI, operating purely within its training data, is like a manager who refuses to think outside the company manual. When faced with a query that falls into one of Gödel’s gaps – a question where the answer is true but not provable from its data – the AI does not have the human capacity to say, “I do not know,” or to seek entirely new information. Its core programming is to respond. So, it does what the OpenAI paper describes: it auto-completes, or hallucinates. It creates a plausible-sounding reality based on the patterns in its data.

The AI invents a financial figure because the pattern suggests a number should be there. It cites a non-existent regulatory case because the pattern of legal language is persuasive. It designs a product feature that is physically impossible because the training data contains both engineering truths and science fiction.

The AI’s hallucination is not simply a technical failure; it is a Gödelian inevitability. It is the system’s attempt to be complete, which forces it to become inconsistent, unless the system says, “I don’t know,” in which case the system would be consistent but incomplete. Interestingly, OpenAI’s latest model has a feature billed as an improvement, namely its “abstention rate” (the rate at which the model admits that it cannot provide an answer). This rate has gone from about 1% in previous models to over 50% in GPT-5.


