AI Research
UMD Researchers Leverage AI to Enhance Confidence in HPV Vaccination

Human papillomavirus (HPV) vaccination represents a critical breakthrough in cancer prevention, yet its uptake among adolescents remains disappointingly low. Despite overwhelming evidence supporting the vaccine's safety and efficacy against multiple types of cancer, including cervical, anal, and oropharyngeal cancers, only about 61% of teenagers aged 13 to 17 in the United States have received the recommended doses. Rates are even lower among younger children, starting at age nine, when the vaccine is first recommended. Closing this gap between scientific consensus and public hesitancy has become the focus of an innovative research project led by communication expert Professor Xiaoli Nan at the University of Maryland (UMD).
The project's core ambition is to harness artificial intelligence (AI) to transform the way vaccine information is communicated to parents, aiming to dismantle the barriers that fuel hesitancy. With a robust $2.8 million grant from the National Cancer Institute, part of the National Institutes of Health, Nan and her interdisciplinary team are developing a personalized, AI-driven chatbot. This technology is engineered not only to provide accurate health information but also to adapt dynamically to parents' individual concerns, beliefs, and communication preferences in real time, offering a tailored conversational experience that traditional brochures and websites simply cannot match.
HPV vaccination has long struggled with public misconceptions, stigma, and misinformation that discourage uptake. A significant factor behind the reluctance is tied to the vaccine’s association with a sexually transmitted infection, which prompts some parents to believe their children are too young for the vaccine or that vaccination might imply premature engagement with sexual activity. This misconception, alongside a lack of tailored communication strategies, has contributed to persistent disparities in vaccination rates. These disparities are especially pronounced among men, individuals with lower educational attainment, and those with limited access to healthcare, as Professor Cheryl Knott, a public health behavioral specialist at UMD, highlights.
Unlike generic informational campaigns, the AI chatbot leverages cutting-edge natural language processing (NLP) to simulate nuanced human dialogue. However, it does so without succumbing to the pitfalls of generative AI models, such as ChatGPT, which can sometimes produce inaccurate or misleading answers. Instead, the system draws on large language models to generate a comprehensive array of possible responses. These are then rigorously curated and vetted by domain experts before deployment, ensuring that the chatbot’s replies remain factual, reliable, and sensitive to users’ needs. When interacting live, the chatbot analyzes parents’ input in real time, selecting the most appropriate response from this trusted set, thereby balancing flexibility with accuracy.
This “middle ground” model, as described by Philip Resnik, an MPower Professor affiliated with UMD’s Department of Linguistics and Institute for Advanced Computer Studies, preserves the flexibility of conversational AI while instituting “guardrails” to maintain scientific integrity. The approach avoids the rigidity of scripted chatbots that deliver canned, predictable replies; simultaneously, it steers clear of the “wild west” environment of fully generative chatbots, where the lack of control can lead to misinformation. Instead, it offers an adaptive yet responsible communication tool, capable of engaging parents on their terms while preserving public health objectives.
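To make that architecture concrete, the sketch below shows the general retrieve-from-a-vetted-set pattern the article describes: at run time the chatbot never generates free text, it only selects the expert-approved reply that best matches the parent's message. The example responses, the function names, and the TF-IDF similarity measure are illustrative assumptions, not the UMD team's actual implementation.

```python
# Minimal sketch of a "curated response" chatbot turn. The bot only selects
# from a fixed set of expert-vetted replies; nothing is generated at run time.
# TF-IDF cosine similarity is an illustrative stand-in for the NLP used in practice.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical expert-vetted replies (not taken from the UMD project).
VETTED_RESPONSES = [
    "The HPV vaccine is recommended starting at age 9 because it works best "
    "well before any exposure to the virus.",
    "Large safety studies covering millions of doses have found the HPV "
    "vaccine to be very safe, with mostly mild side effects.",
    "HPV vaccination protects against several cancers, including cervical, "
    "anal, and oropharyngeal cancers.",
]

vectorizer = TfidfVectorizer()
response_matrix = vectorizer.fit_transform(VETTED_RESPONSES)

def select_reply(parent_message: str) -> str:
    """Return the vetted response most similar to the parent's message."""
    query = vectorizer.transform([parent_message])
    scores = cosine_similarity(query, response_matrix)[0]
    return VETTED_RESPONSES[scores.argmax()]

if __name__ == "__main__":
    print(select_reply("Isn't my 9-year-old too young for this vaccine?"))
```

In this pattern the "guardrails" Resnik describes come from the response bank itself: flexibility lives in how the user's message is interpreted and matched, while every possible output has already been reviewed by domain experts.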
The first phase of this ambitious project emphasizes iterative refinement of the chatbot through a user-centered design process, collecting extensive feedback from parents, healthcare providers, and community stakeholders to optimize the chatbot's effectiveness and cultural sensitivity. Once this foundational work is complete, the team plans to conduct two rigorous randomized controlled trials. The first will be conducted online with a nationally representative sample of U.S. parents, comparing the chatbot's impact on vaccine acceptance against traditional CDC pamphlets. The second will take place in clinical settings in Baltimore, including pediatric offices, to observe how the chatbot influences decision-making in real-world healthcare environments.
Min Qi Wang, a behavioral health professor participating in the project, emphasizes that “tailored, timely, and actionable communication” facilitated by AI signals a paradigm shift in public health strategies. This shift extends beyond HPV vaccination, as such advanced communication systems possess the adaptability to address other complex public health challenges. By delivering personalized guidance directly aligned with users’ expressed concerns, AI can foster a more inclusive health dialogue that values empathy and relevance, which traditional mass communication methods often lack.
Beyond increasing HPV vaccination rates, the research team envisions broader implications for public health infrastructure. In an era where misinformation can spread rapidly and fear often undermines scientific recommendations, AI-powered tools offer a scalable, responsive mechanism to disseminate trustworthy information quickly. During future pandemics or emergent health crises, such chatbots could serve as critical channels for delivering customized, real-time guidance to diverse populations, helping to flatten the curve of misinformation while respecting individual differences.
The integration of AI chatbots into health communication represents a fusion of technological innovation with behavioral science, opening new horizons for personalized medicine and health education. By engaging users empathetically and responsively, these systems can build trust and facilitate informed decision-making, critical components of successful public health interventions. Professor Nan highlights the profound potential of this marriage between AI and public health communication by posing the fundamental question: “Can we do a better job with public health communication—with speed, scale, and empathy?” Project outcomes thus far suggest an affirmative answer.
As the chatbot advances through its pilot phases and into clinical trials, the research team remains committed to maintaining a rigorous scientific approach, ensuring that the tool’s recommendations align with the highest standards of evidence-based medicine. This careful balance between innovation and reliability is essential to maximize public trust and the chatbot’s ultimate impact on vaccine uptake. Should these trials demonstrate efficacy, the model could serve as a blueprint for deploying AI-driven communication tools across various domains of health behavior change.
Moreover, the collaborative nature of this project—bringing together communication experts, behavioral scientists, linguists, and medical professionals—illustrates the importance of interdisciplinary efforts in addressing complex health challenges. Each field contributes unique insights: linguistic analysis enables nuanced conversation design, behavioral science guides motivation and persuasion strategies, and medical expertise ensures factual accuracy and clinical relevance. This holistic framework strengthens the chatbot’s ability to resonate with diverse parent populations and to overcome entrenched hesitancy.
In conclusion, while HPV vaccines represent a major advancement in cancer prevention, their potential remains underutilized due to deeply embedded hesitancy fueled by stigma and misinformation. Leveraging AI-driven, personalized communication stands as a promising strategy to bridge this gap. The University of Maryland’s innovative chatbot project underscores the use of responsible artificial intelligence to meet parents where they are, addressing their unique concerns with empathy and scientific rigor. This initiative not only aspires to improve HPV vaccine uptake but also to pave the way for AI’s transformative role in future public health communication efforts.
Subject of Research: Artificial intelligence-enhanced communication to improve HPV vaccine uptake among parents.
Article Title: Transforming Vaccine Communication: AI Chatbots Target HPV Vaccine Hesitancy in Parents
Web References:
https://sph.umd.edu/people/cheryl-knott
https://sph.umd.edu/people/min-qi-wang
Image Credits: University of Maryland (UMD)
Keywords: Vaccine research, Science communication
Tags: adolescent vaccination rates, AI-driven health communication, cancer prevention strategies, chatbot technology in healthcare, evidence-based vaccine education, HPV vaccination awareness, innovative communication strategies for parents, National Cancer Institute funding, overcoming vaccine hesitancy, parental engagement in vaccination, personalized health information, University of Maryland research
AI Research
Companies Bet Customer Service AI Pays

Klarna’s $15 billion IPO was more than a financial milestone. It spotlighted how the Swedish buy-now-pay-later (BNPL) firm is grappling with artificial intelligence (AI) at the heart of its operations.
AI Research
Artificial intelligence (AI)-powered anti-ship missile with double the range

Questions and answers:
- What is the primary feature of the LRASM C-3 missile compared to earlier variants? It has nearly double the range of previous versions, with a range of about 1,000 miles, compared to 200 to 300 miles for the C-1 and 580 miles for the C-2.
- How does artificial intelligence enhance the LRASM C-3’s capabilities? AI supports autonomous mission planning, target discrimination, and attack coordination; it also helps the missile adjust flight paths based on real-time data, identify and track moving targets, and adapt to changing conditions such as jamming and interference.
- What can launch the LRASM C-3 missile? U.S. Air Force B-1B bombers, Navy F/A-18E/F Super Hornets, and F-35 Lightning II jets, with possible future launches from Navy ships and attack submarines.
PATUXENT RIVER NAS, Md. – U.S. Navy surface warfare experts are asking Lockheed Martin Corp. to move forward with developing the new LRASM C-3 anti-ship missile with double the range of previous versions.
Officials of the Naval Air Systems Command at Patuxent River Naval Air Station, Md., announced a $48.1 million order last month to the Lockheed Martin Missiles and Fire Control segment in Orlando, Fla., for engineering to establish the Long Range Anti-Ship Missile (LRASM) C-3 variant.
The subsonic LRASM is for attacking high-priority enemy surface warships like aircraft carriers, troop transport ships, and guided-missile cruisers from Navy, U.S. Air Force, and allied aircraft.
LRASM is designed to detect and destroy high-priority targets within groups of ships from extended ranges in electronic warfare jamming environments. It is a precision-guided, standoff anti-ship missile based on the Lockheed Martin Joint Air-to-Surface Standoff Missile-Extended Range (JASSM-ER).
1,000-mile range
The LRASM C-3 variant has a range of nearly 1,000 miles, compared to the 200-to-300-mile range of the C-1 variant and the 580-mile range of the C-2 variant.
LRASM C-3 also introduces machine learning and advanced artificial intelligence (AI) algorithms to enhance autonomous mission planning, target discrimination, and attack coordination, even amid intense electronic warfare (EW) jamming.
The C-3 also can exchange information with military satellites, and has an enhanced imaging infrared and RF seeker for survivability and target identification.
The C-3 also can be launched from the Air Force B-1B strategic bomber, as well as from the Navy F/A-18E/F Super Hornet fighter-bomber and the F-35 Lightning II. Navy leaders also envision using the MK 41 shipboard vertical launch system with the LRASM C-3, and are considering options to launch the missile from attack submarines.
Tell me more about applying artificial intelligence to missile guidance …
- Applying artificial intelligence to missile guidance enhances precision, adapts to dynamic environments, and improves real-time decision-making. AI can help missiles navigate autonomously by using real-time data from radar, infrared sensors, and GPS to adjust flight paths. AI also can help missiles visually identify targets from images or video feeds, recognizing and tracking moving targets and predicting their paths even when they change direction or speed (see the sketch below). Using AI, missile guidance systems can make real-time adjustments to their trajectory based on changing conditions like wind, RF interference, and jamming. Missiles also may use AI to coordinate with other weapons in swarm tactics, and to operate effectively against countermeasures.
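As a rough illustration of the target-prediction idea in the item above, the sketch below uses a classical alpha-beta tracking filter, a simple non-AI technique, to estimate a target's velocity from noisy position reports and forecast its next position. It is purely illustrative, with made-up names and numbers, and has no connection to LRASM's actual guidance software.

```python
# Toy illustration of predicting a moving target's next position from noisy
# position observations, using a classical alpha-beta tracking filter.
# Generic textbook technique; nothing here is specific to LRASM or its AI.
def alpha_beta_track(observations, dt=1.0, alpha=0.85, beta=0.005):
    """Return one-step-ahead position predictions for each new observation."""
    x, v = observations[0], 0.0        # initial position estimate and velocity
    predictions = []
    for z in observations[1:]:
        x_pred = x + v * dt            # predict next position from current state
        residual = z - x_pred          # how far off the prediction was
        x = x_pred + alpha * residual  # correct the position estimate
        v = v + (beta / dt) * residual # correct the velocity estimate
        predictions.append(x + v * dt) # forecast the following step
    return predictions

if __name__ == "__main__":
    # The target speeds up halfway through; the filter adapts its velocity estimate.
    observed = [0, 10, 20, 30, 45, 60, 75, 90]
    print(alpha_beta_track(observed))
```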
Helping to extend the LRASM C-3’s range are an advanced multi-mode sensor suite; enhanced data exchange and communications; digital anti-jam GPS and navigation; and AI and machine learning capabilities.
The missile’s multi-mode sensor suite is expected to blend imaging infrared and RF sensors to help the weapon identify and attack targets. Its communications will have data links for secure real-time communication with satellites, drones, and strike aircraft.
Digital anti-jam GPS and navigation will provide midcourse guidance to target areas far beyond the effective range of traditional systems. AI and machine learning, meanwhile, should help the missile identify targets and plan its routes autonomously. The LRASM C-3 version should enter service next year.
On this order, Lockheed Martin will do the work in Orlando and Ocala, Fla.; and in Troy, Ala., and should be finished in November 2026. For more information contact Lockheed Martin Missiles and Fire Control online at https://www.lockheedmartin.com/en-us/products/long-range-anti-ship-missile.html, or Naval Air Systems Command at www.navair.navy.mil.
AI Research
Human-Machine Understanding in AI | Machine Precision Meets Human Intuition
