AI Insights
AI helped older adults report accurate blood pressure readings at home

Research Highlights:
- Use of an AI voice agent to prompt self-reported blood pressure readings improved the accuracy of blood pressure measures and patient outcomes in a group of patients with high blood pressure, most of whom were ages 65 or older.
- The study’s findings demonstrate how integrating AI into care can help to improve home blood pressure monitoring and completion rates, which can lead to improved quality outcomes for patients.
- Note: The study featured in this news release is a research abstract. Abstracts presented at the American Heart Association’s scientific meetings are not peer-reviewed, and the findings are considered preliminary until published as a full manuscript in a peer-reviewed scientific journal.
Embargoed until 10:00 a.m. ET / 9:00 a.m. CT, Sunday, Sept. 7, 2025
BALTIMORE, Sept. 7, 2025 — Artificial intelligence (AI) voice agents helped older adults with high blood pressure to accurately report their blood pressure readings and improved blood pressure management, according to preliminary research presented at the American Heart Association’s Hypertension Scientific Sessions 2025. The meeting is in Baltimore, September 4-7, 2025, and is the premier scientific exchange focused on recent advances in basic and clinical research on high blood pressure and its relationship to cardiac and kidney disease, stroke, obesity and genetics.
“Controlling blood pressure remains a cornerstone for improving cardiovascular outcomes for patients; however, capturing timely, compliant blood pressure readings remains a challenge, particularly for patients with limited access to care,” said lead study author Tina-Ann Kerr Thompson, M.D., senior vice president of the primary care service line and executive director of the population health collaborative at Emory Healthcare in Atlanta. “In our study, we were able to improve the accuracy of blood pressure measures and patient outcomes.”
AI voice agents are conversational systems powered by large language models that can understand and produce natural speech in real time when interacting with humans. This study included 2,000 adults, most of them ages 65 or older, and was designed to evaluate the effectiveness and scalability of a voice-enabled AI agent in engaging patients to self-report accurate blood pressure readings, in place of a phone call with a health care professional about their blood pressure measures. The AI agent also identified patients in need of follow-up medical care based on their blood pressure readings.
The AI voice-agent calls to patients were made using commercially available AI in multiple languages, including English and Spanish. A blood pressure reading outside the threshold range, which varied based on the presence of other conditions such as diabetes, resulted in the call being escalated to a licensed nurse or medical assistant. The presence of symptoms such as dizziness, blurred vision or chest pain also prompted escalation of the call. Escalation to additional care was immediate in urgent situations or within 24 hours for non-urgent issues.
The patients were contacted by the voice agent to provide recent blood pressure readings or to conduct live measurements during the call. After the call, the readings were entered into the patient’s electronic health record and reviewed by a clinician. Call routing and referrals for care management were prompted for patients with difficult-to-control high blood pressure. This process reduced clinicians’ manual workload and resulted in an 88.7% lower cost per reading, calculated by comparing the cost of the commercially available AI voice agents with the cost of having nurses perform similar tasks to obtain patients’ self-reported blood pressure readings.
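In broad strokes, the triage rules described above resemble the minimal sketch below. The thresholds, symptom list and function names are hypothetical placeholders for illustration only; the study relied on clinician-defined ranges that varied by condition and did not publish its implementation.

```python
# Illustrative sketch only: thresholds and labels are hypothetical placeholders,
# not the clinician-defined ranges used in the study.

URGENT_SYMPTOMS = {"dizziness", "blurred vision", "chest pain"}

def triage_reading(systolic, diastolic, symptoms=(), has_diabetes=False):
    """Decide how to route a self-reported blood pressure reading."""
    # Assumed example thresholds; the study varied them by comorbidity.
    sys_limit, dia_limit = (130, 80) if has_diabetes else (140, 90)

    if any(s in URGENT_SYMPTOMS for s in symptoms):
        return "escalate immediately to a licensed nurse or medical assistant"
    if systolic >= sys_limit or diastolic >= dia_limit:
        return "escalate for follow-up within 24 hours"
    return "record in the electronic health record for clinician review"

# Example: a reading of 152/94 with no symptoms would be flagged for follow-up.
print(triage_reading(152, 94))
```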
The study found that integrating AI into clinical workflows lowered costs and improved care management for patients. During the study period:
- 85% of patients were successfully reached by the voice-based AI agent.
- Of those patients, 67% completed the call, and 60% took a compliant blood pressure reading during the call. Among these patients, 68% met CBP (controlling blood pressure) Stars compliance thresholds.
- Overall, 1,939 CBP gaps were closed, a 17% improvement that raised the Medicare Advantage (MA) Healthcare Effectiveness Data and Information Set (HEDIS) CBP measure from a previously reported 1-star rating to a 4-star rating.
- At the end of each completed call, patients received a two-question survey to rate their satisfaction on a scale of 1 to 10, with 10 being 100% satisfied. Among the completed calls, the average patient-reported satisfaction score exceeded 9 out of 10, reflecting an excellent overall experience with the voice-based AI agent.
“We were surprised by the high patient satisfaction scores after interacting with artificial intelligence-based voice agents,” said Thompson. “We are excited for what that means for the future, since patient engagement and satisfaction are so critical to health care outcomes.”
“This could be a game-changing study,” said Eugene Yang, M.D., M.S., FACC, an American Heart Association volunteer expert. “Accurate blood pressure readings are essential to improving control, and new approaches can help make that possible. Breakthrough AI technologies like this could transform how we manage blood pressure by reaching patients wherever they are and addressing critical barriers, such as limited access to care and gaps in patient support.” Yang, who was not involved in this study, is a professor in the division of cardiology and the Carl and Renée Behnke Endowed Chair for Asian Health at University of Washington School of Medicine.
The study has several limitations. It was observational and did not have a control group. The AI calls were not compared with human calls; AI voice calls were deployed because it was not feasible to make an adequate number of human-only calls. In addition, the study was retrospective, meaning it reviewed existing data, and evaluation was completed after the calls had already been made.
Study details, background and design:
- Participants included 2,000 adults receiving care for high blood pressure; a majority were ages 65 or older (average age of 72 years; 61% women).
- Review of electronic health records identified patients who were missing blood pressure data or whose most recent BP reading was not within the normal range of <120/80 mm Hg. Patients with these gaps in data were tagged to receive calls from the AI voice agent.
- The study was conducted with patients at Emory Healthcare in Atlanta during a 10-week period. Patients received at least one phone call during the study. Patients received more than one call if they did not answer the phone.
- Patients with open gaps in managing blood pressure were identified through electronic medical records (EMR) and payer analytics. Patient lists were reviewed to ensure the information in their records was correct and were verified by a clinical operations team to confirm the gaps were still accurate in real time before outreach to the patients.
- Patients were contacted through AI texts, phone calls from the conventional care team and generative AI voice agents to provide recent blood pressure readings or to take a reading during the call; documented blood pressure readings from any recent clinical visits were also included.
- A post-call validation step was integrated into the workflow, in which readings were entered into the EHR, reviewed by a clinician and submitted as supplemental data to close the Stars quality gap. For patients with uncontrolled high blood pressure, clinical escalation referrals were made to care management teams.
- The Centers for Medicare and Medicaid Services (CMS) developed the Star Ratings system, known as MA Stars, to rate Medicare Advantage (MA) (Part C) and prescription drug (Part D) plans on a 5-star scale with 1 being the lowest score and 5 being the highest score. MA plans are plans from private insurance companies approved by Medicare and not issued by Medicare itself. Hospitals, care centers and clinicians are eligible to receive a bonus payment increase if they achieve at least a 4-star rating.
Self-measured blood pressure is a focus area of Target:BP, an American Heart Association initiative that helps health care organizations improve blood pressure control rates through an evidence-based program. Home blood pressure monitoring is recommended for all adults with any level of high blood pressure, as noted in the Association’s new 2025 guideline on high blood pressure, released last month.
Note: Oral presentation #107 is at 10:00 a.m. ET, Sunday, Sept. 7, 2025.
Co-authors, their disclosures and funding sources are listed in the abstract.
Statements and conclusions of studies that are presented at the American Heart Association’s scientific meetings are solely those of the study authors and do not necessarily reflect the Association’s policy or position. The Association makes no representation or guarantee as to their accuracy or reliability. Abstracts presented at the Association’s scientific meetings are not peer-reviewed, rather, they are curated by independent review panels and are considered based on the potential to add to the diversity of scientific issues and views discussed at the meeting. The findings are considered preliminary until published as a full manuscript in a peer-reviewed scientific journal.
The Association receives more than 85% of its revenue from sources other than corporations. These sources include contributions from individuals, foundations and estates, as well as investment earnings and revenue from the sale of our educational materials. Corporations (including pharmaceutical, device manufacturers and other companies) also make donations to the Association. The Association has strict policies to prevent any donations from influencing its science content and policy positions. Overall financial information is available here.
###
The American Heart Association’s Hypertension Scientific Sessions 2025 is a premier scientific conference dedicated to recent advancements in both basic and clinical research related to high blood pressure and its connections to cardiac and kidney diseases, stroke, obesity and genetics. The primary aim of the meeting is to bring together interdisciplinary researchers from around the globe and facilitate engagement with leading experts in the field of hypertension. Attendees will have the opportunity to discover the latest research findings and build lasting relationships with researchers and clinicians across various disciplines and career stages. Follow the conference on X using the hashtag #Hypertension25.
About the American Heart Association
The American Heart Association is a relentless force for a world of longer, healthier lives. Dedicated to ensuring equitable health in all communities, the organization has been a leading source of health information for more than one hundred years. Supported by more than 35 million volunteers globally, we fund groundbreaking research, advocate for the public’s health, and provide critical resources to save and improve lives affected by cardiovascular disease and stroke. By driving breakthroughs and implementing proven solutions in science, policy, and care, we work tirelessly to advance health and transform lives every day. Connect with us on heart.org, Facebook, X or by calling 1-800-AHA-USA1.
For Media Inquiries and AHA Expert Perspective:
AHA Communications & Media Relations in Dallas: 214-706-1173; ahacommunications@heart.org
Michelle Kirkwood: Michelle.Kirkwood@heart.org
For Public Inquiries: 1-800-AHA-USA1 (242-8721)
heart.org and stroke.org
AI Insights
Westwood joins 40 other municipalities using artificial intelligence to examine roads

The borough of Westwood has started using artificial intelligence to determine if its roads need to be repaired or repaved.
Elected officials see it as a way to save money on manpower and to ensure that all decisions are objective.
Instead of relying on his own two eyes, the superintendent of Public Works is now allowing an app on his phone to record images of Westwood’s roads as he drives them.
The app collects data on every pothole, faded striping and 13 other types of road defects.
The road management app is from a New Jersey company called Vialytics.
Westwood is one of 40 municipalities in the state to use the software, which also rates road quality and provides easy-to-use data.
“Now you’re relying on the facts here not just my opinion of the street. It’s helped me a lot already. A lot of times you’ll have residents who just want their street paved. Now I can go back to people and say there’s nothing wrong with your street that it needs to be repaved,” said Rick Woods, superintendent of Public Works.
Superintendent Woods says he can even create work orders from the road as soon as a defect is detected.
Borough officials believe the Vialytics app will pay for itself in manpower and offer elected officials objective data when determining how to use taxpayer dollars for roads.
AI Insights
How AI Simulations Match Up to Real Students—and Why It Matters

AI-simulated students consistently outperform real students—and make different kinds of mistakes—in math and reading comprehension, according to a new study.
That could cause problems for teachers, who increasingly use general prompt-based artificial intelligence platforms to save time on daily instructional tasks. Sixty percent of K-12 teachers report using AI in the classroom, according to a June Gallup study, with more than 1 in 4 regularly using the tools to generate quizzes and more than 1 in 5 using AI for tutoring programs. The findings suggest that, even when prompted to cater to students of a particular grade or ability level, the underlying large language models may create inaccurate portrayals of how real students think and learn.
“We were interested in finding out whether we can actually trust the models when we try to simulate any specific types of students. What we are showing is that the answer is in many cases, no,” said Ekaterina Kochmar, co-author of the study and an assistant professor of natural-language processing at the Mohamed bin Zayed University of Artificial Intelligence in the United Arab Emirates, the first university dedicated entirely to AI research.
How the study tested AI “students”
Kochmar and her colleagues prompted 11 large language models (LLMs), including those underlying generative AI platforms like ChatGPT, Qwen, and SocraticLM, to answer 249 mathematics questions and 240 reading questions drawn from the National Assessment of Educational Progress (NAEP) while adopting the persona of typical students in grades 4, 8, and 12. The researchers then compared the models’ answers with NAEP’s database of real student answers to the same questions to measure how closely the AI-simulated students’ performance mirrored that of actual students.
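In rough terms, the comparison works like the hypothetical sketch below. The prompt wording, the ask_model helper and the benchmark figure are illustrative assumptions, not the researchers’ actual protocol, prompts or data.

```python
# Illustrative sketch only: ask_model stands in for any LLM client, and the
# persona prompt and benchmark value are assumptions, not the study's materials.

def ask_model(prompt: str) -> str:
    """Hypothetical wrapper around an LLM API; returns the model's answer text."""
    raise NotImplementedError("plug in a real model client here")

def simulate_student(question: str, grade: int) -> str:
    """Ask the model to answer as a typical student of the given grade."""
    persona = (
        f"You are a typical grade-{grade} student taking a national assessment. "
        "Answer the way such a student realistically would, mistakes included."
    )
    return ask_model(f"{persona}\n\nQuestion: {question}\nAnswer:")

def accuracy_gap(questions, answer_key, real_pct_correct, grade):
    """Percent-correct difference between simulated and real students."""
    correct = sum(
        simulate_student(q, grade).strip() == answer_key[q] for q in questions
    )
    simulated_pct = 100 * correct / len(questions)
    return simulated_pct - real_pct_correct  # positive: the proxy over-performs
```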
The LLMs that underlie AI tools do not think but generate the most likely next word in a given context based on massive pools of training data, which might include real test items, state standards, and transcripts of lessons. By and large, Kochmar said, the models are trained to favor correct answers.
“In any context, for any task, [LLMs] are actually much more strongly primed to answer it correctly,” Kochmar said. “That’s why it’s very difficult to force them to answer anything incorrectly. And we’re asking them to not only answer incorrectly but fall in a particular pattern—and then it becomes even harder.”
For example, while a student might miss a math problem because he misunderstood the order of operations, an LLM would have to be specifically prompted to misuse the order of operations.
None of the tested LLMs created simulated students that aligned with real students’ math and reading performance in 4th, 8th, or 12th grades. Without specific grade-level prompts, the proxy students performed significantly higher than real students in both math and reading—scoring, for example, 33 percentile points to 40 percentile points higher than the average real student in reading.
Kochmar also found that simulated students “fail in different ways than humans.” While specifying grades in prompts did make the simulated students perform more like real students in terms of how many answers they got correct, they did not necessarily follow patterns tied to particular human misconceptions, such as order of operations in math.
The researchers found no prompt that fully aligned simulated and real student answers across different grades and models.
What this means for teachers
For educators, the findings highlight both the potential and the pitfalls of relying on AI-simulated students, underscoring the need for careful use and professional judgment.
“When you think about what a model knows, these models have probably read every book about pedagogy, but that doesn’t mean that they know how to make choices about how to teach,” said Robbie Torney, the senior director of AI programs at Common Sense Media, which studies children and technology.
Torney was not connected to the current study, but last month released a study of AI-based teaching assistants that similarly found alignment problems. AI models produce answers based on their training data, not professional expertise, he said. “That might not be bad per se, but it might also not be a good fit for your learners, for your curriculum, and it might not be a good fit for the type of conceptual knowledge that you’re trying to develop.”
This doesn’t mean teachers shouldn’t use general prompt-based AI to develop tools or tests for their classes, the researchers said, but that educators need to prompt AI carefully and use their own professional judgment when deciding if AI outputs match their students’ needs.
“The great advantage of the current technologies is that it is relatively easy to use, so anyone can access [them],” Kochmar said. “It’s just at this point, I would not trust the models out of the box to mimic students’ actual ability to solve tasks at a specific level.”
Torney said educators need more training to understand not just the basics of how to use AI tools but their underlying infrastructure. “To be able to optimize use of these tools, it’s really important for educators to recognize what they don’t have, so that they can provide some of those things to the models and use their professional judgment.”
AI Insights
We’re Entering a New Phase of AI in Schools. How Are States Responding?

Artificial intelligence topped the list of state technology officials’ priorities for the first time, according to an annual survey released by the State Educational Technology Directors’ Association on Wednesday.
More than a quarter of respondents—26%—listed AI as their most pressing issue, compared to 18% in a similar survey conducted by SETDA last year. AI supplanted cybersecurity, which state leaders previously identified as their No. 1 concern.
About 1 in 5 state technology officials—21%—named cybersecurity as their highest priority, and 18% identified professional development and technology support for instruction as their top issues.
Forty percent of respondents reported that their state had issued guidance on AI. That’s a considerable increase from just two years ago, when only 2% of respondents to the same survey reported their state had released AI guidance.
State officials’ heightened attention on AI suggests that even though many more states have released some sort of AI guidance in the past year or two, officials still see a lot left on their to-do lists when it comes to supporting districts in improving students’ AI literacy, offering professional development about AI for educators, and crafting policies around cheating and proper AI use.
“A lot of guidance has come out, but now the rubber’s hitting the road in terms of implementation and integration,” said Julia Fallon, SETDA’s executive director, in an interview.
SETDA, along with Whiteboard Advisors, surveyed state education leaders—including ed-tech directors, chief information officers, and state chiefs—receiving more than 75 responses across 47 states. It conducted interviews with state ed-tech teams in Alabama, Delaware, Nebraska, and Utah and did group interviews with ed-tech leaders from 14 states.
AI professional development is a rising priority
States are taking myriad approaches to the AI challenge, the report noted.
Some states—such as North Carolina and Utah—designated an AI point person to help support districts in puzzling through the technology. For instance, Matt Winters, who leads Utah’s work, has helped negotiate statewide pricing for AI-powered ed-tech tools and worked with an outside organization to train 4,500 teachers on AI, according to the report.
Wyoming, meanwhile, has developed an “innovator” network that pays teachers to offer AI professional development to colleagues across the state. Washington hosted two statewide AI summits to help district and school leaders explore the technology.
And North Carolina and Virginia have used state-level competitive grant programs to support activities such as AI-specific professional development or AI-infused teaching and learning initiatives.
“As AI continues to evolve, developing connections with those in tech, in industry, and in commerce, as well as with other educators, will become more important than ever,” wrote Sydnee Dickson, formerly Utah’s state superintendent of public instruction, in an introduction to the report. “The technology is advancing too quickly for any one person or state to have all the answers.”