AI Research
Meet Aimee, The Artificial Intelligence Ally Taking On South Africa’s HIV Crisis

There are approximately 7.5 million people living with HIV in South Africa, making it the largest HIV epidemic globally, according to a retrospective study published in the Southern African Journal of HIV Medicine. Women and young girls there are twice as likely to acquire HIV as men, with a national prevalence of 25% compared with 12% among men. This discrepancy can be attributed to education barriers, stigma and gender-based violence.
The nonprofit Audere is addressing this gap with its program, Self-Care from Anywhere, which was co-created with local community members and SHOUT-IT-NOW, a South African nonprofit providing youth-focused HIV prevention and sexual health services to approximately 1.5 million people in South Africa.
In the driver’s seat is Aimee, a customizable, empathetic, artificial intelligence (AI) companion available via WhatsApp. Aimee links users to sexual health information and HIV counseling. Conversations are monitored by local healthcare providers who can coordinate clinic appointments and provide personalized guidance.
In this interview, Sarah Morris, chief product officer of Audere, explains how AI is being used to support HIV care, how Aimee works and why it’s important to keep humans in the loop.
This interview has been edited for length and clarity.
MHE: How is AI being used to support the HIV response in South Africa?
Morris: The platform that we developed is called Self-Care from Anywhere, and it’s delivered through an empathetic, private AI companion called Aimee. SHOUT-IT-NOW has clinicians who monitor a telehealth center, and young women can get anonymous, direct access to the services they’re seeking.
After 18 months of codesigning with young women and young men, we dove into creating a multimodal, AI-powered digital intervention. We really hammered down on the need to empower and build trust by first informing people and by providing insights. Then, you can introduce the intervention.
Our target is women and girls ages 16 to 24, and that’s largely driven by where the HIV epidemic is hitting hardest in South Africa. Adolescent girls and young women are especially vulnerable: they face stigma, restricted access to services and a disproportionately high HIV prevalence.
MHE: How does Aimee work?
Morris: Aimee takes on three different personas, based on who young women told us they trust. We brought forward a bestie, a big sis and a nurse persona that you can talk to, and they change the way they talk, the emojis they use and the South African slang that they use.
What she’s doing is collecting information to understand their vulnerability and guiding them towards HIV self-testing. She knows about them; she remembers what they shared last time she asked them about their life.
MHE: How involved are humans in the Aimee experience?
Morris: We have humans in the loop at two different points. One is the human clinician who monitors conversations through the web portal and whom you can reach directly over WhatsApp.
Humans in the loop are also reviewing the responses of AI. They are clinicians in South Africa who are trained in the care delivery protocols there, but they’re also community members. They can say, ‘Yeah, that was a good response. No, that was a bad response.’
Humans are much better at reading empathy than AI, and AI is better at checking accuracy than humans. You should never let an AI tool loose in the wild because AI tools are not 100% accurate, but humans are not 100% accurate either. We’ve found in prior studies that together, that partnership is stronger.
MHE: What are the biggest barriers to developing software like Aimee?
Morris: Services like Aimee run on WhatsApp and obviously require internet access, and a platform like WhatsApp gives providers like us limited options to further reduce the cost of data for users.
For example, if you’re using five different chatbots on your WhatsApp, WhatsApp does not enable each chatbot to zero-rate their own data, so for the provider to take on the cost of the data used for their chatbot, it’s basically all WhatsApp data or nothing.
WhatsApp has 97% penetration in South Africa. A lot of youth in South Africa have WhatsApp data packages, but if, for example, you link out to another website, they can’t get to that website. You really need to think about what data needs to be right there within WhatsApp so that they can access it.
On the service side, I think it’s getting increasingly difficult to direct people to available services amid stockouts and unreliable clinics. When we think about the turmoil earlier this year, when everybody was getting stop-work orders, there were clinics that would close overnight and have to cancel all their appointments, and then a week later get notified that they could start services again. That makes it hard to plan or physically deliver services that are trusted, safe and peer-led.
MHE: How does Self-Care from Anywhere help healthcare workers?
Morris: A lot of the work we’re doing in Self-Care from Anywhere is not just Aimee. There is a full AI suite that helps the healthcare workers have the right context.
Healthcare workers work with a dashboard where they can see tasks that have been created for them to reach out to people who might be experiencing self-harm ideation or violence, have an urgent healthcare need, or have tested positive for HIV and should come in for confirmatory testing right away. We’re also giving them clinical summaries of what the client talked about with the AI companion, so they have the right context to provide the right care.
If somebody does upload a self-test, we run computer vision over it, which has been shown to be much better than humans at identifying faint lines, making sure we never miss a positive.
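Audere has not published the details of that pipeline, but a minimal sketch of the idea, reading a faint test line as a dip in brightness within a known region of the photo, might look like the following. The line_intensity helper, the band coordinates and the threshold are hypothetical placeholders for illustration, not Audere’s actual code.

```python
# Hypothetical sketch of faint-line detection on a rapid-test photo.
# The band coordinates and threshold below are illustrative placeholders.
import cv2

def line_intensity(image_path: str, test_band: tuple) -> float:
    """Return a contrast score for the test-line region of a rapid test.

    test_band is (x, y, width, height) for the area where the test line
    appears, assumed known from device-specific alignment.
    """
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        raise FileNotFoundError(image_path)
    # Equalize lighting so a faint line is not washed out by shadow or glare.
    img = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(img)
    x, y, w, h = test_band
    band = img[y:y + h, x:x + w]
    # A test line shows up as a dark vertical column in the cropped band:
    # compare the darkest column against the band's average brightness.
    column_means = band.mean(axis=0)
    return float(column_means.mean() - column_means.min())

# Flag anything above a (hypothetical) contrast threshold for clinician
# review, erring toward review so a faint positive is never silently missed.
if line_intensity("self_test.jpg", test_band=(120, 40, 200, 60)) > 8.0:
    print("Possible positive line detected; route to clinician for confirmation")
```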
There’s also a predictive vulnerability score, which helps prioritize if somebody is highly vulnerable to HIV acquisition. This lets workers know they should maybe spend their limited time today reaching out to them, versus somebody who might already be using condoms regularly and have a good understanding of their own health.
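Audere’s scoring model is not public, but as a rough illustration of how such a score could drive outreach prioritization, the sketch below ranks clients with a toy logistic-regression model. The features, labels and client data are invented for the example.

```python
# Hypothetical sketch of a vulnerability score used to rank outreach.
# Features, labels and clients are invented; this is not Audere's model.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: rows are past clients, columns are screening features
# (e.g. inconsistent condom use, partner age gap, no prior HIV test).
X_train = np.array([[0, 1, 0], [1, 0, 1], [0, 0, 0], [1, 1, 1]])
y_train = np.array([0, 1, 0, 1])  # 1 = later acquired HIV (illustrative)

model = LogisticRegression().fit(X_train, y_train)

# Score today's clients and surface the most vulnerable first, so a
# worker's limited outreach time goes to those most at risk.
clients = {"client_a": [1, 0, 1], "client_b": [0, 1, 0]}
scores = {cid: model.predict_proba([f])[0, 1] for cid, f in clients.items()}
for cid, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{cid}: vulnerability score {score:.2f}")
```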
AI Research
Here’s how doctors say you should ask AI for medical help

The Dose: What should I know about asking ChatGPT for health advice?
Family physician Dr. Danielle Martin doesn’t mince words about artificial intelligence.
“I don’t think patients should use ChatGPT for medical advice. Period,” said Martin, chair of the University of Toronto’s department of family and community medicine.
Still, with roughly 6.5 million Canadians without a primary care provider, she acknowledges that physicians can’t stop patients from turning to chatbots powered by large language models (LLMs) for health answers.
Martin isn’t alone in her concerns. Physician groups like the Ontario Medical Association and researchers at institutions like Sunnybrook Health Sciences Centre caution patients against relying on AI for medical advice.
A 2025 study comparing 10 popular chatbots, including ChatGPT, DeepSeek and Claude, found “a strong bias in many widely used LLMs towards overgeneralizing scientific conclusions, posing a significant risk of large-scale misinterpretations of research findings.”
Martin and other experts believe most patients would be better served by using telehealth options available across Canada, such as dialling 811 in most provinces.
But she also told The Dose host Dr. Brian Goldman that if they do choose to use chatbots, they can help reduce the risk of harm by avoiding open-ended questions and restricting AI-generated answers to credible sources.
Learning to ask the right questions
Unlike traditional search engines, which answer questions by pointing users to links from reputable sources, chatbots like Gemini, Claude and ChatGPT generate their own answers, based on patterns in the data they were trained on.
Martin says a key challenge is figuring out how much of an AI-generated answer to a medical question is or isn’t essential information.
If you ask a chatbot something like, “I have a red rash on my leg, what could it be?” you could be given a “dump of information,” which can do more harm than good.
“My concern is that the average busy person isn’t going to be able to read and process all of that information,” she said.
What’s more, if a patient asks “What do I need to know about lupus?”, for example, they “probably don’t know enough yet about lupus to be able to screen out or recognize the stuff that actually doesn’t make sense,” said Martin.
Martin says patients are often better served by asking chatbots for help finding reliable sources, like official government websites.
Instead of asking, “Should I get this year’s flu shot?” a better question would be, “What are the most reliable websites to learn more about this year’s flu shot?”
Be careful following treatment advice
Martin says that patients shouldn’t rely on solutions recommended by AI — like purchasing topical creams for rashes — without consulting a medical expert.
For symptoms like rashes, which may have many possible causes, Martin recommends speaking to a health-care worker rather than asking an AI at all.
Some people might also worry that an AI chatbot could talk patients out of consulting real-life physicians, but family physician Dr. Onil Bhattacharyya says it’s not as likely as some may fear.
“Generally the tools are … slightly risk-averse, so they might be more likely to push you to seek care than not,” said Bhattacharyya, director of Women’s College Hospital’s Institute for Health System Solutions and Virtual Care.
Bhattacharyya is interested in how technology can support clinical care, and says artificial intelligence could be a way to democratize access to medical expertise.
He uses tools like OpenEvidence, which compiles information from medical journals and gives answers that are accessible to most health professionals.
The Quebec government says it’s launching a pilot project involving artificial intelligence transcription tools for health-care professionals, and a growing number of clinicians say such tools cut down the time they spend on paperwork.
Still, Bhattacharyya recognizes that it can be more challenging for patients to determine the reliability of medical advice from an AI.
“As a doctor, I can critically appraise that information,” but it isn’t always easy for patients to do the same, he said.
Bhattacharyya also said chatbots can suggest treatment options that are available in some countries but not in Canada, since many of them draw on American medical literature.
Despite her hesitations, Martin acknowledges there are some things an AI can do better than human physicians — like recalling a long list of possible conditions associated with a symptom.
“On a good day, we’re best at identifying the things that are common and the things that are dangerous,” she said.
“I would imagine that if you were to ask the bot, ‘What are all of the possible causes of low platelets?’ or whatever, it would probably include six things on the list that I have forgotten about because I haven’t seen or heard about them since third year medical school.”
Can patients with chronic conditions benefit from AI?
For his part, Bhattacharyya also sees AI as a way to empower people to improve their health literacy.
A chatbot can help patients with chronic conditions looking for general information in simple language, though he cautions against “exploring nonspecific symptoms and their implications.”
“In primary care we see a large number of people with nonspecific symptoms,” he said.
“I have not tested this, but I suspect the chatbots are not great at saying ‘I don’t know what is causing this but let’s just monitor it and see what happens.’ That’s what we say as family doctors much of the time.”
AI Research
As they face conflicting messages about AI, some advice for educators on how to use it responsibly

When it comes to the rapid integration of artificial intelligence into K-12 classrooms, educators are being pulled in two very different directions.
One prevailing media narrative stokes such profound fears about the emerging strengths of artificial intelligence that it could lead one to believe it will soon be “game over” for everything we know about good teaching. At the same time, a sweeping executive order from the White House and tech-forward education policymakers paint AI as “game on” for designing the educational system of the future.
I work closely with educators across the country, and as I’ve discussed AI with many of them this spring and summer, I’ve sensed a classic “approach-avoidance” dilemma — an emotional stalemate in which they’re encouraged to run toward AI’s exciting new capabilities while also made very aware of its risks.
Even as educators are optimistic about AI’s potential, they are cautious and sometimes resistant to it. These conflicting urges to approach and avoid can be paralyzing.
What should responsible educators do? As a learning scientist who has been involved in AI since the 1980s and who conducts nationally funded research on issues related to reading, math and science, I have some ideas.
First, it is essential to keep teaching students core subject matter — and to do that well. Research tells us that students cannot learn critical thinking or deep reasoning in the abstract. They have to reason and critique on the basis of deep understanding of meaningful, important content. Don’t be fooled, for example, by the notion that because AI can do math, we shouldn’t teach math anymore.
We teach students mathematics, reading, science, literature and all the core subjects not only so that they will be well equipped to get a job, but because these are among the greatest, most general and most enduring human accomplishments.
You should use AI when it deepens learning of the instructional core, but you should also ignore AI when it’s a distraction from that core.
Second, don’t limit your view of AI to a focus on either teacher productivity or student answer-getting.
Instead, focus on your school’s “portrait of a graduate” — highlighting skills like collaboration, communication and self-awareness as key attributes that we want to cultivate in students.
Much of what we know in the learning sciences can be brought to life when educators focus on those attributes, and AI holds tremendous potential to enrich those essential skills. Imagine using AI not to deliver ready-made answers, but to help students ask better, more meaningful questions — ones that are both intellectually rigorous and personally relevant.
AI can also support student teams by deepening their collaborative efforts — encouraging the active, social dimensions of learning. And rather than replacing human insight, AI can offer targeted feedback that fuels deeper problem-solving and reflection.
When used thoughtfully, AI becomes a catalyst — not a crutch — for developing the kinds of skills that matter most in today’s world.
In short, keep your focus on great teaching and learning. Ask yourself: How can AI help my students think more deeply, work together more effectively and stay more engaged in their learning?
Third, seek out AI tools and applications that are not just incremental improvements, but let you create teaching and learning opportunities that were impossible to deliver before. And at the same time, look for education technologies that are committed to managing risks around student privacy, inappropriate or wrong content and data security.
Such opportunities for a “responsible breakthrough” will be a bit harder to find in the chaotic marketplace of AI in education, but they are there and worth pursuing. Here’s a hint: They don’t look like popular chatbots, and they may arise not from the largest commercial vendors but from research projects and small startups.
For instance, some educators are exploring screen-free AI tools designed to support early readers in real-time as they work through physical books of their choice. One such tool uses a hand-held pointer with a camera, a tiny computer and an audio speaker — not to provide answers, but to guide students as they sound out words, build comprehension and engage more deeply with the text.
I am reminded: Strong content remains central to learning, and AI, when thoughtfully applied, can enhance — not replace — the interactions between young readers and meaningful texts without introducing new safety concerns.
Thus, thoughtful educators should continue to prioritize core proficiencies like reading, math, science and writing, and use AI only when it helps develop the skills and abilities prioritized in their desired portrait of a graduate. By adopting ed-tech tools that are focused on novel learning experiences and committed to student safety, educators will lead us to a responsible future for AI in education.
Jeremy Roschelle is the executive director of Digital Promise, a global nonprofit working to expand opportunity for every learner.
Contact the opinion editor at opinion@hechingerreport.org.
This story about AI in the classroom was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education.