AI-enabled obstetric point-of-care ultrasound as an emerging technology in low- and middle-income countries: provider and health system perspectives
In total, 70 individuals were invited to participate. The response rate was 52.9% (18/34) among midwives and 63.9% (23/36) among other individuals. Forty individuals completed the REDCap survey, while 41 participated in IDIs or FGDs; that is, one participant did not complete the survey.
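For readers tracing the recruitment arithmetic, the following minimal sketch reproduces the counts reported above (variable names are illustrative, not from the study instruments):

```python
# Minimal sketch reproducing the recruitment arithmetic reported above.
# All counts come from the text; variable names are illustrative.
invited_midwives, responded_midwives = 34, 18
invited_other, responded_other = 36, 23

rate_midwives = responded_midwives / invited_midwives   # 18/34
rate_other = responded_other / invited_other            # 23/36

total_invited = invited_midwives + invited_other             # 70
idi_fgd_participants = responded_midwives + responded_other  # 41
survey_completers = 40                                       # REDCap survey

print(f"Midwife response rate: {rate_midwives:.1%}")  # 52.9%
print(f"Other response rate: {rate_other:.1%}")       # 63.9%
print(f"Did not complete survey: {idi_fgd_participants - survey_completers}")  # 1
```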
Among the survey respondents, most were healthcare providers, with 42.5% midwives/nurses and 20% physicians (Table 2). Over 70% of participants had more than 10 years of experience in the obstetrics field. While approximately two-thirds were LMIC residents (specifically from Kenya, Uganda, Nigeria, Burkina Faso and Zambia), those who did not currently live in an LMIC had worked extensively in these contexts. Three-quarters were familiar with POCUS (i.e., had first-hand experience or were conceptually familiar), and 55% had prior knowledge of efforts in the AI-enabled ultrasound space before participating in this study.
Table 2 Demographics of survey respondents (n = 40)
The following sections present quantitative and qualitative results related to perceptions of standard and AI-enabled POCUS (Domains 2 and 3, respectively – Table 1) to compare and contrast opinions of the existing and emerging technology.
Priority AI capabilities
Respondents were asked to rate, via a Likert scale, the importance of select maternal and fetal assessments that would be most helpful for AI-enabled POCUS to automatically screen for in a basic emergency obstetric and neonatal care (BEmONC) facility (Table 3). Fetal heart rate/viability, multiple gestation and placental location were the three most highly ranked conditions; the latter two were included in the described prototype, which was introduced after completion of the survey.
Table 3 Maternal and fetal conditions ranked by importance for AI-enabled POCUS to automatically screen for (% of respondents by Likert rating)
Qualitatively, the prototype capabilities presented to respondents were seen as acceptable:
“That’s some of the biggest risk which we identified: lie or the presentation is one of the risks, and number of fetuses is another risk, and the placenta location is another big risk, which we identified among several others.” – Researcher 2, IDI, LMIC
While only 70% of respondents felt gestational age (another prototype feature and part of the WHO recommendation) was very important, an additional 22.5% of respondents felt it was somewhat important. Gestational age was flagged during discussions as an important component:
“I think this artificial intelligence will help us more because, first of all, it has the gestational age. Most of our mothers, a big percentage, are not very sure of their dates, so it’s going to really help us.” – Midwife 10, FGD, LMIC
However, getting women into ANC earlier was a key consideration:
“Our colleagues think this is really critical to have gestational age […] but we have to get women in earlier in order to get the most accurate dating.” – Funder/Nurse-midwife 1, FGD, HIC
Several respondents flagged detection of congenital anomalies as both important and concerning, particularly in LMIC contexts. While ethical concerns were raised about screening for conditions that healthcare providers and systems may not be prepared to manage, others argued for the importance of screening for congenital anomalies to allow clients the option of termination if desired.
“I think that [detecting congenital anomalies with AI] is also a slippery slope […] I think it would be really hard if providers had access to that information – that’s a whole different type of counseling.” – Policy-maker/Physician 1, IDI, HIC
“[With POCUS], the mother gets benefited by knowing that she’s going to deliver a live baby… At the same time, also knowing any deformity or congenital abnormality that the baby is having, so that she decides before the time for delivery.” – Midwife 2, FGD, LMIC
This sentiment was countered by several respondents, who emphasized that both standard and AI-enabled POCUS, particularly in the hands of midwives, should be introduced as a screening tool rather than a diagnostic tool.
“My thinking as an obstetrician, […] if there’s any doubt, then that will be an indication to send this patient [to a] radiologist to rule out the abnormality that you are worried about. So, I think AI will be vital for screening, but we shall need a detailed scan just for confirmatory tests to make sure we reduce on the issue of over treatment.” – OBGYN/Researcher 2, FGD, LMIC
Potential impact on ANC quality, services and clinical outcomes
ANC utilization and experience of care
Survey respondents were asked about their perceptions regarding how standard POCUS and AI-enabled POCUS might impact ANC utilization and experience of care. There was overall strong agreement that both technologies could improve ANC attendance (75% standard and 65% AI), though agreement was lower for AI-enabled POCUS. Fewer respondents felt that AI-enabled POCUS would increase trust between providers and women compared to standard POCUS (60% and 82.5%, respectively). Approximately half of respondents agreed that neither technology would lead to clients deciding to forgo other ANC necessities in order to pay for a scan. Across these questions, there was slightly more uncertainty with AI-enabled POCUS (Fig. 1).
Fig. 1
Perceptions of standard and AI-enabled POCUS on ANC utilization and experience (n = 40)
Findings from IDIs/FGDs provided some nuance to these survey data. For example, several respondents believed that AI-enabled POCUS could be seen as less engaging to clients, particularly if a clear image of the fetus was not readily available. This could impact ANC utilization, demand and provider–client trust as compared to standard POCUS:
“Ultimately, a mother wants to see as much as possible of what their baby looks like, and so restricting the images that the nurse gets out of these blind sweeps, I’d say to some extent, [is] a bit limiting to what the mother might want to be seeing and what the nurse might want to be sharing with the mother.” – Implementer 2, IDI, LMIC
“And they will be asking us questions like, ‘So that is the only thing that you’ve done and you’re telling me everything is okay? I’ve not seen you looking for the head.’ […] But for this new thing, we are not going to be showing them and it’s just going to be displaying itself there. I think they’ll be asking us a lot of questions…” – Midwife 3, FGD, LMIC
Respondents noted that the time spent with a client during a standard POCUS scan was critical for human connection and that AI-enabled POCUS may compromise this.
“For me, I think it’ll [AI-enabled POCUS] reduce physical contact with the patient – doctor-patient relationship may be minimal. […] if it is minimal, sometimes you are not able to explain well, the condition of the patient to the client. And they may feel they are not getting enough information also.” – Midwife 8, FGD, LMIC
Quality and content of ANC services
The majority (85%) of respondents agreed that standard POCUS can strengthen health care providers’ ability and confidence in making appropriate clinical decisions; however, only 70% felt AI-enabled POCUS would build this confidence (Fig. 2). There was thus more uncertainty about AI-enabled POCUS reinforcing confidence compared to standard POCUS, a pattern echoed in the qualitative findings.
Fig. 2
Perceptions of standard and AI-enabled POCUS on client services (n = 40)
Some providers expressed enthusiasm for AI, believing it could enhance accuracy and usability:
“Those simple sweeps [that are] reproducible and reduce chance of making error is a very good way to go. If those sweeps done correctly can give you, at least those very basic outcomes that you want to know: the presentation, the placental position, and so forth, it’ll be useful at least to make that quick decision at a point of care.” – OBGYN/Researcher 3, FGD, LMIC
“Yeah, that one will be just a very good help because it is not going to disturb us – like previously, we used to look for the presentation and you move with the probe all over the mother’s abdomen looking for one thing, but now this time around we are just going to use it three times and the things will just display. It’s just going to be very easy for us.” – Midwife 3, FGD, LMIC
On the other hand, some respondents expressed concern that clinical acumen may diminish with increased reliance on AI and negatively affect clinical decision-making and confidence.
“Good midwifery skills are all about hands on bellies, and all about interaction with the woman and all about having a dialogue. […] I just worry a little bit that if you then put this great machine in the middle of all that, then the emphasis is on the machine. And that if the machine’s broken, how will the health worker have maintained their skills and be able to do just hands on belly to work out where the baby is and potentially what size it is. […] But there are sets of skills that midwives and OBs develop over time which are critically important to maintain.” – Funder, IDI, HIC
There were more mixed perceptions around whether the technology would decrease time available for other services and how it would impact provider workload, with 30–40% either strongly agreeing or strongly disagreeing with these statements (Fig. 2). A few respondents raised the concern that introduction of POCUS could displace time spent completing other critical ANC components:
“What are you not gonna do? I feel like in this whole prioritization conversation, nobody says, ‘well, if you’re prioritizing something new, you have to deprioritize something’, and we never, ever acknowledge that. So, it means those things get dropped randomly, which means that if the midwife is now prioritizing her time on AI ultrasounds, that she can do, what is she not doing? And if we don’t provide guidance on that, she’s gonna drop whatever the hardest thing to do is, and that might be the one thing that would save more lives […].” – Funder/Nurse-midwife 2, FGD, HIC
While some said poorly remunerated nurses and heavy workloads could demotivate providers from scanning, others mentioned that AI-enabled ultrasound could mitigate this by decreasing the amount of scanning time needed per client. Several midwife respondents acknowledged the workload, but this sentiment was outweighed by their excitement for the new technology and its potential to allow more equitable access to ultrasound.
“Of course the workload will [be] there, but I will [prefer it] this way – I am very okay to do it.” – Midwife 3, FGD, LMIC
“You go to the next client, you may conduct the ultrasound to as many clients as possible. Unlike the [standard] POCUS, we sometimes do five to eight [scans]. But for this one [AI-enabled POCUS], you may conduct the ultrasound to almost every client because the results are very fast. I think it’s a game changer also to me.” – Midwife 9, FGD, LMIC
Referrals and clinical outcomes
The majority of survey respondents felt both technologies could improve appropriate referrals (85% and 72.5% for standard and AI, respectively) and neonatal outcomes (85% standard and 72.5% AI) – and to a lesser degree, maternal outcomes (Fig. 3).
Fig. 3
Perceptions of standard and AI-enabled POCUS on referrals and clinical outcomes (n = 40)
Qualitatively, respondents emphasized that without strengthened and functional referral systems, it is unlikely that either standard or AI-enabled POCUS will have any impact on clinical outcomes.
“We have to take like a giant step back. […] There needs to be forward thinking of the referral pathway. […] Because, what I’ve seen is that introduction of point of care ultrasound by itself, even in my experience, is never the thing that works by itself. […] I’m one of the biggest proponents of POCUS, but even more so, I’m one of the biggest proponents of a reminder that POCUS is just a tool in the larger toolbox of clinical tools that we have, including your physical exam and your clinical gestalt of a patient’s disease process.” – Researcher/Physician 3, IDI, HIC
“When you have added the ultrasound, what do the women do? Where do they go? So I think that we need to really consider how this is embedded within the system and the modifications that need to happen within the clinical pathway for it to be feasibly and sustainably integrated.” – Funder/Policy-maker, IDI, LMIC
“The purpose of the ultrasound is to identify potential problems and then deal with them appropriately. That’s all part of the intervention. So going back to referrals – I think that has to be front and center. You identify and then you have to know what to do with what you have identified. That’s not a small part of it either.” – Radiologist/Physician, IDI, HIC
Some respondents expressed concern that referrals would increase overall, potentially straining systems and depleting family resources.
“If someone gets an ultrasound and they misinterpret something and it leads them to refer a patient that they otherwise wouldn’t have referred […], the process of referring someone is a pretty resource intensive one. When you’re coming from a little clinic, and now all of a sudden you have to put someone in an ambulance or you have to tell them you have to mobilize your own resources to go to a hospital that’s like, you know, 70 kilometers away to be seen for this thing; they have to go mobilize their own money, a lot of people don’t have insurance, so you have to get your own transportation, you have to go pay for the extra test, pay for the extra consultation, worry a lot [about] what could be happening to me, only for them to go and find, ‘Oh no, you’re actually okay’. I think that’s a potential source of harm.” – Researcher/Physician 2, IDI, LMIC
In addition to increasing inappropriate referrals, another potential unintended consequence related to health outcomes was inappropriate clinical management. Several respondents voiced concerns about liability risks involved with missed diagnoses or mismanagement due to algorithmic limitations or inaccuracy:
“If for example, it makes a wrong impression, and I go in and intervene and I’m in error, do I blame the clinician or you blame the machine? [With] the other one [standard POCUS], the person was the one interpreting the picture and saying, ‘I think from my training it is this’, [but] now the [AI] machine has given you that this baby is distressed [then when you go] and deliver, you deliver a preterm baby. That could give you challenges: ‘the machine told me that it was distressed, but it was not actually’.” – OBGYN/Researcher 4, FGD, LMIC
Considerations for implementation and health system integration
Target user and potential for task sharing
There was general agreement among respondents that midwives should be the primary target user of AI-enabled POCUS. In the survey (respondents could select up to 3 provider cadres), midwives and nurses were selected most frequently (97.2%), followed by other doctors (55.6%) and OBGYNs (52.8%).
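These cadre percentages behave like a multi-select item tallied over however many respondents answered it (which may differ from the full survey n = 40; this is our assumption, not stated in the text). The short sketch below, using entirely hypothetical responses, illustrates how such shares are computed:

```python
from collections import Counter

# Hypothetical multi-select responses: each respondent picks up to 3 cadres.
responses = [
    ["midwives/nurses", "OBGYNs"],
    ["midwives/nurses", "other doctors", "OBGYNs"],
    ["midwives/nurses"],
]

# Each cadre's share is computed over respondents, not over total selections,
# which is why multi-select percentages can sum to more than 100%.
counts = Counter(cadre for selections in responses for cadre in selections)
n_respondents = len(responses)
for cadre, k in counts.most_common():
    print(f"{cadre}: {k / n_respondents:.1%} ({k}/{n_respondents})")
```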
While the central role of midwives was echoed qualitatively, several others also emphasized the need to ensure doctors/OBGYNs were equally trained.
“I think that’s a great idea because, both in private and in public, midwives are the ones who spend a substantial amount of time with the patients. […] So, strengthening their capacity and empowering them to do point of care ultrasound, I think that would be very welcome to improve outcomes.” – OBGYN/Researcher 3, FGD, LMIC
“If for example, the midwife is able to pick it [AI-enabled POCUS], [then], for me to embrace it, to understand what she has referred to me a patient with certain condition, then I need to also get used to it, such that I’m able to be in the same boat with her. […Otherwise, I’m] subjecting the patient to another scan.” – OBGYN/Researcher 4, FGD, LMIC
However, some respondents specifically flagged AI integration as more likely to result in professional displacement.
“So, if everybody and anybody can scan [because of access to AI], then the potential is that people are going to be displaced. And in LMICs, people are looking for jobs. So we don’t want professional conflicts arising out of this.” – Funder/Policy-maker, IDI, LMIC
Relevant to both standard and AI-enabled POCUS, many respondents agreed that provider protections are critical as task sharing of new technologies emerges:
“I think in terms of training, regulation is not very clear. […] So I think there is a need for a policy review to ensure that if it is going to be task shifting, then those people are allowed, but also protected of litigation, it’s becoming quite common. […] So I think definitely regulation has to be important; I know sometimes technology comes and these things are rushed because there is a need, but I think looking into that aspect is also important to make sure that there’s a policy that is generally agreeable and it define clearly: what one can do and what one can’t do. In fact, with the increase in technology, this is becoming quite an important space.” – Researcher/Physician 4, IDI, LMIC
Introduction outside of the health facility and in the community was met with more heterogeneous responses. Approximately one-quarter (27.8%) of survey respondents felt that community health workers (CHWs) should be trained to use AI-enabled POCUS. Many recognized that CHWs are often the first touchpoint with women, while others voiced concerns related to CHWs’ limited obstetric clinical training and competency to communicate obstetric findings.
“We are ignoring the fact that women come into contact with the health system in the community. So, we keep trying to incentivize women […] to come to the facility early. And it really doesn’t work. […] On the other hand, these community health promoters are seeing those women […]. Let’s be very open minded about that first contact, and don’t force women to change their behavior. But just leverage what’s already there.” – Funder/Nurse-midwife 2, FGD, HIC
Others believed community-based scanning might discourage women from seeking care.
“The community health promoters are service providers, but they are not well trained in [healthcare]. Their work is to bring us clients as early as possible, like those who are pregnant they should start clinic early. […] But it [AI-enabled POCUS] is a technical thing which can be used by a provider who can explain more to the client what is happening to the baby.” – Midwife 9, FGD, LMIC
AI-enabled POCUS training
In terms of training, many believed that AI-enabled POCUS could reduce training requirements compared to standard POCUS, given the ease of blind sweeps. Approximately two-thirds of respondents (67.5%) believed that standard POCUS required intensive training, compared to 42.5% for AI-enabled POCUS (Fig. 4).
“I think that training may not be very complicated, or maybe may not take a lot of time, rather, because the way I see it, it may be easier than [standard] POCUS.” – Midwife 9, FGD, LMIC
Fig. 4
Perceptions of health system integration of standard and AI-enabled POCUS (n = 40)
However, several respondents emphasized that concomitant training in POCUS and image interpretation remained critical.
“But, what knowledge does this type of technology add to me as a provider, because it is just like giving me everything; I’m not even thinking. That may be my worry – a situation where we don’t have so much providers who have been trained on ultrasound. Maybe what I would suggest even at the deployment: training on the analog then to the AI, so that at least somebody knows how to interpret images so that even when now I go to this magic gadget that will interpret everything, at least I have something that I’ve learned from that session.” – OBGYN/Researcher 1, FGD, LMIC
“And maybe [with] the AI, it may be easier for them [midwives] to do it maybe in a faster time to be able to get the findings. Except that I believe that training is very important because having the background of POCUS would also be very helpful even as they use the AI […] I think it’s also good to understand, how will the midwife confirm that this machine has told me the right thing?” – Nurse-educator/Researcher, FGD, LMIC
At least one participant saw AI-enabled POCUS as an opportunity to enhance existing training programs and focus on strengthening clinical decision-making:
“Because of the fact that the training itself of the use of ultrasound might be a little bit more facilitated by AI, might leave a little bit more time to focus on the ‘what do you do next’ part, which often, to be very honest, isn’t always focused on in point of care obstetric ultrasound classes because people are very, very focused on learning the ultrasound and not learning the clinical algorithm, which is ironic because you need to know the second part of that when you’re learning the first part.” – Researcher/Physician 3, IDI, HIC
Resources, data systems and facility infrastructure
Similar to standard POCUS, issues around device maintenance, availability of consumables, machine cost, and misuse and overuse (e.g., fetal sex determination, over-charging clients for unnecessary scans, undue anxiety among clients) were mentioned in IDIs/FGDs. In the survey data, there was little to no difference in perceptions of how either technology might increase ultrasound misuse or overuse, with approximately 40% strongly disagreeing with this statement (Fig. 4); however, respondents offered some ideas for how AI might mitigate this, such as disabling identification of fetal sex through “digital diapers.”
“In terms of the guarding against the misuse for fetal sex detection, and all the consequences of that, where such a limited visualization is desirable, but, on the other hand, the blind sweep paradigm is very restricted in terms of what it can detect.” – Implementer 2, IDI, LMIC
In the survey data, there was a slight difference in perceptions of how the technology might affect health system documentation, with 70% of respondents believing that AI-enabled POCUS would streamline documentation compared to 62.5% for standard POCUS (Fig. 4). However, many respondents raised larger issues around data privacy and storage, including conflicting interests between industry and governments:
“[Data storage] is going to become really, really important in the implementation because countries will be like, ‘I don’t want my data to go to these U.S. companies’ cloud’. And the more you know about it, even with regulations, what you think is anonymous, it can be so easily de-anonymous.” – Policy-maker/Physician 2, IDI, HIC
“How well can we localize some of this data […while still feeding] into the larger technology companies? Because obviously, I know my Minister of Health won’t have the capacity to develop some of these technologies – it needs either big corporations or big funders. But now it goes to the ethics part of AI in terms of how well the data is kept, how well all these policies -all these laws- that have been put in place, are being complied to.” – Implementer 1, IDI, LMIC
In terms of overall health system integration, many respondents emphasized that ultrasound was just one part of the continuum of care, underscoring the need to strengthen systems more broadly.
“…it can’t be called a game changer, per se, without other aspects like, strengthening the system, improving referral, and improving skills, improving availability of commodities, improving emergency obstetric care, but surprisingly also improving the welfare of women.” – Researcher/Physician 4, IDI, LMIC
“A lot of the work we’ve done […], is to better understand why some of the simpler interventions that are in our guidelines are still not being implemented. For example, why don’t we have a blood pressure cuff at every antenatal care clinic that functions? Why don’t we have a weighing scale? Why don’t we have calcium supplements? So, I think I’m a little bit concerned that if we jumped right to these technological solutions [AI-enabled POCUS], which I know have the potential for huge change, we also might lose on some of the really important interventions that we know are important. It might not be as attractive for providers or for women to pay for.” – Policy-maker/Physician 1, IDI, HIC
Evidence needed and research priorities
Respondents were asked to prioritize up to five research outcomes for AI-enabled POCUS from a list of 12 options (Fig. 5). The top five outcomes were accuracy of diagnoses (77.5%), ANC quality (65%), early ANC attendance (50%), impact on referral (37.5%), and women’s experience of care (37.5%).
Fig. 5
Priority research topics and outcomes related to AI-enabled POCUS (n = 40)
The importance of accuracy was conveyed by several respondents who cited potential inaccuracies stemming from a lack of diversity in training datasets. To mitigate bias, they emphasized the necessity of using representative and diverse datasets for algorithm training.
“My only question would be in terms of training these algorithms […] whether they’ll be consistent across the different populations […] so that at least whenever you get an outcome, it is an outcome that can actually help you make a decision, not an outcome that gets you into an error.”– OBGYN/Researcher 3, FGD, LMIC
“In India it’s generally known that a baby is small for gestational age, but may not be necessarily a preterm, there are more small babies. And then in Africa, there are bigger babies. So this algorithm has been tested on who? Does it work? Where does it work?” – Researcher/Physician 4, IDI, LMIC
Priorities related to ANC reflected the need to assess impact on ANC quality, early ANC attendance, and women’s experience of care during the pregnancy journey.
“I would want to, to answer the question: does it improve referrals that we’re getting over the analog? […] So on the part of the provider, do they think it’s better because it’s taking less time? […] We have challenges with human resources in our facilities. […] Does it improve the time they are taking to provide the service, but at the same time, not compromising quality of the work?” – Researcher 3, IDI, LMIC
“And how do we really assess what is good counseling and what does that that mean? So, I mean, if there’s an opportunity with all this excitement about AI ultrasound to really do some research around how do we ensure effective counseling around communicating what it means, and then how does the woman then interpret that and use that also for her own discussions with her, her family. […] I think there’s a lot of great questions that could be, that could be looked at more on some of the operational issues.” – Policy-maker/Physician 1, IDI, HIC
Measures related to maternal mortality and morbidity, stillbirth, and neonatal mortality and morbidity were less prioritized (selected by 15–32% of the survey sample) than more proximal outcomes.
“I think we don’t need studies that look at, ‘Does it reduce maternal mortality? Does it reduce newborn mortality? […] We should look at earlier outcomes: ‘Is it diagnosing things correctly? Is it leading to correct referrals?’ I think that’s the kind of studies we should be designing. And of course, ‘What does it mean to implement it within a health system?’” – Policy-maker/Physician 2, IDI, HIC
“My point would be to think through what are these main causes of adverse maternal outcomes and birth outcomes. In our setting, we know it’s PPH, preeclampsia, sepsis, and so forth. Then see how we can integrate the tool towards mitigation of those big five. So, if we have that integrated within the tools, then we might see some level of reduction in the adverse outcomes. If not, then we’ll just still have the status quo.” – OBGYN/Researcher 3, FGD, LMIC