Education
What does AI in the classroom look like? Ask Margaret Sass

Since generative AI tools like ChatGPT and Google Gemini emerged on the scene in 2022 and 2023, School for the Digital Future Lecturer Margaret Sass has been quick to adopt them as powerful learning aids in the classroom.
“I think I needed to understand what the panic was and then I realized how useful it is,” Sass said, recalling her first experiences with generative AI.
Mixing AI and education is a controversial topic, but Sass is not encouraging students to shirk their assignments or let computers think for them. Instead, she has found ways to use AI to support career-forward learning objectives.
Sass’s AI avatars, deployed in her TEAM303 Teamwork in the Digital Age class, are a prime example of her innovative approach. “I created 12 AI avatars. Basically employees at a fictitious company,” she said. “The students pick them to be on their team and they work on assignments together.”
The 12 avatars embody different workplace personalities. You have the charismatic communicator, who inspires others but can gloss over details; the visionary designer who can think creatively about problems, but hates structure and deadlines; and more, covering the full range of people students might work with in their careers.
Sass prompted each avatar personally, a more demanding task than it seems. “The problem is ChatGPT and Gemini are typically nice, but we know coworkers sometimes create conflict,” she said. “‘Ted’ will not agree with anything you say and he refuses to do anything, but for ChatGPT to do that I have to design the script.”
Another class uses student AI creations to simulate interactions with fictional community partners. Those have been a hit.
“The students are really happy about communicating with a fictitious community partner before they go out into the real world,” Sass said. “A lot of them have not gone knocking door to door. That doesn’t happen anymore like it did in my era, so going out there and talking to someone feels uncomfortable.”
It’s not just college students who benefit from Sass’s AI expertise. She’s also created a suite of courses teaching AI skills to senior citizens.
“Senior citizens actually use AI more than I thought,” she said. “I would say about 50% [of the senior students 60+] in the classes I teach face-to-face use it.”
While Sass’s Boise State students learn professional skills with AI, her senior classes focus on the positive impact AI has on their lives and some safety tips. One of her courses covers AI scams, which use generative AI tools to make existing phone and email scams even more dangerous. Awareness gained from the course helps seniors avoid AI-related threats, so they can get the full benefit of these tools.
Sass is a champion for AI in education, but she’s well aware of the pitfalls surrounding its use. The right approach to the future of AI, according to Sass, is to balance ethical concerns with the inevitable reality. “AI is not going anywhere. [Students] will go to their jobs and they’ll be required to use it, so I want to teach them the ethical aspects of it.”
Education
Medical education needs rigorous trials to validate AI’s role

The use of artificial intelligence in medical education is raising urgent questions about quality, ethics, and oversight. A new study explores how AI is being deployed in training programs and the risks of implementing poorly validated systems.
The research, titled “Artificial Intelligence in Medical Education: A Narrative Review on Implementation, Evaluation, and Methodological Challenges” and published in AI in 2025, presents a sweeping review of the ways AI is already embedded in undergraduate and postgraduate medical education. It highlights both the opportunities offered by AI-driven tutoring, simulations, diagnostics, and assessments, and the methodological and ethical shortcomings that could undermine its long-term effectiveness.
How AI is being implemented in medical education
The study found four major areas where AI is reshaping medical training: tutoring and content generation, simulation and practice, diagnostic skill-building, and competency assessment.
AI-driven tutoring is already gaining ground through large language models such as ChatGPT, which generate quizzes, exam preparation tools, and academic writing support. These tools have been shown to improve student engagement and test performance. However, the research underscores that such systems require constant human supervision to prevent factual errors and discourage students from outsourcing critical thinking.
Simulation and practice environments are another area of rapid development. Machine learning and virtual reality platforms are being deployed to train students in surgery, anesthesia, and emergency medicine. These systems deliver real-time performance feedback and can differentiate between novice and expert performance. Yet challenges persist, including scalability issues, lack of interpretability, and concerns that students may lose self-confidence if they rely too heavily on automated guidance.
Diagnostic training has also been revolutionized by AI. In specialties such as radiology, pathology, dermatology, and ultrasound, AI systems often outperform students in visual recognition tasks. While this demonstrates significant potential, the study warns that biased datasets and privacy concerns linked to biometric data collection could reinforce inequities. Over-reliance on automated diagnosis also risks weakening clinical judgment.
Competency assessment is the fourth area of innovation. Deep learning and computer vision tools now enable objective and continuous evaluation of motor, cognitive, and linguistic skills. They can identify expertise levels, track errors, and deliver adaptive feedback. Still, most of these tools suffer from limited validation, lack of generalizability across contexts, and weak clinical integration.
What risks and challenges are emerging
Enthusiasm for AI must be tempered by a recognition of its limitations, the study asserts. Methodologically, fewer than one-third of published studies rely on randomized controlled trials. Many evaluations are exploratory, small-scale, or short-term, limiting the evidence base for AI’s real impact on education.
There are also risks of passive learning. When students turn to AI systems for ready-made solutions, they may bypass the critical reasoning that medical training is designed to foster. This dynamic raises concerns about the erosion of clinical decision-making skills and the creation of over-dependent learners.
Ethical challenges are equally pressing. Training data for AI systems is often incomplete, unrepresentative, or biased, leading to disparities in how well these tools perform across different populations. Compliance with privacy frameworks such as GDPR remains inconsistent, especially when biometric or sensitive patient data is used in educational platforms. Unequal access to AI resources also risks widening the gap between well-resourced and low-resource institutions, exacerbating inequalities in global medical training.
The study also highlights gaps in faculty preparedness. Many educators lack sufficient AI literacy, leaving them unable to properly supervise or critically evaluate AI-assisted teaching. This threatens to create an uneven landscape in which some institutions adopt AI thoughtfully while others deploy it without adequate safeguards.
What must be done to ensure responsible adoption
The study provides a clear roadmap for addressing these challenges. At its core is the principle of human-in-the-loop supervision. AI should complement but never replace instructors, ensuring that students continue to develop critical reasoning alongside digital support.
The authors call for more rigorous research designs. Longitudinal, multicenter studies and randomized controlled trials are needed to generate evidence that is both reliable and generalizable. Without such studies, AI’s promise in medical education remains speculative.
Curriculum reform is another priority. AI literacy, ethics, and critical appraisal must become standard components of medical training so that students can understand not only how to use AI but also how to question and evaluate it. Educators, too, require training to guide responsible use and prevent misuse.
Finally, the study presses for inclusivity. Access to AI-driven tools must be extended to low-resource settings, ensuring that medical education worldwide benefits from innovation rather than reinforcing divides. Regulatory frameworks should also evolve to cover privacy, fairness, and accountability in AI-assisted learning.
Education
Children and teenagers share impact of pandemic in new report

Branwen JeffreysEducation Editor and
Erica Witherington

When lockdown started, college student Sam was living with his mum because his parents were separated.
Then his dad died unexpectedly, leaving him feeling that “something had been stolen” from him.
His experience is one of many being highlighted as the Covid-19 public inquiry prepares to look at the pandemic’s impact on children and young people.
A new report – seen exclusively by the BBC – includes individual accounts of 600 people who were under 18 during the pandemic.
They include happy memories of time spent with family, as well as the impact of disruption to schools being moved online, social isolation and the loss of relatives.
The inquiry will start hearing evidence on these issues from Monday 29 September.
‘I lost a relationship’

Wigan resident Sam was 12 during the first lockdowns and says he found it hard to understand the rules that prevented him spending more time with his dad.
His dad’s death left him struggling with regrets that he had “lost a relationship” because of the isolation before his father’s death.
“I do feel deep down that something has been stolen from me,” he says.
“But I do know that the procedures that we had to go through were right. It was a bad situation.”
Now 17, Sam’s resilience has sadly been tested further after the loss of his mum, who recently died from cancer.
But Sam says that strength he built up during Covid has helped give him “the tools to deal with grief alone”.
‘Trying to catch up on the lost moments’
Kate Eisenstein, who is part of the team leading the inquiry, says the pandemic was a “life-changing set of circumstances” for the children and teenagers who lived through it.
The impact of the pandemic set out in the testimony is hugely varied and includes happier memories from those who flourished in secure homes, enjoying online learning.
Other accounts capture the fears of children in fragile families with no escape from mental health issues or domestic violence.
Some describe the devastating sudden loss of parents or grandparents, followed by online or physically distanced funerals.
Grief for family members lost during the pandemic is an experience shared with some of Sam’s college classmates.
Student Ella told the BBC that losing her granddad during Covid had made her value spending more time with her grandma.
It is one of the ways in which Ella says she is trying to “catch up on the lost moments” she missed during Covid.
Living life online
One almost universal experience for children living through the pandemic was much of life shifting to online platforms.
While this allowed family connections and friendships to be maintained, Ms Eisenstein said some children had darker experiences, spending up to 19 hours a day online, leaving them “really anxious”.
“Some told us how they started comparing their body image to people online, how video games and social media distracted from their learning,” she said.
Most worrying, she said, were the accounts revealing an increased risk of adults seeking to exploit young children online, including sending nude images and inappropriate messages.
The remarkable variety of experiences, both positive and stressful, adds up to what she describes as “an unprecedented insight into children’s inner world”.
Aaliyah, a student at Winstanley College near Wigan, says the social isolation she experienced aged 11 led to her spending hours looking at social media, which began altering her self-confidence.
“With the content I was seeing online, I’d start to look in the mirror and go, ‘I could change that about myself,’ or ‘I don’t really like that about myself,'” she says.
Lasting effects

The inquiry is also expected to hear about the experiences of children still living with long Covid, like Avalyn, now 16, who became ill with the virus in October 2021.
While schools were beginning to return to normal, Avalyn was struggling with a deep and debilitating fatigue, and eventually left school for home education.
It took a year to get a formal diagnosis of long Covid and specialist advice.
“I enjoyed being in school, I enjoyed being social and seeing people, and then suddenly that was taken away from me very quickly,” Avalyn says.
Before long Covid, Avalyn says she was sporty at primary school and enjoyed acrobatics.
Like lots of other children her age, Avalyn has shown determination and resilience to achieve the things that might not have been so difficult in other circumstances, and she has now passed four GCSEs.
“I knew I wanted to do GCSEs to prove to myself especially that I still had the ability to do what everyone else was doing,” she says.
She still goes to a performing arts group, which allows her to join in as much or as little as she can manage.
Avalyn admits “it’s weird to say”, but in some ways she is “grateful” to have had long Covid, because of the things she has achieved during her long spells at home.
She has written, illustrated and self-published two children’s books and spent more time on her art.
While the path ahead is not straightforward, she says she is optimistic of finding a way to study and get into work.
The inquiry plans to hear evidence on the impact of children and young people across four weeks from 29 September to 23 October.
Education
AI In Education Is A Very Divisive Topic, And Administrators Are Trying To Find Ethical Uses And The Right Balance In Schools » TwistedSifter

Shutterstock
Artificial intelligence (AI) programs such as ChatGPT, Grok, and others haven’t been around that long, but their popularity has absolutely skyrocketed, and it is easy to see why. While nobody would argue that they are perfect, they are able to perform some amazing tasks and serve as a great resource to people in a variety of situations.
Many industries are trying to find the right balance when it comes to how and when AI should be used. With most companies, this is just going to be a matter of trial and error. When a company gets it wrong, they will suffer and at worst, go out of business. For the education industry, however, the stakes are much higher. If schools don’t handle AI properly, it could result in millions of young people finding themselves at a disadvantage for the rest of their lives.
On the one hand, schools want to make sure that their students are actually learning (and, perhaps more importantly, learning HOW to learn) rather than just pasting an assignment into ChatGPT and turning in whatever it spits out. On the other hand, there is almost no doubt that AI is here to stay, so failing to teach students how to effectively use it is going to be equally debilitating.

Shutterstock
On top of that, there is the question of how educators and administrators should use AI.
It is clear that AI tools can help complete a lot of tasks much more quickly, which frees up teachers to actually teach. In a world where there aren’t enough teachers to go around, this could be a critical resource. The potential problem, however, is that when AI gets something wrong, it does so very confidently, which can lead to many other issues. For example, many teachers use AI to grade papers, which in theory should be fine. When the AI does it incorrectly, however, the students will notice and feel betrayed. When students lose confidence in their teachers, they often lose the desire to learn.
The New York Times reported on how schools are using AI for various things including grading papers, tutoring students, and much more. There are even some tools available that will monitor the grades, behavior reports, and even social media activity of students and publish reports to the administrators to try to catch at-risk students and get them help.
While this obviously has a lot of potential to do good for the students, it can also feel like a major overstep when it comes to privacy and providing students with the attention from real people that they may need.
On top of that, there are real concerns with schools telling students that they are not allowed to use AI for their assignments, but then the teachers are using AI for their work. This topic is filled with nuance, but it is easy to see how students would think of this as a double standard.

Shutterstock
Unfortunately, AI is advancing much more quickly than schools, and it is the students that are being left behind. Fortunately, some schools are at least making an effort to find the right balance. For example, prohibiting the use of AI for certain activities, but also offering classes that specifically teach students how to use AI for other activities.
Mistakes will undoubtedly be made, but as long as schools are willing to adopt this technology and guide the students on how to ethically use it, they will benefit in the end.
If you enjoyed that story, check out what happened when a guy gave ChatGPT $100 to make as money as possible, and it turned out exactly how you would expect.
-
Business2 weeks ago
The Guardian view on Trump and the Fed: independence is no substitute for accountability | Editorial
-
Tools & Platforms1 month ago
Building Trust in Military AI Starts with Opening the Black Box – War on the Rocks
-
Ethics & Policy2 months ago
SDAIA Supports Saudi Arabia’s Leadership in Shaping Global AI Ethics, Policy, and Research – وكالة الأنباء السعودية
-
Events & Conferences4 months ago
Journey to 1000 models: Scaling Instagram’s recommendation system
-
Jobs & Careers3 months ago
Mumbai-based Perplexity Alternative Has 60k+ Users Without Funding
-
Podcasts & Talks2 months ago
Happy 4th of July! 🎆 Made with Veo 3 in Gemini
-
Education3 months ago
VEX Robotics launches AI-powered classroom robotics system
-
Education2 months ago
Macron says UK and France have duty to tackle illegal migration ‘with humanity, solidarity and firmness’ – UK politics live | Politics
-
Podcasts & Talks2 months ago
OpenAI 🤝 @teamganassi
-
Funding & Business3 months ago
Kayak and Expedia race to build AI travel agents that turn social posts into itineraries