Education
AI can play a critical role in HSA education

- Key Insight: Learn how AI-driven HSA guidance enables personalized benefits education at scale.
- What’s at Stake: Low HSA literacy risks underutilized tax-advantaged savings and increased employer administrative costs.
- Supporting Data: 69% of employees unclear on HSAs; over 50% unaware HSA funds are investable.
- Source: Bullets generated by AI with editorial review
HSAs can be a complicated benefit for employees to understand. Sixty-nine percent of employees were unclear on how HSAs work, according to HealthEquity, and more than half were unaware that HSA funds can be invested.
“It was clear that benefits administrators and HR teams, as well as their employees, had questions about HSA rules,” says Shuki Licht, head of innovation and AI technology at HealthEquity. “When employees don’t understand their HSAs, they underutilize them, missing out on triple tax advantages and long-term savings opportunities. And for employers, this translates to lower engagement with the benefits they’re investing in and higher administrative costs from repetitive employee questions.”
In response, HealthEquity built HSAnswers, an AI-powered question-and-answer tool available for free on the company's site to workers in any industry.
For example, Licht says an employee could ask, "I'm turning 65 and signing up for Medicare. Can I still contribute to my HSA this year?" and receive an immediate response, rather than waiting for HR to research and reply. The AI is also available around the clock, outside normal HR office hours.
Unlike many AI chat tools on the market, HSAnswers doesn’t rely on public information — it’s built exclusively from HealthEquity’s own knowledge base that includes over 500 curated educational resources, ensuring that users only get reliable and contextually relevant answers whenever and wherever they need them.
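HealthEquity hasn't published HSAnswers' architecture, but confining an assistant to a curated knowledge base is commonly done with a retrieval-and-refusal pattern: fetch the most relevant curated documents for a question, and decline to answer when nothing relevant is found. A minimal sketch of that idea, with a toy word-overlap scorer standing in for a real embedding model (the sample documents and function names here are illustrative, not HealthEquity's):

```python
# Toy retrieval over a curated knowledge base with a refusal fallback.
# A production system would use embeddings and an LLM; simple word
# overlap stands in for semantic similarity here.

KNOWLEDGE_BASE = [  # hypothetical curated resources
    "HSA funds can be invested once the account balance meets the minimum.",
    "Enrolling in Medicare ends HSA contribution eligibility.",
    "Qualified medical expenses can be paid from an HSA tax-free.",
]

def retrieve(question, min_overlap=2):
    """Return the best-matching curated snippet, or None if nothing is relevant."""
    q_words = set(question.lower().split())
    best, best_score = None, 0
    for doc in KNOWLEDGE_BASE:
        score = len(q_words & set(doc.lower().split()))
        if score > best_score:
            best, best_score = doc, score
    return best if best_score >= min_overlap else None

def answer(question):
    doc = retrieve(question)
    # Refusing when retrieval fails is what keeps answers grounded in
    # the curated base rather than the model's general training data.
    if doc is None:
        return "I don't have a curated answer for that; please contact support."
    return f"Based on our resources: {doc}"
```

The refusal branch is the key design choice: an assistant that answers only from vetted material trades coverage for reliability, which is the tradeoff the article describes.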
“What’s important is that we’re meeting employees where they are,” Licht says. “When they have a question at 8 PM on a Sunday about whether a medical expense is qualified, they don’t have to wait until Monday to call benefits administration.”
Benefit leaders need AI support too
While communicating with employees and addressing benefit concerns is a core part of a benefit leader's job, fielding the same questions repeatedly consumes time that could be spent elsewhere. Having employees engage with a reliable AI tool doesn't just ease that administrative burden by reducing the number of employees who need one-on-one help; benefit leaders have found uses for the technology themselves.
“We’re even seeing usage from benefits administrators themselves who use it as a reference tool when their employees ask questions,” Licht says. “It ensures consistent, accurate information across the organization.”
As for where this is heading, Licht sees benefits education as one of AI's clearest use cases.
“AI is going to be transformative for benefit education because it solves the core challenge of scale,” Licht says. “You have millions of employees with individual questions and situations, but limited human resources to provide personalized guidance. AI enables us to deliver that personalized education at scale.”
Education
Why AI’s true power in education isn’t about saving time
Key points:
- When we frame AI as a creative partner rather than a productivity tool, something shifts
- 5 ways to infuse AI into your classroom this school year
- In training educators on AI, don’t outsource the foundational work of teaching
- For more on AI’s role in education, visit eSN’s Digital Learning hub
As a former teacher, educator coach, and principal, I’ve witnessed countless edtech promises come and go. The latest refrain echoes through conference halls and staff meetings: “AI saves teachers X hours a week.” While time is undeniably precious in our profession, this narrative sells both educators and students short. After years of working at the intersection of pedagogy and technology, I’ve come to believe that if we only use AI to do the same things faster, we’re not innovating–we’re just optimizing yesterday.
The real opportunity: From efficiency to impact
Great teaching has never been about efficiency. It’s iterative, adaptive, and deeply human. Teachers read the room, adjust pace mid-lesson, and recognize that moment when understanding dawns in a student’s eyes. Yet most AI tools flatten this beautiful complexity into task lists: generate a worksheet, create a quiz, save time, done.
The question we should be asking isn’t, “How do I get through prep faster?” but rather, “What would I try if I didn’t have to start from scratch?”
Consider the pedagogical best practices we know drive student success: timely personalized feedback, inquiry-based learning, differentiation, regular formative assessments, and fostering metacognition. These are time-intensive practices that many educators struggle to implement consistently–not for lack of desire, but for lack of bandwidth.
AI as a pedagogical ally
When AI is truly designed for education–not just wrapped around a large language model–it becomes a pedagogical ally that reduces barriers to best practices. I recently observed a teacher who’d always wanted to create differentiated choice boards for her diverse learners but never had the time to build them. With AI-powered tools that understand learning progressions and can generate standards-aligned content variations, she transformed a single instructional idea into personalized pathways for 30 students in minutes, then spent her saved time having one-on-one conferences with struggling readers.
This is the multiplier effect. AI didn’t replace her professional judgment; it amplified her impact by removing the mechanical barriers to her pedagogical vision.
Creativity unleashed, not automated
The educators I work with already have innovative ideas, but often lack the time and resources to bring them to life. When we frame AI as a creative partner rather than a productivity tool, something shifts. Teachers begin asking: What if I could finally try project-based learning without spending weekends creating materials? What if I could provide immediate, specific feedback to every student, not just the few I can reach during class?
We’ve seen educators use AI to experiment with flipped classrooms, design escape room reviews, and create interactive scenarios that would have taken days to develop manually. The AI handles the heavy lifting of content generation, alignment, and interactivity, while teachers focus on what only they can do: inspire, connect, and guide.
Educators are the true catalysts
As we evaluate AI tools for our schools, we must look beyond time saved to amplified impact. Does the tool respect teaching’s complexity? Does it support iterative, adaptive instruction? Most importantly, does it free educators to do what they do best?
The catalysts for educational transformation have always been educators themselves. AI’s purpose isn’t to automate teaching, but to clear space for the creativity, experimentation, and human connection that define great pedagogy. When we embrace this vision, we move from doing the same things faster to doing transformative things we never thought possible.
Education
AI Chatbot Caught Red-Handed in Canadian Education Report Scandal

An embarrassing discovery has rocked Newfoundland and Labrador’s education establishment. The province’s flagship education reform document—which ironically advocates for “ethical” AI use in schools—appears to have been partially written by the very technology it seeks to regulate.

CBC News broke the story last Friday, revealing that “A Vision for the Future: Transforming and Modernizing Education” contains at least 15 fabricated academic citations that experts believe came from AI language models.
The 418-page roadmap took 18 months to complete and carries enormous weight for the province’s educational future. Released August 28, the document maps out a decade-long transformation plan for public schools and universities across Newfoundland and Labrador.
Co-chairs Anne Burke and Karen Goodnough, both Memorial University education professors, presented the report alongside Education Minister Bernard Davis. None anticipated the brewing scandal.
The Smoking Gun Citation
One fake reference particularly caught researchers’ attention. The report cites a 2008 National Film Board movie titled “Schoolyard Games”—a film that never existed, according to board officials.
The citation didn’t materialize from thin air. Aaron Tucker, a Memorial assistant professor specializing in Canadian AI history, traced its origins to a University of Victoria style guide. That academic document uses fictional examples to teach proper citation formatting.
The style guide explicitly warns readers on page one: “Many citations in this guide are fictitious.” These examples exist solely for educational demonstration, not as actual sources.
Yet someone—or something—lifted the fake citation wholesale and planted it in the education report as legitimate research.
Tucker spent considerable time hunting for multiple sources referenced in the document. His searches through Memorial University’s library, academic databases, and Google came up empty.
“The fabrication of sources at least begs the question: did this come from generative AI?” Tucker told CBC. “Whether that’s AI, I don’t know, but fabricating sources is a telltale sign of artificial intelligence.”
AI’s Citation Problem
Language models have struggled with source fabrication since their inception. These systems generate statistically plausible text based on training patterns, prioritizing believability over accuracy.
ChatGPT, Gemini, Claude, and similar models excel at producing convincing fiction. When their pattern recognition fails to align with reality, the result sounds authoritative while being completely wrong.
The fabricated citations slip past human reviewers because they appear professionally formatted and contextually appropriate. Academic and legal fields face particular vulnerability to this deception.
Even AI models equipped with web search capabilities can fabricate citations, select inappropriate sources, or misrepresent their content.
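Tucker's manual hunt through library catalogs and databases is, in effect, a reference audit, and that step can be partially automated: extract each cited title and look it up against a trusted catalog, flagging anything with no match for human review. A minimal sketch using a local set of known titles in place of a real catalog query (the catalog contents and function names are illustrative, not from the report):

```python
# Flag citations whose titles don't appear in a trusted catalog.
# A real audit would query library APIs or databases; a local set of
# known titles stands in for that lookup here.

KNOWN_TITLES = {  # illustrative trusted catalog
    "visible learning",
    "pedagogy of the oppressed",
}

def normalize(title):
    """Lowercase and strip punctuation so formatting differences don't hide matches."""
    return "".join(ch for ch in title.lower() if ch.isalnum() or ch.isspace()).strip()

def audit(citations):
    """Return cited titles that could not be verified against the catalog."""
    return [c for c in citations if normalize(c) not in KNOWN_TITLES]

# A title absent from every trusted source gets flagged for human review.
suspect = audit(["Visible Learning", "Schoolyard Games"])
```

An automated pass like this cannot prove a source is fabricated, only that it needs checking, which is exactly the review step the report apparently skipped.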
The Crushing Irony
The fake citations scandal becomes particularly awkward given the report’s own recommendations. Among its 110 policy suggestions, the document specifically urges the provincial government to “provide learners and educators with essential AI knowledge, including ethics, data privacy, and responsible technology use.”
Sarah Martin, a Memorial political science professor, invested days reviewing the document and uncovered multiple questionable citations.
“Around the references I cannot find, I can’t imagine another explanation,” she told CBC. “You’re like, ‘This has to be right, this can’t not be.’ This is a citation in a very important document for educational policy.”
Josh Lepawsky, former Memorial University Faculty Association president, resigned from the report’s advisory board in January, citing concerns about the process. His assessment proves prescient.
“Errors happen. Made-up citations are a totally different thing where you essentially demolish the trustworthiness of the material,” Lepawsky told CBC, describing the process as “deeply flawed.”
Damage Control Begins
Co-chair Karen Goodnough declined CBC’s interview request, writing via email: “We are investigating and checking references, so I cannot respond to this at the moment.”
The Department of Education and Early Childhood Development acknowledged the problem through spokesperson Lynn Robinson’s statement. The department recognizes “a small number of potential errors in citations” and promises updates to the online version “in the coming days to rectify any errors.”
The scandal raises fundamental questions about academic integrity in the AI age. If a major government report advocating for responsible AI use contains AI-generated fabrications, what does this mean for educational standards and public trust?
Written by Alius Noreika