
Linda McMahon’s education toolkit: Reading, AI, and discipline at the heart of Trump’s school reforms


Education Secretary Linda McMahon is preparing to roll out a sweeping toolkit of recommendations for American schools, aimed at boosting literacy rates, integrating artificial intelligence responsibly, and restoring classroom discipline. The initiative underscores a pivotal shift in federal education policy under the Trump administration, blending classical approaches with cutting-edge technology while reasserting parental and local control, according to reporting by the New York Post.

Federal leadership in a decentralised system

Although President Trump has pushed to scale back the Department of Education, McMahon argues that Washington still has a role in promoting a culture of excellence. As reported by the New York Post, the new toolkit will not mandate compliance but instead share proven strategies and case studies from schools nationwide, complete with direct educator contacts for peer-to-peer learning.

Science of reading and classical learning models

Central to the toolkit is the science of reading, a structured literacy approach outperforming legacy federal initiatives like No Child Left Behind. McMahon is also spotlighting a revival of classical education models in states such as Mississippi, Louisiana, and Florida, where traditional curricula have shown measurable gains in student achievement, the Post noted.

Responsible AI integration

AI literacy will become a cornerstone of K–12 education through Trump’s executive order creating a federal AI task force. Key goals include:

  • Early exposure for students via hands-on projects and industry partnerships.
  • Teacher training grants under the Elementary and Secondary Education Act and Title II of the Higher Education Act.
  • Public-private collaboration with tech and academic institutions to ensure safe and effective AI adoption.

Yet, officials warn of risks: generative AI tools like ChatGPT may enable widespread cheating. McMahon has pledged to propose “guardrails” to safeguard academic integrity while encouraging innovation, according to the New York Post.

Restoring classroom discipline

Marking a sharp departure from Biden-era guidance on equity in discipline, the Trump administration will emphasize behaviour-based accountability. Schools will be directed to enforce discipline without racial discrimination, but also without deference to what McMahon calls “ideology-based” leniency. Teachers and principals, she says, must be empowered to curb disruptive behaviour that undermines learning, the Post reported.

Parental empowerment and school choice

The initiative also ties into the broader Trump administration push for school choice, ensuring parents have greater say in where their children are educated. McMahon has framed parental involvement as key to driving grassroots reform and ensuring students are not “trapped in failing schools,” as reported by the New York Post.

Urgency amid declining performance

The rollout comes at a time when U.S. test scores remain below pre-pandemic levels. Critics argue that a fixation on diversity, equity, and inclusion (DEI) has displaced merit in both K–12 and higher education. The Trump administration, already clashing with elite universities over admissions and curricula, is signalling a strong return to merit-based standards.

A restructuring of priorities

Taken together, McMahon’s toolkit represents a sweeping recalibration of American education. By merging classical learning principles with AI-driven innovation and restoring discipline in classrooms, the Trump administration hopes to redirect schools toward measurable academic excellence—while shifting power from Washington to parents, teachers, and local leaders.






OpenAI inks deal with Greece for AI innovation in schools


Greece and OpenAI have signed a Memorandum of Understanding (MoU) to expand the integration of artificial intelligence (AI) in the country, with a focus on education and support for small and medium-sized enterprises (SMEs).

Dubbed “OpenAI for Greece,” the collaboration was formalised at the Hellenic Expo event, with key government officials in attendance. Prime Minister Kyriakos Mitsotakis, Onassis Foundation President Anthony Papadimitriou, and OpenAI Chief Global Affairs Officer Chris Lehane signed the document.

Greece will pioneer the mainstream use of ChatGPT Edu, a tailor-made version of the AI chatbot for educational institutions. The MoU backs a phased pilot, starting in the next academic year, to integrate ChatGPT Edu into the country’s education system.

Under the first phase, authorities will focus on improving AI literacy among students and teachers in select institutions, with the second phase featuring a nationwide rollout. The Onassis Foundation will lead the implementation of ChatGPT Edu alongside The Tipping Point, which will handle teacher onboarding.

“From Plato’s Academy to Aristotle’s Lyceum—Greece is the historical birthplace of western education,” said Lehane. “Today, with millions of Greeks using ChatGPT on a regular basis, the country is once again showing its dedication to learning and ideas.”

Going forward, a joint task force comprising representatives from the Prime Minister’s Office and the Ministry of Education will supervise the pilot project. Meanwhile, OpenAI will provide technical support and co-design a teacher training manual focused on safety and productivity.

Launched in 2024, ChatGPT Edu has already seen success at leading universities, including Harvard and Oxford. OpenAI executives are confident of broader adoption across Europe, given the product’s GDPR compliance and the access it gives students and teachers to OpenAI’s latest models.

MoU poised to support the local startup ecosystem

Beyond improving the local educational landscape, the MoU will launch the Greek AI Accelerator Program. Run in partnership with Endeavor Greece, the program will support early-stage Greek firms building AI products and emerging technologies.

Successful firms will have access to OpenAI technology and credits while receiving technical mentorship from OpenAI engineers. Furthermore, the program will offer tailored workshops on regulatory compliance and global safety standards while providing international exposure to local firms.

Before the MoU with OpenAI, Greece had already taken early steps with AI to future-proof key industries. The country rolled out a national blueprint for AI and backed an AI-based platform to curb the spread of fake news online.

Indonesia unveils evaluation mechanism for responsible AI development

Months after launching a Center of Excellence for Artificial Intelligence (AI), Indonesian authorities have developed an evaluation mechanism to guide service providers toward safe AI innovation.

According to a report by local news outlet Antara, the evaluation mechanism is the brainchild of the Ministry of Communications and Digital Affairs. The newly minted mechanism is the Ministry’s attempt to ensure that AI innovation remains aligned with ethics and international best practices.

At the moment, Indonesia’s AI Ethics Guideline is still under development, but self-assessments by local AI developers via an incident reporting system provided the Ministry with data for the evaluation mechanism.

The Ministry’s Director of AI and New Technology Ecosystems, Aju Widya Sari, disclosed that the ethical guidelines will promote inclusiveness, safety, transparency, and accessibility. In turn, the evaluation mechanism will reflect those guidelines, allowing AI developers to operate under the highest global standards.

“The evaluation of the ethical guidelines will be carried out gradually to ensure ethical and responsible AI governance,” said Sari.

While not expressly stated, the evaluation mechanism is expected to involve a multi-layer process with rigorous checks spanning pre-deployment to post-deployment. Observers also expect it to feature metrics for fairness, transparency, privacy, and safety.

Sari disclosed that the evaluation mechanism will deliver sustainability benefits across Indonesia’s economy, society, and environment. The country previously launched an AI Center of Excellence to boost adoption while maintaining a firm stance on public safety.

Furthermore, the Southeast Asian country has made a significant play to deepen its talent pool for emerging technologies via a raft of initiatives.

AI regulation sweeps through the ecosystem

Amid the global push for AI innovation, attempts at regulation have gathered significant steam over the last year. While the EU has surged ahead with a regulatory playbook, other regions are adopting a cautious stance on AI rules for service providers.

Japan has adopted a friendly stance, while Switzerland is opting to remain neutral toward regulations, allowing the ecosystem to develop at its own pace. Meanwhile, UNESCO is riding a wave of international collaborations to promote ethical AI development, signing MoUs with Jamaica, Bangladesh, and the Netherlands.






Medical education needs rigorous trials to validate AI’s role


The use of artificial intelligence in medical education is raising urgent questions about quality, ethics, and oversight. A new study explores how AI is being deployed in training programs and the risks of implementing poorly validated systems.

The research, titled “Artificial Intelligence in Medical Education: A Narrative Review on Implementation, Evaluation, and Methodological Challenges” and published in AI in 2025, presents a sweeping review of the ways AI is already embedded in undergraduate and postgraduate medical education. It highlights both the opportunities offered by AI-driven tutoring, simulations, diagnostics, and assessments, and the methodological and ethical shortcomings that could undermine its long-term effectiveness.

How AI is being implemented in medical education

The study found four major areas where AI is reshaping medical training: tutoring and content generation, simulation and practice, diagnostic skill-building, and competency assessment.

AI-driven tutoring is already gaining ground through large language models such as ChatGPT, which generate quizzes, exam preparation tools, and academic writing support. These tools have been shown to improve student engagement and test performance. However, the research underscores that such systems require constant human supervision to prevent factual errors and discourage students from outsourcing critical thinking.

Simulation and practice environments are another area of rapid development. Machine learning and virtual reality platforms are being deployed to train students in surgery, anesthesia, and emergency medicine. These systems deliver real-time performance feedback and can differentiate between novice and expert performance. Yet challenges persist, including scalability issues, lack of interpretability, and concerns that students may lose self-confidence if they rely too heavily on automated guidance.

Diagnostic training has also been revolutionized by AI. In specialties such as radiology, pathology, dermatology, and ultrasound, AI systems often outperform students in visual recognition tasks. While this demonstrates significant potential, the study warns that biased datasets and privacy concerns linked to biometric data collection could reinforce inequities. Over-reliance on automated diagnosis also risks weakening clinical judgment.

Competency assessment is the fourth area of innovation. Deep learning and computer vision tools now enable objective and continuous evaluation of motor, cognitive, and linguistic skills. They can identify expertise levels, track errors, and deliver adaptive feedback. Still, most of these tools suffer from limited validation, lack of generalizability across contexts, and weak clinical integration.

What risks and challenges are emerging

Enthusiasm for AI must be tempered by a recognition of its limitations, the study asserts. Methodologically, fewer than one-third of published studies rely on randomized controlled trials. Many evaluations are exploratory, small-scale, or short-term, limiting the evidence base for AI’s real impact on education.

There are also risks of passive learning. When students turn to AI systems for ready-made solutions, they may bypass the critical reasoning that medical training is designed to foster. This dynamic raises concerns about the erosion of clinical decision-making skills and the creation of over-dependent learners.

Ethical challenges are equally pressing. Training data for AI systems is often incomplete, unrepresentative, or biased, leading to disparities in how well these tools perform across different populations. Compliance with privacy frameworks such as GDPR remains inconsistent, especially when biometric or sensitive patient data is used in educational platforms. Unequal access to AI resources also risks widening the gap between well-resourced and low-resource institutions, exacerbating inequalities in global medical training.

The study also highlights gaps in faculty preparedness. Many educators lack sufficient AI literacy, leaving them unable to properly supervise or critically evaluate AI-assisted teaching. This threatens to create an uneven landscape in which some institutions adopt AI thoughtfully while others deploy it without adequate safeguards.

What must be done to ensure responsible adoption

The study provides a clear roadmap for addressing these challenges. At its core is the principle of human-in-the-loop supervision. AI should complement but never replace instructors, ensuring that students continue to develop critical reasoning alongside digital support.

The authors call for more rigorous research designs. Longitudinal, multicenter studies and randomized controlled trials are needed to generate evidence that is both reliable and generalizable. Without such studies, AI’s promise in medical education remains speculative.

Curriculum reform is another priority. AI literacy, ethics, and critical appraisal must become standard components of medical training so that students can understand not only how to use AI but also how to question and evaluate it. Educators, too, require training to guide responsible use and prevent misuse.

Finally, the study presses for inclusivity. Access to AI-driven tools must be extended to low-resource settings, ensuring that medical education worldwide benefits from innovation rather than reinforcing divides. Regulatory frameworks should also evolve to cover privacy, fairness, and accountability in AI-assisted learning.




Children and teenagers share impact of pandemic in new report


Branwen Jeffreys, Education Editor, and Erica Witherington


When lockdown started, college student Sam was living with his mum because his parents were separated.

Then his dad died unexpectedly, leaving him feeling that “something had been stolen” from him.

His experience is one of many being highlighted as the Covid-19 public inquiry prepares to look at the pandemic’s impact on children and young people.

A new report – seen exclusively by the BBC – includes individual accounts of 600 people who were under 18 during the pandemic.

They include happy memories of time spent with family, as well as the disruption of school moving online, social isolation and the loss of relatives.

The inquiry will start hearing evidence on these issues from Monday 29 September.

‘I lost a relationship’

Sam’s dad died suddenly during the pandemic, when he was 12

Wigan resident Sam was 12 during the first lockdowns and says he found it hard to understand the rules that prevented him spending more time with his dad.

His dad’s death left him struggling with regrets that he had “lost a relationship” because of the isolation before his father’s death.

“I do feel deep down that something has been stolen from me,” he says.

“But I do know that the procedures that we had to go through were right. It was a bad situation.”

Now 17, Sam has sadly had his resilience tested further after the loss of his mum, who recently died from cancer.

But Sam says that strength he built up during Covid has helped give him “the tools to deal with grief alone”.

‘Trying to catch up on the lost moments’

Kate Eisenstein, who is part of the team leading the inquiry, says the pandemic was a “life-changing set of circumstances” for the children and teenagers who lived through it.

The impact of the pandemic set out in the testimony is hugely varied and includes happier memories from those who flourished in secure homes, enjoying online learning.

Other accounts capture the fears of children in fragile families with no escape from mental health issues or domestic violence.

Some describe the devastating sudden loss of parents or grandparents, followed by online or physically distanced funerals.

Grief for family members lost during the pandemic is an experience shared with some of Sam’s college classmates.

Student Ella told the BBC that losing her granddad during Covid had made her value spending more time with her grandma.

It is one of the ways in which Ella says she is trying to “catch up on the lost moments” she missed during Covid.

Living life online

One almost universal experience for children living through the pandemic was much of life shifting to online platforms.

While this allowed family connections and friendships to be maintained, Ms Eisenstein said some children had darker experiences, spending up to 19 hours a day online, leaving them “really anxious”.

“Some told us how they started comparing their body image to people online, how video games and social media distracted from their learning,” she said.

Most worrying, she said, were the accounts revealing an increased risk of adults seeking to exploit young children online, including sending nude images and inappropriate messages.

The remarkable variety of experiences, both positive and stressful, adds up to what she describes as “an unprecedented insight into children’s inner world”.

Aaliyah, a student at Winstanley College near Wigan, says the social isolation she experienced aged 11 led to her spending hours looking at social media, which began altering her self-confidence.

“With the content I was seeing online, I’d start to look in the mirror and go, ‘I could change that about myself,’ or ‘I don’t really like that about myself,'” she says.

Lasting effects

Avalyn was home schooled through her GCSEs after contracting long Covid

The inquiry is also expected to hear about the experiences of children still living with long Covid, like Avalyn, now 16, who became ill with the virus in October 2021.

While schools were beginning to return to normal, Avalyn was struggling with a deep and debilitating fatigue, and eventually left school for home education.

It took a year to get a formal diagnosis of long Covid and specialist advice.

“I enjoyed being in school, I enjoyed being social and seeing people, and then suddenly that was taken away from me very quickly,” Avalyn says.

Before long Covid, Avalyn says she was sporty at primary school and enjoyed acrobatics.

Like lots of other children her age, Avalyn has shown determination and resilience to achieve the things that might not have been so difficult in other circumstances, and she has now passed four GCSEs.

“I knew I wanted to do GCSEs to prove to myself especially that I still had the ability to do what everyone else was doing,” she says.

She still goes to a performing arts group, which allows her to join in as much or as little as she can manage.

Avalyn admits “it’s weird to say”, but in some ways she is “grateful” to have had long Covid, because of the things she has achieved during her long spells at home.

She has written, illustrated and self-published two children’s books and spent more time on her art.

While the path ahead is not straightforward, she says she is optimistic of finding a way to study and get into work.

The inquiry plans to hear evidence on the impact on children and young people over four weeks, from 29 September to 23 October.


