Education
Is AI helping or hurting students? We answered your questions
Acadia professor Dr. Daniel Lametti says that you can’t rely on AI detectors to identify computer-generated writing. Michael C. York
On Sept. 4, postsecondary education reporter Joe Friesen, online culture reporter Samantha Edwards and psychology professor Dr. Daniel Lametti from Acadia University answered reader questions on the use of AI in schools, what parents and educators should look out for, and how students could harness its potential.
Readers asked about how AI use is changing classrooms, students’ ability to learn, and how they can use it responsibly. Here are some highlights from the Q+A.
Questions and answers have been edited for length and clarity.
AI in classrooms
In what ways can AI be a helpful tool for education, from the perspective of both students and teachers?
Dr. Daniel Lametti: Great question! We know from decades of research in cognitive psychology that passively reviewing material (for example, flipping through flash cards) has almost no impact on learning, and does not increase your memory of the material. In contrast, repeat testing is highly beneficial to learning; this is where AI can be helpful. For instance, students can use AI to generate practice questions in preparation for tests and exams.
In my classes where I allow AI use, I suggest students use it for test preparation, with some important warnings – mainly, anything shared with an AI to generate practice problems (e.g., class notes) may be used to train new AI models, making the material public.
As a university instructor, I’m always on the lookout for new ways to engage students in the classroom. In some cases, I’ve found that AI can be helpful as a brainstorming tool for coming up with new assignments and pedagogical approaches.
Joe Friesen: I’ve heard a lot from universities about how they want to ensure they’re teaching students how to work with AI. It’s here, it’s a reality and it will likely be something that shapes the world of work students are preparing to enter. So teaching students how to use AI can be really valuable and I expect it’s something students want. It’s a tool, and like any tool it has strengths and limitations. For example, in my experience, it provides results that need interpretation and refining, and educators can help students learn how to do that.
I’ve heard students say they use AI tools to help them get started with assignments and understanding, particularly with big, daunting topics. I’ve seen them use it for help with outlines, or with preparing quizzes that help them study.
What does AI mean for catching and addressing learning inequalities? Are less privileged students more likely to fall behind if they don’t have as much access to technology at home?
Friesen: That’s an interesting point. One of the things I notice reading about the high flyers of Silicon Valley and the technology world is an inclination to limit how much access to technology their children have. I see the same thing among many university-educated parents in Canada. I wonder if in future having less access to technology in the early years will be correlated with higher academic achievement.
But I think the danger of missing out is something many people worry about: phones, computers, wifi access, are all nearly essential these days and assumed to be available. But that’s not true for everyone, and it can have an impact.
The danger of missing out is a strong factor in the pull towards new technologies, Joe Friesen says. ILLUSTRATION BY MATT ROTA
How can teachers and professors know that their students’ essays and research papers are actually fully written by them and not by AI?
Lametti: It’s challenging. AI detectors do not work, and likely never will. In classes where I don’t want students to use AI, I tend to give assignments that focus on the learning process over the final outcome. For example, in a writing seminar I have students start assignments in class in front of me. They work in a Google doc so I can track the history of how their papers came together. They’re also graded based on effort rather than the final outcome. So far, I’ve found this approach effective.
Some university professors say AI is here to stay, so students should learn how to use it
Why does it feel like institutions are pushing AI use onto students and kids?
Lametti: I think that some institutions are, and we tend to hear about these schools more than the ones taking a more thoughtful approach. Students should be taught how to use AI, but this is not a challenging task.
Depending on the course, pushing AI on students risks jeopardizing learning outcomes. For example, I teach a senior seminar on effective science writing; if students used AI for every assignment, or even just for editing, they wouldn’t get much from the class, so I don’t allow AI in this course.
Rather than pushing AI on students, at Acadia we leave AI use up to individual professors. We also offer all first-year undergrads a 45-minute lecture on how AI works – what it does well, what it fails at, the ethical implications of using AI, and the possibility that overreliance on AI could compromise learning and lead to cognitive debt. I’ve found that undergraduates are very receptive to this information, and I think this is the correct approach.
I’ve noticed that while a handful of students in public classrooms can manage their phone use, most simply can’t. Whatever benefits phones might bring to learning are often overshadowed by how much they hurt attention spans and real-world social interaction.
Samantha Edwards: You raise a good point! I recently spoke to teachers, principals and students about how effective the cellphone bans are in the classroom, and the results have definitely been mixed. And even when the bans are working, students are still allowed to use their personal laptops or school-supplied Chromebooks in the classroom, which can be even more distracting. Social media is blocked on school wifi, but students told me they’d just hotspot from their phones or would download VPNs to access Snapchat and Instagram anyway.
I agree that we need to encourage students to be creative thinkers, and that we need to teach greater tech/media literacy in elementary and high schools. I’d love to see a class that touches on how social media apps are designed to be addictive and how algorithms actually work.
A classroom at John D Bracco School in Edmonton, Alberta on Thursday, August 28, 2025. Amber Bracken/The Globe and Mail
Youth and technology
How should you talk to your students or kids about AI?
Edwards: I think the most important thing is to explain the limitations of these chatbots and how they can get things wrong. So, if you’re using ChatGPT for homework help, be sure to double-check the answers it gives you with a more reputable source. We’re also seeing more people form relationships with AI chatbots, either as friends, therapists or romantic interests. I think it’s super important for parents to talk to their kids about these relationships and how, although they can be beneficial, they can be harmful too.
Is all this technology use affecting how these young people socialize and interact with others?
Edwards: For sure. I’ve heard from teens who say they chat with their friends constantly on Discord or in other group chats, but they don’t actually hang out in person as much. Some have said while they do want to meet up with friends, they don’t have the energy to actually make plans. It’s easier to just chill at home and scroll social media. So yes, I think it’s definitely affecting how young people socialize.
Also, I should say, I’m a millennial so I spent a lot of my teen years online and chatting with friends on MSN, so I don’t think technology = bad for socializing. But I do think the rise of algorithmic social media has really affected how we interact with others online. Instead of chatting with friends, a lot of the time we’re passively watching strangers do stuff.
I’ve also been seeing a resurgence of in-person events catered specifically to young people who want to be offline, which makes me hopeful! Earlier this summer someone in Toronto posted a TikTok about throwing a picnic in a downtown park and hundreds of people showed up. I think that’s awesome.
AI-generated videos are all over social media. Is this normalizing AI use for young people?
Edwards: Definitely. The proliferation of AI-generated videos on TikTok, Instagram, X and Facebook even in the past six months has been wild to see. Some of these videos are clearly AI-generated, but many look incredibly realistic. Social platforms say that they mark AI-generated content, but in my reporting, I’ve found this rarely happens. Earlier this year I wrote a story about AI influencers, and some of the followers of these accounts didn’t even realize they were AI!
At the same time that users are uploading their own AI videos/images, a lot of social media platforms are integrating generative AI right into their platforms, which I think normalizes its use too. On Facebook, you can now chat with different kinds of AI avatars/users, and on X, its chatbot Grok is built right in too.
Students work on laptops at First Avenue Elementary School in Newark, N.J., on May 22, 2023. GABRIELA BHASKAR/The New York Times News Service
The future of education
As AI takes over more cognitive tasks, what should we resist giving up, and what should we accept as healthy adaptation?
Lametti: As a cognitive scientist, I’m tempted to say that we should resist it all, but in our digital world there are certainly tasks that are more a distraction than a benefit (handling emails, for example). If AI can eliminate these distractions, freeing up time for more productive work, it will be a beneficial tool. But as you point out, it’s important that we don’t offload all of our “thinking” to AI – for one reason, it can’t really do it (AI makes a lot of mistakes), but there’s also some evidence that keeping a mentally sharp mind benefits our cognition as we age. In a university context, it’s important to teach students about AI, how it works, what it does well, and what it fails at, so they know when to use it and when to put it aside in favour of thoughtful learning.
What does AI mean for university admissions, especially at the most competitive programs?
Friesen: I would imagine that AI would be a useful tool for admissions offices when it comes to sorting applications. We know that competitive programs can get hundreds or even thousands more applications than they have spots, so I expect universities may find it economical to have AI do some of the first-phase analysis.
If the question is more about whether we can be sure that top students are really earning their top marks in the age of AI, that’s a tougher one to answer. I think it’s in the hands of high schools to keep an eye on it.
AI is likely here to stay. What do you think needs to happen in the education sphere to adapt to this reality?
Friesen: I asked the president of Toronto Metropolitan University, Mohamed Lachemi, about artificial intelligence and the university in an interview last week. I liked the phrase he used. “Artificial intelligence is like a sword with two edges,” he said. “We use artificial intelligence in many aspects of research and it can give us advantages in efficiency and the use of resources. But at the same time, we have to be also mindful about the danger of artificial intelligence, especially in terms of academic misconduct. We need to put some mechanisms in place to make sure that we don’t lose that control.”
So clearly universities are concerned about the implications for academic integrity. This does change the landscape for how evaluation can be done. But that’s something universities can adapt to.
And AI holds a lot of promise for research, making things that took a long time much quicker. I expect that as time goes on we will hear more about its promise than about the academic misconduct, but I could be wrong.
Is AI further deflating the value of a university degree? Are degrees now worth less?
Friesen: That’s something I’ve heard people speculating about. It’s so early at this point we can’t really know, but there are rumblings that employers may forego hiring in some positions that traditionally went to recent grads. That might be interpreted as a knock against getting a university degree. But over the longer term university degrees have tended to be fairly valuable for most people, most of the time. It’s always possible that it could change, but people may have asked the same thing when the Internet arrived, making knowledge available to anyone, anywhere, and universities have only grown since then. I would think that the critical thinking that’s required to interrogate AI results and direct the bots is the kind of thing that universities aim to foster.
Education
If we are going to build AI literacy into every level of learning, we must be able to measure it

Everywhere you look, someone is telling students and workers to “learn AI.”
It’s become the go-to advice for staying employable, relevant and prepared for the future. But here’s the problem: While definitions of artificial intelligence literacy are starting to emerge, we still lack a consistent, measurable framework to know whether someone is truly ready to use AI effectively and responsibly.
And that is becoming a serious issue for education and workforce systems already being reshaped by AI. Schools and colleges are redesigning their entire curriculums. Companies are rewriting job descriptions. States are launching AI-focused initiatives.
Yet we’re missing a foundational step: agreeing not only on what we mean by AI literacy, but on how we assess it in practice.
Two major recent developments underscore why this step matters, and why it is important that we find a way to take it before urging students to use AI. First, the U.S. Department of Education released its proposed priorities for advancing AI in education, guidance that will ultimately shape how federal grants will support K-12 and higher education. For the first time, we now have a proposed federal definition of AI literacy: the technical knowledge, durable skills and future-ready attitudes required to thrive in a world influenced by AI. Such literacy will enable learners to engage and create with, manage and design AI, while critically evaluating its benefits, risks and implications.
Second, we now have the White House’s American AI Action Plan, a broader national strategy aimed at strengthening the country’s leadership in artificial intelligence. Education and workforce development are central to the plan.
Related: A lot goes on in classrooms from kindergarten to high school. Keep up with our free weekly newsletter on K-12 education.
What both efforts share is a recognition that AI is not just a technological shift, it’s a human one. In many ways, the most important AI literacy skills are not about AI itself, but about the human capacities needed to use AI wisely.
Sadly, the consequences of shallow AI education are already visible in workplaces. Some 55 percent of managers believe their employees are AI-proficient, while only 43 percent of employees share that confidence, according to the 2025 ETS Human Progress Report.
A similar perception gap likely exists between school administrators and teachers. The disconnect creates risks for organizations and reveals how assumptions about AI literacy can diverge sharply from reality.
But if we’re going to build AI literacy into every level of learning, we have to ask the harder question: How do we both determine when someone is truly AI literate and assess it in ways that are fair, useful and scalable?
AI literacy may be new, but we don’t have to start from scratch to measure it. We’ve tackled challenges like this before, moving beyond check-the-box tests in digital literacy to capture deeper, real-world skills. Building on those lessons will help define and measure this next evolution of 21st-century skills.
Right now, we often treat AI literacy as a binary: You either “have it” or you don’t. But real AI literacy and readiness is more nuanced. It includes understanding how AI works, being able to use it effectively in real-world settings and knowing when to trust it. It includes writing effective prompts, spotting bias, asking hard questions and applying judgment.
This isn’t just about teaching coding or issuing a certificate. It’s about making sure that students, educators and workers can collaborate in and navigate a world in which AI is increasingly involved in how we learn, hire, communicate and make decisions.
Without a way to measure AI literacy, we can’t identify who needs support. We can’t track progress. And we risk letting a new kind of unfairness take root, in which some communities build real capacity with AI and others are left with shallow exposure and no feedback.
Related: To employers, AI skills aren’t just for tech majors anymore
What can education leaders do right now to address this issue? I have a few ideas.
First, we need a working definition of AI literacy that goes beyond tool usage. The Department of Education’s proposed definition is a good start, combining technical fluency, applied reasoning and ethical awareness.
Second, assessments of AI literacy should be integrated into curriculum design. Schools and colleges incorporating AI into coursework need clear definitions of proficiency. TeachAI’s AI Literacy Framework for Primary and Secondary Education is a great resource.
Third, AI proficiency must be defined and measured consistently, or we risk a patchwork of literacy standards. Without consistent measurements, one district may see AI literacy as just using ChatGPT, while another defines it far more broadly, leaving students unevenly prepared for the next generation of jobs.
To prepare for an AI-driven future, defining and measuring AI literacy must be a priority. Every student will be graduating into a world in which AI literacy is essential. Human resources leaders confirmed in the 2025 ETS Human Progress Report that the No. 1 skill employers are demanding today is AI literacy. Without measurement, we risk building the future on assumptions, not readiness.
And that’s too shaky a foundation for the stakes ahead.
Amit Sevak is CEO of ETS, the largest private educational assessment organization in the world.
Contact the opinion editor at opinion@hechingerreport.org.
This story about AI literacy was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Hechinger’s weekly newsletter.
Education
“AI Is No Longer the Future, It’s Here: Education Must Embrace the Change”

Like every other sector, the field of education is no longer untouched by the sweeping transformation brought by Artificial Intelligence (AI). While educators worldwide are still debating how best to adapt to this new reality, a recent seminar in Kolkata underscored one clear message: AI is no longer the future—it is the present, and ignoring it is not an option. Souvik Ghosh reports
“Just like the invention of electricity saved us from studying under lamps, AI is only a tool that will help us in our education—we must adopt it,” said Mumbai-based Epiq Capital Director Navjot Mallika Kaur as she joined other panelists in stressing the importance of AI in the education system at a seminar in Kolkata titled “Future of Education in the Age of Artificial Intelligence.”
Organised by Muskaan, Education For All, the WFUNA Foundation, and the United Nations, the seminar was inaugurated by Darrin Farrant, Director of the United Nations Information Centre (UNIC), who felt AI should be embraced boldly.
Kaur emphasized the urgency of integrating AI into education, citing how thousands of schools in China are already using it to prepare children for the future.
“I have done a lot of research on what Chinese schools are doing. Around 2,000 schools there have adopted AI, and they’re not shying away from it. They’re actually using it to make children future-ready. That’s a reality we must embrace instead of judging or running away from it,” she said.
“AI gives us opportunities. We remain the masters. Irrespective of age, ChatGPT or any AI tool can act as an assistant, helping us sharpen our capacities to get things done,” she noted.
Kolkata-born Kaur further remarked: “The quality of schools and teachers here is already very high, but we must update ourselves in the age of AI. Teachers need to become friends with technology rather than fear it or only dabble in the basics.”
Samyak Chakrabarty, founder of Workverse, added: “West Bengal has always been a hub of vibrant conversations on art and culture, as it should be. But now it’s equally important to bring AI into the dialogue. With Bengal’s unparalleled creativity and intellectual fearlessness, combining this with the computing power of AI can produce extraordinary outcomes.”
The audience included students and teachers from schools like Don Bosco (Park Circus) and The BSS School. Many teachers expressed cautious optimism, acknowledging that AI’s rapid rise is reshaping traditional curricula.
Addressing the gap between traditional and technology-driven education, Bizongo co-founder Aniket Deb emphasized the enduring role of human agency.
“Learning has never been more important. Even with Google Maps, humans still need to input the start and end points. Education is about survival first, then thriving. Progress won’t stop just because jobs change—humanity doesn’t work that way,” he explained.
Deb, who co-founded Bizongo in 2015 inspired by Prime Minister Narendra Modi’s Make in India initiative, urged students to focus sharply on their interests. “Transitions always create new jobs. Students who consciously choose their subjects and directions will shine. The ability to choose—even deciding which AI tool to use—will define the future,” he stressed.
Entrepreneur Arjun Vaidya, founder of Dr. Vaidya’s and sixth-generation inheritor of a 150-year-old Ayurvedic legacy, raised questions about the relevance of rote learning in the AI age.
Recalling his own schooling, Vaidya said: “I used to paste chart papers full of dates and notes on my walls to memorize them. But now, students don’t need to mug up those dates—they’re just a click away. What matters is understanding the significance of those dates and how they shaped history.”
According to UNIC Director Darrin Farrant, the UN General Assembly this week announced two initiatives to enhance global cooperation on AI governance. First, the establishment of the UN Independent International Scientific Panel on AI; and second, a global dialogue on AI governance. These steps aim to harness AI’s benefits while managing its risks.
“India, home to one-sixth of humanity, will be a key player in this journey. We must embrace AI boldly, but also ethically and inclusively,” said Farrant, marking his first visit to Kolkata.
IBNS-TWF
Education
South Pasadena School Board to Discuss Student Smartphone Ban, AI in Classrooms & New Health Benefits | The South Pasadenan

The South Pasadena Unified School District (SPUSD) Board of Education will hold its next regular meeting on Tuesday, September 9, 2025. The meeting will address a wide range of topics, including the first reading of numerous new and revised district policies, approval of several student trips, and key financial decisions for the 2025-2026 school year.
The meeting will be held at the SPUSD District Office Board Room, located at 1100 El Centro Street, South Pasadena, CA 91030. The closed session begins at 5:30 p.m., followed by the open session at 6:30 p.m. The public is welcome to attend in person or watch the livestream.
For those wishing to address the Board, speaker cards must be submitted before the meeting begins. Comments are limited to three minutes per speaker. The full agenda and supporting materials are available on the district’s website.
Major Policy Revisions on the Agenda
The Board will conduct a first reading of updates to numerous district policies, driven by new state laws and recent court decisions. Key proposed changes include:
- Student Smartphone Use: A new policy will be developed by July 1, 2026, to limit or prohibit student use of smartphones at school sites, in accordance with AB 3216.
- Nondiscrimination and Harassment: Policies are being updated to reflect SB 1137, which expands the definition of discrimination to include the combination of two or more protected characteristics. Updates also address the Tennessee v. Cardona court decision related to Title IX regulations.
- Instructional Materials: A new court ruling (Mahmoud v. Taylor) prompts updates to policies on religious beliefs and sexual health instruction, affirming parents’ right to be notified and to opt their children out of certain instructional content that interferes with their religious development.
- School Safety and Student Health: The Comprehensive Safety Plan will be updated to include high expectations for staff conduct and training. Other policies address suicide prevention strategies and opioid safety, including allowing students to carry fentanyl test strips and naloxone.
These policies will be presented for final approval at the October 14, 2025, board meeting.
Financial Decisions and Contracts
The Board is set to take action on several key financial items. It will vote to approve the 2024-2025 Unaudited Actuals Report, a state-required fiscal report that finalizes the previous year’s budget figures. Additionally, the Board will consider a resolution to adopt the annual Gann Limit, which is intended to constrain government spending growth.
Several significant contracts are also up for approval, including:
- An agreement with the Los Angeles County Office of Education for $9,100 to provide professional development on generative artificial intelligence (AI) for middle and high school faculty.
- Contracts with several non-public schools and agencies to provide services for special education students, totaling nearly $1.2 million.
- Approval of commercial warrants totaling $2,499,234.93 issued between July 31 and August 25, 2025.
- Resolutions to change the district’s health care provider to Self-Insured Schools of California III (SISC III) for all employee groups, a move expected to result in significant savings. The change would be effective January 1, 2026.
Student Enrichment and Recognitions
The agenda includes the approval of several overnight field trips for students across the district:
- 5th Grade: Students from Arroyo Vista, Marengo, and Monterey Hills elementary schools will attend Outdoor Science School in Wrightwood, California, in October.
- 7th Grade: Approximately 155 middle school students will travel to Pali Institute in Running Springs for an outdoor education camp from November 7-9, 2025.
- High School: Three SkillsUSA students will travel to Washington, D.C., to participate in the Washington Leadership Training Institute Conference from September 19-24, 2025.
The costs for these trips will be covered by parent donations, PTA funds, and fundraising, with assurances that no student will be denied participation due to an inability to pay.
Finally, the Board will formally introduce the new Student Board Member, Maeve DeStefano, and recognize the District Teachers of the Year.