
At one elite college, over 80% of students now use AI – but it’s not all about outsourcing their work



Over 80% of Middlebury College students use generative AI for coursework, according to a recent survey I conducted with my colleague and fellow economist Zara Contractor. This is one of the fastest technology adoption rates on record, far outpacing the 40% adoption rate among U.S. adults, and it happened less than two years after ChatGPT’s public launch.

Although we surveyed only one college, our results align with similar studies, providing an emerging picture of the technology’s use in higher education.

Between December 2024 and February 2025, we surveyed over 20% of Middlebury College’s student body, or 634 students, to better understand how students are using artificial intelligence, and published our results in a working paper that has not yet gone through peer review.

What we found challenges the panic-driven narrative around AI in higher education and instead suggests that institutional policy should focus on how AI is used, not whether it should be banned.

Not just a homework machine

Contrary to alarming headlines suggesting that “ChatGPT has unraveled the entire academic project” and “AI Cheating Is Getting Worse,” we discovered that students primarily use AI to enhance their learning rather than to avoid work.

When we asked students about 10 different academic uses of AI – from explaining concepts and summarizing readings to proofreading, creating programming code and, yes, even writing essays – explaining concepts topped the list. Students frequently described AI as an “on-demand tutor,” a resource that was particularly valuable when office hours weren’t available or when they needed immediate help late at night.

We grouped AI uses into two types: “augmentation” to describe uses that enhance learning, and “automation” for uses that produce work with minimal effort. We found that 61% of the students who use AI employ these tools for augmentation purposes, while 42% use them for automation tasks like writing essays or generating code.

Even when students used AI to automate tasks, they showed judgment. In open-ended responses, students told us that when they did automate work, it was often during crunch periods like exam week, or for low-stakes tasks like formatting bibliographies and drafting routine emails, not as their default approach to completing meaningful coursework.


Of course, Middlebury is a small liberal arts college with a relatively large share of wealthy students. What about everywhere else? To find out, we analyzed data from other researchers covering over 130 universities across more than 50 countries. The results mirror our Middlebury findings: Globally, students who use AI are more likely to use it to augment their coursework than to automate it.

But should we trust what students tell us about how they use AI? An obvious concern with survey data is that students might underreport uses they see as inappropriate, like essay writing, while overreporting legitimate uses like getting explanations. To verify our findings, we compared them with data from the AI company Anthropic, which analyzed how users with university email addresses actually interacted with its chatbot, Claude.

Anthropic’s data shows that “technical explanations” represent a major use, matching our finding that students most often use AI to explain concepts. Similarly, Anthropic found that designing practice questions, editing essays and summarizing materials account for a substantial share of student usage, which aligns with our results.

In other words, our self-reported survey data matches actual AI conversation logs.

Why it matters

As writer and academic Hua Hsu recently noted, “There are no reliable figures for how many American students use A.I., just stories about how everyone is doing it.” These stories tend to emphasize extreme examples, like a Columbia student who used AI “to cheat on nearly every assignment.”

But these anecdotes can conflate widespread adoption with universal cheating. Our data confirms that AI use is indeed widespread, but students primarily use it to enhance learning, not replace it. This distinction matters: By painting all AI use as cheating, alarmist coverage may normalize academic dishonesty, making responsible students feel naive for following rules when they believe “everyone else is doing it.”

Moreover, this distorted picture provides biased information to university administrators, who need accurate data about actual student AI usage patterns to craft effective, evidence-based policies.

What’s next

Our findings suggest that extreme policies like blanket bans or unrestricted use carry risks. Prohibitions may disproportionately harm students who benefit most from AI’s tutoring functions while creating unfair advantages for rule breakers. But unrestricted use could enable harmful automation practices that may undermine learning.

Instead of one-size-fits-all policies, our findings lead me to believe that institutions should focus on helping students distinguish beneficial AI uses from potentially harmful ones. Unfortunately, research on AI’s actual learning impacts remains in its infancy – no studies I’m aware of have systematically tested how different types of AI use affect student learning outcomes, or whether AI impacts might be positive for some students but negative for others.

Until that evidence is available, everyone interested in how this technology is changing education must use their best judgment to determine how AI can foster learning.





If we are going to build AI literacy into every level of learning, we must be able to measure it



Everywhere you look, someone is telling students and workers to “learn AI.” 

It’s become the go-to advice for staying employable, relevant and prepared for the future. But here’s the problem: While definitions of artificial intelligence literacy are starting to emerge, we still lack a consistent, measurable framework to know whether someone is truly ready to use AI effectively and responsibly. 

And that is becoming a serious issue for education and workforce systems already being reshaped by AI. Schools and colleges are redesigning their entire curriculums. Companies are rewriting job descriptions. States are launching AI-focused initiatives.  

Yet we’re missing a foundational step: agreeing not only on what we mean by AI literacy, but on how we assess it in practice. 

Two major recent developments underscore why this step matters, and why it is important that we find a way to take it before urging students to use AI. First, the U.S. Department of Education released its proposed priorities for advancing AI in education, guidance that will ultimately shape how federal grants will support K-12 and higher education. For the first time, we now have a proposed federal definition of AI literacy: the technical knowledge, durable skills and future-ready attitudes required to thrive in a world influenced by AI. Such literacy will enable learners to engage and create with, manage and design AI, while critically evaluating its benefits, risks and implications. 

Second, we now have the White House’s American AI Action Plan, a broader national strategy aimed at strengthening the country’s leadership in artificial intelligence. Education and workforce development are central to the plan. 


What both efforts share is a recognition that AI is not just a technological shift; it’s a human one. In many ways, the most important AI literacy skills are not about AI itself, but about the human capacities needed to use AI wisely.

Sadly, the consequences of shallow AI education are already visible in workplaces. Some 55 percent of managers believe their employees are AI-proficient, while only 43 percent of employees share that confidence, according to the 2025 ETS Human Progress Report.  

A similar perception gap likely exists between school administrators and teachers. The disconnect creates risks for organizations and reveals how assumptions about AI literacy can diverge sharply from reality.

But if we’re going to build AI literacy into every level of learning, we have to ask the harder question: How do we both determine when someone is truly AI literate and assess it in ways that are fair, useful and scalable? 

AI literacy may be new, but we don’t have to start from scratch to measure it. We’ve tackled challenges like this before, moving beyond check-the-box tests in digital literacy to capture deeper, real-world skills. Building on those lessons will help define and measure this next evolution of 21st-century skills. 

Right now, we often treat AI literacy as a binary: You either “have it” or you don’t. But real AI literacy and readiness are more nuanced. They include understanding how AI works, being able to use it effectively in real-world settings and knowing when to trust it. They include writing effective prompts, spotting bias, asking hard questions and applying judgment.

This isn’t just about teaching coding or issuing a certificate. It’s about making sure that students, educators and workers can collaborate in and navigate a world in which AI is increasingly involved in how we learn, hire, communicate and make decisions.  

Without a way to measure AI literacy, we can’t identify who needs support. We can’t track progress. And we risk letting a new kind of unfairness take root, in which some communities build real capacity with AI and others are left with shallow exposure and no feedback. 


What can education leaders do right now to address this issue? I have a few ideas.  

First, we need a working definition of AI literacy that goes beyond tool usage. The Department of Education’s proposed definition is a good start, combining technical fluency, applied reasoning and ethical awareness.  

Second, assessments of AI literacy should be integrated into curriculum design. Schools and colleges incorporating AI into coursework need clear definitions of proficiency. TeachAI’s AI Literacy Framework for Primary and Secondary Education is a great resource. 

Third, AI proficiency must be defined and measured consistently, or we risk a patchwork of competing standards. Without consistent measurement, one district may see AI literacy as just using ChatGPT, while another defines it far more broadly, leaving students unevenly ready for the next generation of jobs.

To prepare for an AI-driven future, defining and measuring AI literacy must be a priority. Every student will be graduating into a world in which AI literacy is essential. Human resources leaders confirmed in the 2025 ETS Human Progress Report that the No. 1 skill employers are demanding today is AI literacy. Without measurement, we risk building the future on assumptions, not readiness.  

And that’s too shaky a foundation for the stakes ahead. 

Amit Sevak is CEO of ETS, the largest private educational assessment organization in the world. 


This story about AI literacy was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education.



“AI Is No Longer the Future, It’s Here: Education Must Embrace the Change”



Like every other sector, the field of education is no longer untouched by the sweeping transformation brought by Artificial Intelligence (AI). While educators worldwide are still debating how best to adapt to this new reality, a recent seminar in Kolkata underscored one clear message: AI is no longer the future—it is the present, and ignoring it is not an option. Souvik Ghosh reports

“Just like the invention of electricity saved us from studying under lamps, AI is only a tool that will help us in our education—we must adopt it,” said Mumbai-based Epiq Capital Director Navjot Mallika Kaur as she joined other panelists in stressing the importance of AI in the education system at a seminar in Kolkata titled “Future of Education in the Age of Artificial Intelligence.”

Organised by Muskaan, Education For All, the WFUNA Foundation, and the United Nations, the seminar was inaugurated by Darrin Farrant, Director of the United Nations Information Centre (UNIC), who felt AI should be embraced boldly.

Kaur emphasized the urgency of integrating AI into education, citing how thousands of schools in China are already using it to prepare children for the future.

“I have done a lot of research on what Chinese schools are doing. Around 2,000 schools there have adopted AI, and they’re not shying away from it. They’re actually using it to make children future-ready. That’s a reality we must embrace instead of judging or running away from it,” she said.

“AI gives us opportunities. We remain the masters. Irrespective of age, ChatGPT or any AI tool can act as an assistant, helping us sharpen our capacities to get things done,” she noted.

Kolkata-born Kaur further remarked: “The quality of schools and teachers here is already very high, but we must update ourselves in the age of AI. Teachers need to become friends with technology rather than fear it or only dabble in the basics.”

Samyak Chakrabarty, founder of Workverse, added: “West Bengal has always been a hub of vibrant conversations on art and culture, as it should be. But now it’s equally important to bring AI into the dialogue. With Bengal’s unparalleled creativity and intellectual fearlessness, combining this with the computing power of AI can produce extraordinary outcomes.”

The audience included students and teachers from schools like Don Bosco (Park Circus) and The BSS School. Many teachers expressed cautious optimism, acknowledging that AI’s rapid rise is reshaping traditional curricula.

Addressing the gap between traditional and technology-driven education, Bizongo co-founder Aniket Deb emphasized the enduring role of human agency.

“Learning has never been more important. Even with Google Maps, humans still need to input the start and end points. Education is about survival first, then thriving. Progress won’t stop just because jobs change—humanity doesn’t work that way,” he explained.

Deb, who co-founded Bizongo in 2015, inspired by Prime Minister Narendra Modi’s Make in India initiative, urged students to focus sharply on their interests. “Transitions always create new jobs. Students who consciously choose their subjects and directions will shine. The ability to choose—even deciding which AI tool to use—will define the future,” he stressed.

Entrepreneur Arjun Vaidya, founder of Dr. Vaidya’s and sixth-generation inheritor of a 150-year-old Ayurvedic legacy, raised questions about the relevance of rote learning in the AI age.

Recalling his own schooling, Vaidya said: “I used to paste chart papers full of dates and notes on my walls to memorize them. But now, students don’t need to mug up those dates—they’re just a click away. What matters is understanding the significance of those dates and how they shaped history.”

According to UNIC Director Darrin Farrant, the UN General Assembly this week announced two initiatives to enhance global cooperation on AI governance. First, the establishment of the UN Independent International Scientific Panel on AI; and second, a global dialogue on AI governance. These steps aim to harness AI’s benefits while managing its risks.

“India, home to one-sixth of humanity, will be a key player in this journey. We must embrace AI boldly, but also ethically and inclusively,” said Farrant, marking his first visit to Kolkata.

 

IBNS-TWF

 





South Pasadena School Board to Discuss Student Smartphone Ban, AI in Classrooms & New Health Benefits | The South Pasadenan



The South Pasadena Unified School District (SPUSD) Board of Education will hold its next regular meeting on Tuesday, September 9, 2025. The meeting will address a wide range of topics, including the first reading of numerous new and revised district policies, approval of several student trips, and key financial decisions for the 2025-2026 school year.

The meeting will be held at the SPUSD District Office Board Room, located at 1100 El Centro Street, South Pasadena, CA 91030. The closed session begins at 5:30 p.m., followed by the open session at 6:30 p.m. The public is welcome to attend in person or watch the livestream.

For those wishing to address the Board, speaker cards must be submitted before the meeting begins. Comments are limited to three minutes per speaker. The full agenda and supporting materials are available on the district’s website.

Major Policy Revisions on the Agenda

The Board will conduct a first reading of updates to numerous district policies, driven by new state laws and recent court decisions. Key proposed changes include:

  • Student Smartphone Use: A new policy will be developed by July 1, 2026, to limit or prohibit student use of smartphones at school sites, in accordance with AB 3216.

  • Nondiscrimination and Harassment: Policies are being updated to reflect SB 1137, which expands the definition of discrimination to include the combination of two or more protected characteristics. Updates also address the Tennessee v. Cardona court decision related to Title IX regulations.

  • Instructional Materials: A new court ruling (Mahmoud v. Taylor) prompts updates to policies on religious beliefs and sexual health instruction, affirming parents’ right to be notified and opt their children out of certain instructional content that interferes with their religious development.

  • School Safety and Student Health: The Comprehensive Safety Plan will be updated to include high expectations for staff conduct and training. Other policies address suicide prevention strategies and opioid safety, including allowing students to carry fentanyl test strips and naloxone.

These policies will be presented for final approval at the October 14, 2025, board meeting.

Financial Decisions and Contracts

The Board is set to take action on several key financial items. It will vote to approve the 2024-2025 Unaudited Actuals Report, a state-required fiscal report that finalizes the previous year’s budget figures. Additionally, the Board will consider a resolution to adopt the annual Gann Limit, which is intended to constrain government spending growth.

Several significant contracts are also up for approval, including:

  • An agreement with the Los Angeles County Office of Education for $9,100 to provide professional development on generative artificial intelligence (AI) for middle and high school faculty.

  • Contracts with several non-public schools and agencies to provide services for special education students, totaling nearly $1.2 million.

  • Approval of commercial warrants totaling $2,499,234.93 issued between July 31 and August 25, 2025.

  • Resolutions to change the district’s health care provider to Self-Insured Schools of California III (SISC III) for all employee groups, a move expected to result in significant savings. The change would be effective January 1, 2026.

Student Enrichment and Recognitions

The agenda includes the approval of several overnight field trips for students across the district:

  • 5th Grade: Students from Arroyo Vista, Marengo, and Monterey Hills elementary schools will attend Outdoor Science School in Wrightwood, California, in October.

  • 7th Grade: Approximately 155 middle school students will travel to Pali Institute in Running Springs for an outdoor education camp from November 7-9, 2025.

  • High School: Three SkillsUSA students will travel to Washington, D.C., to participate in the Washington Leadership Training Institute Conference from September 19-24, 2025.

The costs for these trips will be covered by parent donations, PTA funds, and fundraising, with assurances that no student will be denied participation due to an inability to pay.

Finally, the Board will formally introduce the new Student Board Member, Maeve DeStefano, and recognize the District Teachers of the Year.


