
It’s true that my fellow students are embracing AI – but this is what the critics aren’t seeing | Elsie McDowell



If you read about the role of artificial intelligence in higher education, the landscape looks bleak. Students are cheating en masse in our assessments and open-book, online exams using AI tools, all the while making ourselves stupider. The next generation of graduates, apparently, will complete their degrees without ever having so much as approached a critical thought.

Given that my course is examined entirely through closed-book exams, and that I worry about the vast amounts of water and energy AI datacentres consume, I generally avoid using ChatGPT. But in my experience, students see it as a broadly acceptable tool in the learning process. Although debates about AI tend to focus on “cheating”, it is increasingly being used to assist with research, or to help structure essays.

There are valid concerns about the abuse and overuse of large language models (LLMs) in education. But if you want to understand why so many students are turning to AI, you need to understand what brought us to this point – and the educational context against which this is playing out.

In March 2020, I was about to turn 15. When the news broke that schools would be closing as part of the Covid lockdown, I remember cheers erupting in the corridors. As I celebrated what we all thought was just two weeks off school, I could not have envisioned the disruption that would mar the next three years of my education.

That year, GCSEs and A-levels were cancelled and replaced with teacher-assessed grades, which notoriously privileged those at already well-performing private schools. After further school closures, and a prolonged period of dithering, the then-education secretary, Gavin Williamson, cancelled them again in 2021. My A-level cohort in 2023 was the first to return to “normal” examinations – in England, at least – which resulted in a punitive crackdown on grade inflation that left many with far lower grades than expected.

At the same time, universities across the country were also grappling with how to assess students who were no longer physically on campus. The solution: open-book, online assessments for papers that were not already examined by coursework. When the students of the lockdown years graduated, the university system did not immediately return to its pre-Covid arrangements. Five years on, 70% of universities still use some form of online assessment.

This is not because, as some would have you believe, university has become too easy. These changes are a response to the fact that the large majority of current home students did not have the typical experience of national exams. Given the extensive periods we spent away from school during our GCSE and A-level years, there were inevitably parts of the curriculum that we were never able to cover. But beyond missed content, the government’s repeated backtracking and U-turning on the format of our exams from 2020 onwards bred uncertainty that continued to shape how we were assessed – even as we progressed on to higher education.

In my first year of university, half of my exams were online. This year, they all returned to handwritten, closed-book assessments. In both cases, I did not get confirmation about the format of my exams until well into the academic year. And, in one instance, third-year students sitting the exact same paper as me were examined online and in a longer timeframe, to recognise that they had not sat a handwritten exam at any point during their degree.

And so when ChatGPT was released in 2022, it landed in a university system in transition, characterised by yet more uncertainty. University exams had already become inconsistent and widely variable, both between universities and within faculties themselves – which only served to increase the allure of AI for students who felt on the back foot, and to make its use harder to detect and monitor.

Even if it were not for our botched exams, being a student is more expensive than ever: 68% of students have part-time jobs, the highest rate in a decade. The student loan system, too, leaves those from the poorest backgrounds with the largest amounts of debt. I am already part of the first year to have to pay back our loans over 40, rather than 30, years. And that is before tuition fees rise again.

Students have less time than ever to actually be students. AI is a time-saving tool; if students don’t have the time or resources to fully engage with their studies, it is because something has gone badly wrong with the university system itself.

The use of AI is mushrooming because it’s convenient and fast, yes, but also because of the uncertainty that prevails around post-Covid exams, as well as the increasing financial precarity of students. Universities need to pick an exam format and stick to it. If this involves coursework or open-book exams, there needs to be clarity about what “proportionate” usage of AI looks like. For better or for worse, AI is here to stay. Not because students are lazy, but because what it means to be a student is changing just as rapidly as technology.

  • Elsie McDowell is a student. She was the 2023 winner of the Hugo Young award, 16-18 age category

  • Do you have an opinion on the issues raised in this article? If you would like to submit a response of up to 300 words by email to be considered for publication in our letters section, please click here.





The skills equation for growth



According to the World Economic Forum’s Future of Jobs report, 39% of existing skill sets will be transformed or become outdated within the next five years. For universities and colleges, that raises a practical question: how do we help learners transition into jobs that are evolving as they study?

Our analysis of labour market data indicates that inefficiencies in career transitions and skills mismatches impose a substantial, recurring cost on the economy. Whether you look at OECD research or UK business surveys, the signal is consistent: better alignment between skills and roles is a national growth lever.

A balanced skills strategy has to do two things at once:

  • Invest in homegrown capability at scale, from supporting educators with a future-facing curriculum to incentivising businesses to invest in skills development.
  • Attract and retain international talent in areas of genuine shortage, so employers can keep delivering while the domestic pipeline grows.

Language sits at the heart of how international talent is realised. English proficiency is not the only determinant of success—qualifications, work experience, employer practices, and student support all matter—but it is a critical enabler of academic attainment and workplace integration.

Accurately understanding what a learner can do with English in real contexts helps institutions place students on the right programmes and target support, and it helps employers identify candidates who can contribute from day one. This is not only a UK story. Many international learners return home, where English and job-relevant skills increase employability and earning power.

The rise of advanced technology raises opportunities for efficiency, but also makes testing more vulnerable to misuse, so confidence matters more than ever. From our work across the sector, three priorities stand out for assessments:

  • First, trusted results. Pair advanced AI scoring with human oversight and layered security. For higher‑stakes sittings, secure centres add the necessary extra assurance: biometric ID checks, trained invigilators in the room, and multi‑camera coverage.
  • Second, relevance to real academic life. Assess the communication students actually do: follow lectures and seminars, summarise complex spoken content, interpret visuals, and contribute to discussions.
  • Third, fairness. Use CEFR‑aligned scoring that’s independently validated and monitored, so admissions decisions can be made with confidence.

Crucially, better measurement is a means, not an end. Used well, it helps inform admissions and placement, so students start in the right place and get in‑sessional support where it will make the biggest difference. And it provides employers and careers services with clearer evidence that graduates can operate in the language demands of specific sectors.

The UK has a window to convert uncertainty into advantage. If we pair investment in homegrown skills with a welcoming, well‑governed approach to international talent—and if we use evidence to match people to courses and jobs more precisely—we can ease the drag of mismatch and accelerate growth. At the centre of that effort is something deceptively simple: the ability to connect in a shared language. When we get that right, opportunities multiply, for learners, for employers and for every region of the country.

The author: James Carmichael, country manager UK and Ireland, Pearson English Language Learning





Opinion | Global AI war will be won in the key arena of education and training



In the global race for artificial intelligence (AI), nations rightly chase cutting-edge technologies, big data and data centres heavy with graphics processing units (GPUs). But thought leaders including OpenAI CEO Sam Altman and institutions from the Federation of American Scientists to China’s Ministry of Education are urging investment in educator training and AI literacy for all citizens. They argue for a more human-centred AI strategy.

Having taught AI and data analytics in China, I have seen the payoff: graduates join internet giants, leading electric-vehicle makers and the finance industry.

My case is simple: the country that best educates people to collaborate with AI will lead in productivity, innovation and competitiveness, achieving the highest level of augmented collective intelligence. This reframes the so-called AI war not as a contest of GPUs and algorithms, but as a race to build the most AI-capable human capital. Data and hardware are ammunition; the strategic weapon is AI education.

According to Norwegian Business School professor Vegard Kolbjørnsrud, six principles define how humans and AI can work together in organisations. These principles aren’t just for managers or tech executives; they form a core mindset that should be embedded in any national AI education strategy to improve productivity for professors, teachers and students.

Let’s briefly unpack each principle and how it relates to broader national competitiveness in AI education.

The first is what he calls the addition principle. Organisational intelligence grows when human and digital actors are added effectively. We need to teach citizens to migrate from low-value tasks to higher-value ones with the help of AI. A nation doesn’t need every citizen to be a machine-learning engineer, but it needs most people to understand how AI augments roles in research and development, healthcare, logistics, manufacturing, finance and creative industries. Thus, governments should democratise AI by investing in platforms that reskill everyone, fast.





What counts as cheating?



The book report is now a thing of the past. Take-home tests and essays are becoming obsolete.

High school and college educators say student use of artificial intelligence has become so prevalent that assigning writing outside of the classroom is like asking students to cheat.

“The cheating is off the charts. It’s the worst I’ve seen in my entire career,” says Casey Cuny, who has taught English for 23 years. Educators are no longer wondering if students will outsource schoolwork to AI chatbots. “Anything you send home, you have to assume is being AI’ed.”

The question is how schools can adapt, because many of the teaching and assessment tools used for generations are no longer effective. As AI technology rapidly improves and becomes more entwined with daily life, it is transforming how students learn and study and how teachers teach, and it’s creating new confusion over what constitutes academic dishonesty.

“We have to ask ourselves, what is cheating?” says Cuny, a 2024 recipient of California’s Teacher of the Year award. “Because I think the lines are getting blurred.”

Cuny’s students at Valencia High School in Southern California now do most of their writing in class. He monitors student laptop screens from his desktop, using software that lets him “lock down” their screens or block access to certain sites. He’s also integrating AI into his lessons and teaching students how to use AI as a study aid “to get kids learning with AI instead of cheating with AI.”

In rural Oregon, high school teacher Kelly Gibson has made a similar shift to in-class writing. She also incorporates more verbal assessments to have students discuss their understanding of the assigned reading.

“I used to give a writing prompt and say, ‘In two weeks, I want a five-paragraph essay,’” says Gibson. “These days, I can’t do that. That’s almost begging teenagers to cheat.”

Take, for example, a once typical high school English assignment: Write an essay that explains the relevance of social class in “The Great Gatsby.” Many students say their first instinct is to ask ChatGPT for help “brainstorming.” Within seconds, ChatGPT yields a list of essay ideas, examples, and quotes to back them up. The chatbot ends by asking if it can do more: “Would you like help writing any part of the essay? I can help you draft an introduction or outline a paragraph!”

Students are uncertain when AI usage is out of bounds

Students say they often turn to AI with good intentions for things like research, editing or help reading difficult texts. But AI offers unprecedented temptation, and it’s sometimes hard to know where to draw the line.

College sophomore Lily Brown, a psychology major at an East Coast liberal arts school, relies on ChatGPT to help outline essays because she struggles putting the pieces together herself. ChatGPT also helped her through a freshman philosophy class, where assigned reading “felt like a different language” until she read AI summaries of the texts.

“Sometimes I feel bad using ChatGPT to summarize reading, because I wonder, is this cheating? Is helping me form outlines cheating? If I write an essay in my own words and ask how to improve it, or when it starts to edit my essay, is that cheating?”

Her class syllabi say things like: “Don’t use AI to write essays and to form thoughts,” she says, leaving a lot of grey area. Students say they often shy away from asking teachers for clarity because admitting to any AI use could flag them as cheaters.

Schools tend to leave AI policies to teachers, often meaning that rules vary widely within the same school. Some educators, for example, welcome the use of Grammarly.com, an AI-powered writing assistant, to check grammar. Others forbid it, noting the tool also offers to rewrite sentences.

“Whether you can use AI or not depends on each classroom. That can get confusing,” says Valencia 11th grader Jolie Lahey. She credits Cuny with teaching her sophomore English class various AI skills, like uploading study guides to ChatGPT, having the chatbot quiz them and then asking it to explain the problems they got wrong.

But this year, her teachers have strict “No AI” policies. “It’s such a helpful tool. And if we’re not allowed to use it, that just doesn’t make sense,” Lahey says. “It feels outdated.”

Schools are introducing guidelines, gradually

Many schools initially banned use of AI after ChatGPT launched in late 2022. But views on the role of artificial intelligence in education have shifted dramatically. The term “AI literacy” has become a buzzword of the back-to-school season, with a focus on how to balance the strengths of AI with its risks and challenges.

Over the summer, several colleges and universities convened their AI task forces to draft more detailed guidelines or provide faculty with new instructions.

The University of California, Berkeley, emailed all faculty new AI guidance that instructs them to “include a clear statement on their syllabus about course expectations” regarding AI use. The guidance offered language for three sample syllabus statements: for courses that require AI, ban AI in and out of class, or allow some AI use.

“In the absence of such a statement, students may be more likely to use these technologies inappropriately,” the email said, stressing that AI is “creating new confusion about what might constitute legitimate methods for completing student work.”

Carnegie Mellon University has seen a huge uptick in academic responsibility violations due to AI, but often students aren’t aware they’ve done anything wrong, says Rebekah Fitzsimmons, chair of the AI faculty advising committee at the university’s Heinz College of Information Systems and Public Policy.

For example, one student learning English wrote an assignment in his native language and used DeepL, an AI-powered translation tool, to translate his work to English. But he didn’t realize the platform also altered his language, which was flagged by an AI detector.

Fitzsimmons said enforcing academic integrity policies has become more complicated since the use of AI is hard to spot and even harder to prove. Faculty are allowed flexibility when they believe a student has unintentionally crossed a line, but they are now more hesitant to point out violations because they don’t want to accuse students unfairly. Students worry that there is no way to prove their innocence if they are falsely accused.

Over the summer, Fitzsimmons helped draft detailed new guidelines for students and faculty that strive to create more clarity. Faculty have been told that a blanket ban on AI “is not a viable policy” unless instructors change how they teach and assess students. Many faculty members are doing away with take-home exams. Some have returned to pen-and-paper tests in class, she said, and others have moved to “flipped classrooms,” where homework is done in class.

Emily DeJeu, who teaches communication courses at Carnegie Mellon’s business school, has eliminated writing assignments as homework and replaced them with in-class quizzes done on laptops in “a lockdown browser” that blocks students from leaving the quiz screen.

“To expect an 18-year-old to exercise great discipline is unreasonable,” DeJeu said. “That’s why it’s up to instructors to put up guardrails.”


