Tools & Platforms

Lecturer Says AI Has Made Her Workload Skyrocket, Fears Cheating

This as-told-to essay is based on a transcribed conversation with Risa Morimoto, a senior lecturer in economics at SOAS University of London, in England. The following has been edited for length and clarity.

Students always cheat.

I’ve been a lecturer for 18 years, and I’ve dealt with cheating throughout that time, but with AI tools becoming widely available in recent years, I’ve experienced a significant change.

There are definitely positive aspects to AI. It’s much easier to get access to information and students can use these tools to improve their writing, spelling, and grammar, so there are fewer badly written essays.

However, I believe some of my students have been using AI to generate essay content that pulls information from the internet, instead of using material from my classes to complete their assignments.

AI is supposed to help us work efficiently, but my workload has skyrocketed because of it. I have to spend lots of time figuring out whether the work students are handing in was really written by them.

I’ve decided to take dramatic action, changing the way I assess students to encourage them to be more creative and rely less on AI. The world is changing, so universities can’t stand still.

Cheating has become harder to detect because of AI

I’ve worked at SOAS University of London since 2012. My teaching focus is ecological economics.

Initially, my teaching style was exam-based, but I found that students were anxious about one-off exams, and their results wouldn’t always correspond to their performance.

I eventually pivoted to a focus on essays. Students chose their topic and consolidated theories into an essay. It worked well — until AI came along.

Cheating used to be easier to spot. I’d maybe catch one or two students cheating by copying huge chunks of text from internet sources, leading to a plagiarism case. Even two or three years ago, detecting inappropriate AI use was easier due to signs like robotic writing styles.

Now, with more sophisticated AI technologies, it’s harder to detect, and I believe the scale of cheating has increased.

I’ll read 100 essays, and some of them will be very similar, using identical case examples that I’ve never taught.

These examples are typically referenced on the internet, which makes me think the students are using an AI tool that is incorporating them. Some of the essays will cite 20 pieces of literature, but not a single one will be something from the reading list I set.

While students can use examples from internet sources in their work, I’m concerned that some students have just used AI to generate the essay content without reading or engaging with the original source.

I started using AI detection tools to assess work, but I’m aware this technology has limitations.

AI tools are easy to access for students who feel pressured by the amount of work they have to do. University fees are increasing, and a lot of students work part-time jobs, so it makes sense to me that they want to use these tools to complete work more quickly.

There’s no obvious way to judge misconduct

During the first lecture of my module, I’ll tell students they can use AI to check grammar or summarize the literature to better understand it, but they can’t use it to generate responses to their assignments.

SOAS has guidance for AI use among students, which sets similar principles about not using AI to generate essays.

Over the past year, I’ve sat on an academic misconduct panel at the university, dealing with students who’ve been flagged for inappropriate AI use across departments.

I’ve seen students refer to these guidelines and say that they only used AI to support their learning and not to write their responses.

It can be hard to make decisions because you can’t be 100% sure from reading the essay whether it’s AI-generated or not. It’s also hard to draw a line between cheating and using AI to support learning.

Next year, I’m going to dramatically change my assignment format

My colleagues and I speak about the negative and positive aspects of AI, and we’re aware that we still have a lot to learn about the technology ourselves.

The university is encouraging lecturers to change their teaching and assessment practices. At the department level, we often discuss how to improve things.

I send my two young children to a school with an alternative, progressive education system, rather than a mainstream British state school. Seeing how my kids are educated has inspired me to try two alternative assessment methods this coming academic year. I had to go through a formal process with the university to get them approved.

First, I’ll ask my students to choose a topic and produce a summary of what they learned about it in class. Second, they’ll create a blog, translating the highly technical terms they’ve understood into a more communicable format.

My aim is to make sure the assignments are directly tied to what we’ve learned in class and make assessments more personal and creative.

The old assessment model, which involves memorizing facts and regurgitating them in exams, isn’t useful anymore. ChatGPT can easily give you a beautiful summary of information like this. Instead, educators need to help students with soft skills, communication, and out-of-the-box thinking.

In a statement to BI, a SOAS spokesperson said students are guided to use AI in ways that “uphold academic integrity.” They said the university encouraged students to pursue work that is harder for AI to replicate and has “robust mechanisms” in place for investigating AI misuse. “The use of AI is constantly evolving, and we are regularly reviewing and updating our policies to respond to these changes,” the spokesperson added.

Do you have a story to share about AI in education? Contact this reporter at ccheong@businessinsider.com.





Gelson’s adopts Upshop’s AI-powered tech

Gelson’s Markets has gone all-in on artificial intelligence with plans to deploy Upshop’s total store platform to manage forecasting, ordering, inventory, and production planning, the Austin-based tech company announced Monday.

Gelson’s, which operates 26 upscale supermarkets and one convenience store, ReCharge by Gelson’s, in Southern California, said the partnership ensures that “every location is tuned into local demand dynamics.”

The SaaS company has positioned itself as a leader in AI-powered inventory management with a suite of tools that streamline the process, including direct store delivery (DSD) future-proofing, food traceability, and food waste management.

“In a competitive grocery landscape, scale isn’t everything—intelligence is,” said Ryan Adams, president and CEO of Gelson’s Markets, in a press release. “With Upshop’s embedded platform and AI-driven capabilities, we’re empowering our stores to be hyper-responsive, efficient, and focused on the guest experience. It’s how Gelson’s can compete at the highest level.”

Implementing the new technology, according to Upshop, positions Gelson’s to compete in “a market dominated by national chains.”

The grocery retailer’s adoption of the platform will kick off with a focus on “eliminating food waste and optimizing fresh food production—especially within foodservice,” with the goals of reducing shrink, streamlining production, and enhancing quality, according to Upshop.

The premium grocery chain’s announcement appears to build on its recent investment in technology. In January 2024, the grocer announced a partnership with Scottsdale, Ariz.-based Clear Demand, which specializes in so-called intelligent price management and optimization (IPMO). That partnership aims to manage retail pricing strategies for the grocer.
Gelson’s was sold by TPG Capital to Tokyo-based Pan Pacific International Holdings (PPIH) in 2021.


IT Summit focuses on balancing AI challenges and opportunities — Harvard Gazette

The critical role of technology in advancing Harvard’s mission and the potential of generative AI to reshape the academic and operational landscape were the key topics of the University’s 12th annual IT Summit. Hosted by the CIO Council, the June 11 event attracted more than 1,000 Harvard IT professionals.

“Technology underpins every aspect of Harvard,” said Klara Jelinkova, vice president and University chief information officer, who opened the event by praising IT staff for their impact across the University.

That sentiment was echoed by keynote speaker Michael D. Smith, the John H. Finley Jr. Professor of Engineering and Applied Sciences and Harvard University Distinguished Service Professor, who described “people, physical spaces, and digital technologies” as three of the core pillars supporting Harvard’s programs. 

In his address, “You, Me, and ChatGPT: Lessons and Predictions,” Smith explored the balance between the challenges and the opportunities of using generative AI tools. He pointed to an “explainability problem” in generative AI tools and how they can produce responses that sound convincing but lack transparent reasoning: “Is this answer correct, or does it just look good?” Smith also highlighted the challenges of user frustration due to bad prompts, “hallucinations,” and the risk of overreliance on AI for critical thinking, given its “eagerness” to answer questions. 

In showcasing innovative coursework from students, Smith highlighted the transformative potential of “tutorbots,” or AI tools trained on course content that can offer students instant, around-the-clock assistance. AI is here to stay, Smith noted, so educators must prepare students for this future by ensuring they become sophisticated, effective users of the technology. 

Asked by Jelinkova how IT staff can help students and faculty, Smith urged the audience to identify early adopters of new technologies to “understand better what it is they are trying to do” and support them through the “pain” of learning a new tool. Understanding these uses and fostering collaboration can accelerate adoption and “eventually propagate to the rest of the institution.” 

The spirit of innovation and IT’s central role at Harvard continued throughout the day’s programming, which was organized into four pillars:  

  • Teaching, Learning, and Research Technology included sessions where instructors shared how they are currently experimenting with generative AI, from the Division of Continuing Education’s “Bot Club,” where instructors collaborate on AI-enhanced pedagogy, to the deployment of custom GPTs and chatbots at Harvard Business School.
  • Innovation and the Future of Services included sessions on AI video experimentation, robotic process automation, ethical implementation of AI, and a showcase of the University’s latest AI Sandbox features.
  • Infrastructure, Applications, and Operations featured a deep dive on the extraordinary effort to bring the new David Rubenstein Treehouse conference center to life, including testing new systems in a physical “sandbox” environment and deploying thousands of feet of network cabling. 
  • And the Skills, Competencies, and Strategies breakout sessions reflected on the evolving skillsets required by modern IT — from automation design to vendor management — and explored strategies for sustaining high-functioning, collaborative teams, including workforce agility and continuous learning. 

Amid the excitement around innovation, the summit also explored the environmental impact of emerging technologies. In a session focused on Harvard’s leadership in IT sustainability — as part of its broader Sustainability Action Plan — presenters explored how even small individual actions, like crafting more effective prompts, can meaningfully reduce the processing demands of AI systems. As one panelist noted, “Harvard has embraced AI, and with that comes the responsibility to understand and thoughtfully assess its impact.” 




Tennis players criticize AI technology used by Wimbledon

Some tennis players are not happy with Wimbledon’s new AI line judges, as reported by The Telegraph. 

This is the first year the prestigious tennis tournament, which is still ongoing, has replaced human line judges, who determine whether a ball is in or out, with an electronic line calling (ELC) system.

Numerous players have criticized the AI technology, mostly for making incorrect calls that cost them points. Notably, British tennis star Emma Raducanu called out the technology for missing a ball that her opponent hit out, which instead had to be played as if it were in. On a television replay, the ball indeed looked out, The Telegraph reported.

Jack Draper, the British No. 1, also said he felt some line calls were wrong, saying he did not think the AI technology was “100 percent accurate.”

Player Ben Shelton had to speed up his match after being told that the new AI line system was about to stop working because of the dimming sunlight. Elsewhere, players said they couldn’t hear the new automated speaker system, and one deaf player said that without the line judges’ hand signals, she was unable to tell whether she had won a point.

The technology also hit a glitch at a key point during a match this weekend between British player Sonay Kartal and the Russian Anastasia Pavlyuchenkova: a ball went out, but the system made no call. The umpire had to step in to stop the rally and told the players to replay the point because the ELC had failed to track it. Wimbledon later apologized, attributing the failure to “human error”: the technology had been accidentally shut off during the match. It also adjusted the system so that, ideally, the mistake could not be repeated.

Debbie Jevans, chair of the All England Club, the organization that hosts Wimbledon, hit back at Raducanu and Draper, saying, “When we did have linesmen, we were constantly asked why we didn’t have electronic line calling because it’s more accurate than the rest of the tour.” 

We’ve reached out to Wimbledon for comment.

This is not the first time the AI technology has come under fire as tennis tournaments continue to either partially or fully adopt automated systems. Alexander Zverev, a German player, called out the same automated line judging technology back in April, posting a picture to Instagram showing where a ball called in was very much out. 

The critiques reveal the friction in completely replacing humans with AI and make the case for why a human-AI balance may be necessary as more organizations adopt such technology. Just recently, the company Klarna said it was looking to hire human workers after previously pushing to automate their roles.


