Are you cheating by using generative AI to complete your coursework?

As generative artificial intelligence tools like ChatGPT, Claude, and Gemini become increasingly accessible, college campuses across the country are grappling with a new academic dilemma: What happens when students turn to AI to write their essays?

It’s a salient question, but I didn’t write it. ChatGPT did, which is why the opening paragraph is bland and uninspiring. 

I asked the program to “write a news article about what college professors think of students who use generative artificial intelligence to write their essays,” and out spat 645 mostly grammatically correct words and paragraphs with subheads such as “A Divided Faculty,” because generative AI apparently has a need to prove some faculty defend it; “Detecting AI Use: A Growing Arms Race,” because AI apparently wants you to feel sorry for it because it’s under attack; “Students Say AI is a Tool—Not a Substitute,” because AI wants to make you feel better about resorting to it instead of your own intellect and hard work; and “Reimagining the Role of Writing in Education,” to drive home the point that AI is a genie out of its bottle and there’s no putting it back.

The AI-generated news article went on to quote college professors who don’t exist, like “Dr. Lisa Chen,” who it claims is a professor in UCLA’s Philosophy Department: “We have to rethink what we’re asking students to do. If AI can easily generate an answer, maybe the question wasn’t challenging enough in the first place.”

See! It’s not students’ fault they’re lazing out and resorting to AI to complete their assignments. It’s teachers’ fault for not creating assignments that AI is incapable of completing. 

The thing is, AI is incapable of writing acceptable college essays. It can’t unearth effective and compelling research from academically acceptable sources and properly cite them both in-text and on a correctly formatted works cited page or bibliography. It can’t insightfully analyze the research it presents and cogently synthesize the data into a thoroughly developed thesis. Worse still, AI hallucination—when AI produces information not based on real data—is a well-established side effect, and when students turn in essays with fake sources, data, quotations, and evidence, that should mean an instant F … if the teacher catches it.

Courtney Brogno, a college lecturer who does exist and who teaches writing at both Cal Poly and Cuesta College, said, “Sometimes it’s really obvious. A couple paragraphs will sound like a student’s voice and then all of a sudden the essay sounds like a Ph.D. dissertation. Citations are different, huge words, different font. Those are really easy to catch.”

Brogno admits, however, that some probably aren’t easy to catch. One quarter she paid out of her own pocket for an AI detection program.

“It took forever. I was spending more time on these essays looking [for evidence of cheating] than probably they had spent writing them,” Brogno said. “I just don’t have that time, not in a 10-week quarter, and not with six classes.”

Which brings up another point: students who resort to AI will quite possibly go undetected because most college teachers are overworked to the point of not caring. Instead, they believe that when students cheat, they’re cheating themselves out of educational experiences.

“I like to remind students that ‘essay’ comes from the verb ‘assay,’ meaning ‘to attempt or try,’” Cal Poly lecturer Lauren Henley explained. “We have since commodified the word into a noun, a product to be stamped with a letter grade. 

“When students use AI to generate or refine, they are slinking past ‘the try,’ and what’s worse is that they may be receiving high praise from unwitting instructors—a double thievery. I’m not a scientist, but I can surmise that bypassing ‘the try’ over and over while the brain is still forming can’t be good for the development of empathy, reasoning, and resilience, which our students will surely need for a future wherein the very AI they’ve been relying upon may likely steal or augment their jobs.” 

It also doesn’t help that few teachers believe their schools have a clearly articulated policy on AI use. Henley calls this current college AI experiment “the Wild West.”

“Did you know that Canvas [the student-teacher interface used at Cal Poly] has a ChatGPT EDU button?” English Department lecturer Leslie St. John asked. “[It says,] ‘Get answers, solve problems, generate content instantly.’ ‘Generate content’ … boy, that leaves a bad taste in my mouth. And ‘instantly’? There’s nothing ‘instant’ about the writing [and] creative process.”

Cal Poly has a webpage devoted to artificial intelligence, but it focuses more on the technology’s uses than on the ethical implications of using it.

“My thought is it’s cheating, obviously,” Brogno said, “but then we’re confused by Cal Poly, because Cal Poly has [ChatGPT accounts] for free for students. Are we saying this is OK now? What is our policy? Nobody seems to know. Nobody knows.”

“I’m pretty old-school,” St. John admitted. “I want students to learn to read, write, and think for themselves—not outsource their creativity and intelligence to ChatGPT. Right now, I’m urging them not to use it, but I recognize how normalized it’s become already.” 

Brogno can’t afford to play AI sleuth with every paper.

“When an essay’s turned in, I’m going to grade this as it is,” Brogno said. “I don’t have the time or energy to be a detective.”

She notes that a lot of sororities and fraternities have historically kept files of previously turned-in essays that various members recycle over the years and turn in as original work: “Maybe it’s no different than that. And those kids probably got away with it, too. I’ve just decided I’m not going to spend an hour on an essay trying to prove my point.”

The implications of this kind of cheating are vast. The kinds of classes Brogno, Henley, and St. John teach are designed to train students in critical thinking, research, and argumentation—essential skills for everyone. If students don’t gain these experiences and learn to think critically, recognize logical fallacies, and analyze arguments, they’ll be more susceptible to the disinformation inundating media.

“I think it’s terrible,” Brogno said. “I think it takes away critical thinking and independent thought. It’s everything that’s wrong with America.”

As an assignment, she asks her students to use ChatGPT to generate an MLA (Modern Language Association) formatted works cited page on a specific topic and bring it to class.

“They do it, and then they come in the next day, and I say, ‘I want you to check all these citations,’” Brogno said. “Eighty percent of them don’t exist. And the citations are wrong. Then they go looking for the article. They can’t find it. And I say, ‘Well, if you can’t find it, how am I supposed to find it to check up on you?’”

Like college teachers, college students tend to be overworked and overwhelmed, and cheating—or taking “shortcuts” if it makes you feel better—seems inevitable. 

“There are so many reasons why a young person without a fully formed frontal lobe would be drawn to shortcuts,” Henley said. “I think that in part we can blame a steady lowering of academic expectations over the past decade—if it’s uncomfortable or hard, you don’t have to do it—and concomitant grade inflation.”

Yet all these college teachers remain committed to their vocation.

“I’m going to do my job the best I can and try to make students better writers,” Brogno said.

The question is: Will students hold up their end of the bargain? ∆

Contact Arts Editor Glen Starkey at gstarkey@newtimesslo.com.

University Spinout TransHumanity secures £400k

TransHumanity Ltd., a spinout from Loughborough University, has secured approximately £400,000 in pre-seed investment. The round was led by SFC Capital, the UK’s most active seed-stage investor, with additional investment from Silicon Valley-based Plug and Play.

TransHumanity’s vision is to empower faster, smarter human decisions by transforming data into accessible intelligence using large language model-based agentic AI.

Agentic AI refers to artificial intelligence systems that collaborate with people to reach specific goals, understanding and responding in plain English. These systems use AI “agents” — models that can gather information, make suggestions, and carry out tasks in real time — helping people solve problems more quickly and effectively.

TransHumanity’s first product, AptIq, is designed to help transport authorities quickly analyse transport data and models, turning days of analysis into seconds. 

By simply asking questions in plain English, users can gain instant insights to support key initiatives like congestion reduction, road safety, creation of business cases and net-zero targets.

Dr Haitao He, Co-founder and Director of TransHumanity, said: “I am proud to see my rigorous research translated into trusted real-world AI innovation for the transport sector. With this investment, we can now realise my Future Leaders Fellowship vision, scaling a technology that empowers authorities across the UK to deliver integrated, net-zero transport.”

Developed from rigorous research by Dr Haitao He, a UKRI Future Leaders Fellow in Transport AI at Loughborough University, AptIq, previously known as TraffEase, has already garnered significant recognition. 

The technology was named a Top 10 finalist for the 2024 Manchester Prize for AI innovation and was recently highlighted as one of the Top 40 UK tech start-ups at London Tech Week by the UK Department for Business and Trade.

Adam Beveridge, Investment Principal at SFC Capital, said: “We are excited to back TransHumanity. The combination of cutting-edge research, a proven founding team, clear market demand, and positive societal impact makes this exactly the kind of high-growth venture we are committed to supporting.”

AptIq is currently in a test deployment with Nottingham City Council and Transport for Greater Manchester, with plans to expand to other city, regional, and national authorities across the UK within the next 12 months.

With a product roadmap that includes diverse data sources, advanced analytics, and full user control over the AI tool when required, interest from the transport sector is already high. Professor Nick Jennings, Vice-Chancellor and President of Loughborough University, noted: “I am delighted to see TransHumanity fast-tracked from lab to investment-ready spinout.

“This journey was accelerated by TransHumanity’s selection as a finalist in the prestigious Manchester Prize and shows what’s possible when the University’s ambition aligns with national innovation policy.”



Legal-Ready AI: 7 Tips for Engineers Who Don’t Want to Be Caught Flat-Footed

One oversimplified way I’ve explained wisdom in the past is to say, “We don’t know what we don’t know until we know it.” This absolutely applies to the fast-moving AI space, where unknowingly introducing legal and compliance risk through an organization’s use of AI is a top concern among IT leaders.

We’re now building systems that learn and evolve on their own, and that raises new questions along with new kinds of risk affecting contracts, compliance, and brand trust.

At Broadcom, we’ve adopted what I’d call a thoughtful ‘move smart and then fast’ approach. Every AI use case requires sign-off from both our legal and information security teams. Some folks may complain, saying it slows them down. But if you’re moving fast with AI and putting sensitive data at risk, you’re inviting trouble unless you also move smart.

Here are seven things I’ve learned about collaborating with legal teams on AI projects.

1. Partner with Legal Early On

Don’t wait until the AI service is built to bring legal in. There’s always the risk that choices you make about data, architecture, and system behavior can create regulatory headaches or break contracts later on.

Besides, legal doesn’t need every answer on day one. What they do need is visibility into the gray areas. What data are you using and producing? How does the model make decisions? Could those decisions shift over time? Walk them through what you’re building and flag the parts that still need figuring out.

2. Document Your Decisions as You Go

AI projects move fast, with teams making dozens of early decisions on everything from data sources to training logic. A few months later, chances are no one remembers why those choices were made. Then someone from compliance shows up with questions about those choices, and you’ve got nothing to point to.

To avoid that situation, keep a simple log as you work. Then, should a subsequent audit or inquiry occur, you’ll have something solid to help answer any questions.
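As a minimal sketch of what such a log could look like, assuming an append-only JSON Lines file and purely hypothetical field names rather than any prescribed standard, one option is a small helper like this:

```python
# Minimal decision-log sketch. Assumptions: an append-only JSON Lines file and
# hypothetical field names; adapt both to whatever your compliance team expects.
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    decision: str                                      # what was decided
    rationale: str                                     # why it was decided
    data_sources: list = field(default_factory=list)   # data the decision touches
    owner: str = "unknown"                             # who made the call
    timestamp: str = ""                                # filled in when logged

def log_decision(record: DecisionRecord, path: str = "ai_decision_log.jsonl") -> None:
    """Append one decision so a later audit or inquiry has something solid to point to."""
    record.timestamp = datetime.now(timezone.utc).isoformat()
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_decision(DecisionRecord(
    decision="Exclude support tickets from the training corpus",
    rationale="Tickets may contain personal data; legal review still pending",
    data_sources=["support_tickets"],
    owner="ml-platform-team",
))
```

Even a plain spreadsheet serves the same purpose; the point is that the record exists before anyone asks for it.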

3. Build Systems You Can Explain

Legal teams need to understand your system so they can explain it to regulators, procurement officers, or internal risk reviewers. If they can’t, there’s the risk that your project could stall or even fail after it ships.

I’ve seen teams consume SaaS-based AI services without realizing the provider could swap out a backend AI model without their knowledge. If that leads to changes in the system’s behavior behind the scenes, it could redirect your data in ways you didn’t intend. That’s one reason why you’ve got to know your AI supply chain, top to bottom. Ensure that services you build or consume have end-to-end auditability of the AI software supply chain. Legal can’t defend a system if they don’t understand how it works.
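One way to guard against that kind of silent swap, sketched here with a stand-in client rather than any specific vendor’s API, is to record which backend model actually served each request and refuse anything you haven’t approved:

```python
# Sketch: audit which backend model serves each request. `call_model` is a
# stand-in for your real provider client, and the response fields shown are
# assumptions, not a real vendor schema.
import logging

APPROVED_MODELS = {"example-model-v1.2"}   # versions legal and security signed off on

def call_model(prompt: str) -> dict:
    """Placeholder for the actual SaaS call; returns assumed metadata fields."""
    return {"text": "stub answer", "model_version": "example-model-v1.2"}

def audited_call(prompt: str) -> str:
    """Log the serving model and fail loudly if it is not on the approved list."""
    response = call_model(prompt)
    served_by = response["model_version"]
    logging.info("AI request served by %s", served_by)
    if served_by not in APPROVED_MODELS:
        raise RuntimeError(f"Unapproved backend model: {served_by}")
    return response["text"]

print(audited_call("Summarize last week's incident reports"))
```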

4. Watch Out for Shadow AI

Any engineer can subscribe to an AI service and accept the provider’s terms without realizing they don’t have the authority to do that on behalf of the company.

That exposes the organization to major risk. An engineer might accidentally agree to data-sharing terms that violate regulatory restrictions or expose sensitive customer data to a third party.

And it’s not just deliberate use anymore. Run a search in Google and you’re already getting AI output. It’s everywhere. The best way to avoid this is by building a culture where employees are aware of the legal boundaries. You can give teams a safe place to experiment, but at the same time, make sure you know what tools they’re using and what data they’re touching.

5. Help Legal Navigate Contract Language

AI systems get tangled in contract language; there are ownership rights, retraining rules, model drift, and more. Most engineers aren’t trained to spot those issues, but we’re the ones who understand how the systems behave.

That’s another reason why you’ve got to know your AI supply chain, top to bottom. In this case, legal needs our help reviewing vendor or customer agreements to put the contractual language into the appropriate technical context. What happens when the model changes? How are sensitive data sets safeguarded from being indexed or accessed via AI agents such as those that use Model Context Protocol (MCP)? We can translate the technical behavior into simple English—and that goes a long way toward helping the lawyers write better contracts.

6. Design with Auditability in Mind

AI is developing rapidly, with legal frameworks, regulatory requirements, and customer expectations evolving to keep pace. You need to be prepared for what might come next. 

Can you explain where your training data came from? Can you show how the model was tested for bias? Can you justify how it works? If someone from a regulatory body walked in tomorrow, would you be ready?

Design with auditability in mind. Especially when AI agents are chained together, you need to be able to prove that identity and access controls are enforced end-to-end. 
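As a minimal sketch, assuming a home-grown chain of agent steps rather than any particular framework, one way to make that provable is to thread an immutable audit context through every hop and check permissions at each step:

```python
# Sketch: propagate the original caller's identity and permissions through a
# chain of agent steps so access checks and logs hold end to end. The step
# functions and permission names are hypothetical placeholders.
import logging
from dataclasses import dataclass

@dataclass(frozen=True)
class AuditContext:
    user_id: str
    permissions: frozenset

def require(ctx: AuditContext, permission: str, step: str) -> None:
    """Record the access attempt and enforce the caller's permissions."""
    logging.info("step=%s user=%s permission=%s", step, ctx.user_id, permission)
    if permission not in ctx.permissions:
        raise PermissionError(f"{ctx.user_id} lacks '{permission}' for step '{step}'")

def gather_usage_data(ctx: AuditContext) -> str:
    require(ctx, "read:usage", "gather_usage_data")
    return "raw usage metrics"

def draft_summary(ctx: AuditContext, data: str) -> str:
    require(ctx, "use:llm", "draft_summary")
    return f"summary of {data}"

ctx = AuditContext(user_id="analyst-42", permissions=frozenset({"read:usage", "use:llm"}))
print(draft_summary(ctx, gather_usage_data(ctx)))
```

Because the context is created once and never mutated, every downstream step is judged against the original caller’s rights, not against whatever the previous agent happened to be allowed to do.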

7. Handle Customer Data with Care

We don’t get to make decisions on behalf of our customers about how their data gets used. It’s their data. And when it’s private, it shouldn’t be fed to a model. Period. 

You’ve got to be disciplined about what data gets ingested. If your AI tool indexes everything by default, that can get messy fast. Are you touching private logs or passing anything to a hosted model without realizing it? Support teams might need access to diagnostic logs, but that doesn’t mean third-party models should touch them. Tools that can generate comparable synthetic data, free of any private customer data, are evolving rapidly and could help with support use cases, for example, but these tools and techniques should be fully vetted with your legal and CISO organizations before you use them.
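As one small illustration of that discipline, here is a sketch, with purely hypothetical field names, of stripping private fields out of diagnostic records before anything reaches a hosted model:

```python
# Sketch: scrub private fields from diagnostic records before they go anywhere
# near a hosted model. The field names are hypothetical examples.
PRIVATE_FIELDS = {"customer_name", "email", "ip_address", "account_id"}

def scrub(record: dict) -> dict:
    """Return a copy of the record with private fields removed."""
    return {k: v for k, v in record.items() if k not in PRIVATE_FIELDS}

diagnostic = {
    "error_code": "E42",
    "component": "load-balancer",
    "email": "jane@example.com",   # must never reach a third-party model
}
print(scrub(diagnostic))           # {'error_code': 'E42', 'component': 'load-balancer'}
```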

The Reality

The engineering ethos is to move fast. But since safety and trust are on the line, you need to move smart, which means it’s okay if things take a little longer. The extra steps are worth it when they help protect your customers and your company.

Nobody has this all figured out. So ask questions and talk to people who’ve handled this kind of work before. The goal isn’t perfection—it’s to make smart, careful progress. For enterprises, the AI race isn’t a question of “Who’s best?” but rather “Who’s leveraging AI safely to drive the best business outcomes?”



Progress Unveils Subsidiary for AI-Driven Digital Upgrade

Progress Software, a company offering artificial intelligence-powered digital experience and infrastructure software, has launched Progress Federal Solutions, a wholly owned subsidiary that aims to deliver AI-powered technologies to the federal, defense and public sectors.

Progress Federal Solutions to Boost Digital Transformation

The company said Monday the new subsidiary, announced during the Progress Data Platform Summit at the International Spy Museum in Washington, D.C., is intended to fast-track federal agencies’ digital modernization efforts, meet compliance requirements, and advance AI and data initiatives. The subsidiary leverages the data management and integration expertise of MarkLogic, a platform that Progress Software acquired in 2023.

Progress Federal Solutions functions independently but will offer the company’s full technology portfolio, including Progress Data Platform, Progress Sitefinity, Progress Chef, Progress LoadMaster and Progress MOVEit. These will be available to the public sector through Carahsoft Technology‘s reseller partners and contract vehicles.

Remarks From Progress Federal Solutions, Carahsoft Executives 

“Federal and defense agencies are embracing data-centric strategies and modernizing legacy systems at a faster pace than ever. That’s why we focus on enabling data-driven decision-making, faster time to value and measurable ROI,” said Cori Moore, president of Progress Federal Solutions.

“Progress is a trusted provider of AI-enabled solutions that address complex data, infrastructure and digital experience needs. Their technologies empower government agencies to build high-impact applications, automate operations and scale securely to meet program goals,” said Michael Shrader, vice president of intelligence and innovative solutions at Carahsoft.




