Duke University pilot project examining pros and cons of using artificial intelligence in college

DURHAM, N.C. — As generative artificial intelligence tools like ChatGPT have become increasingly prevalent in academic settings, faculty and students have been forced to adapt.

The debut of OpenAI’s ChatGPT in 2022 spread uncertainty across the higher education landscape. Many educators scrambled to create new guidelines to keep academic dishonesty from becoming the norm, while others emphasized AI’s strengths as a learning aid.

As part of a new pilot with OpenAI, all Duke undergraduate students, as well as staff, faculty, and students across the University’s professional schools, gained free, unlimited access to ChatGPT-4o beginning June 2. The University also announced DukeGPT, a University-managed AI interface that connects users to resources for learning and research and ensures “maximum privacy and robust data protection.”

On May 23, Duke launched a new Provost’s Initiative to examine the opportunities and challenges AI brings to student life. The initiative will foster campus discourse on the use of AI tools and present recommendations in a report by the end of the fall 2025 semester.

The Chronicle spoke to faculty members and students to understand how generative AI is changing the classroom.


Embraced or banned

Although some professors are embracing AI as a learning aid, others have implemented blanket bans and expressed caution about AI’s implications for problem-solving and critical thinking.

David Carlson, associate professor of civil and environmental engineering, took a “lenient” approach to AI usage in the classroom. In his machine learning course, the primary learning objective is to utilize these tools to understand and analyze data.

Carlson permits his students to use generative AI as long as they are transparent about their purpose for using the technology.

“You take credit for all of (ChatGPT’s) mistakes, and you can use it to support whatever you do,” Carlson said.

He added that although AI tools are “not flawless,” they can help provide useful secondary explanations of lectures and readings.

Matthew Engelhard, assistant professor of biostatistics and bioinformatics, said he also adopted “a pretty hands-off approach” by encouraging the use of AI tools in his classroom.

“My approach is not to say you can’t use these different tools,” Engelhard said. “It’s actually to encourage it, but to make sure that you’re working with these tools interactively, such that you understand the content.”

Engelhard emphasized that the use of these tools should not prevent students from learning the fundamental principles “from the ground up.” He noted that students, under pressure to perform, have incentives to rely on AI as a shortcut. However, he said using such tools might be “short-circuiting the learning process for yourself.” He likened generative AI tools to calculators, noting that relying on a calculator can keep one from learning how addition works.

Like Engelhard, Thomas Pfau, Alice Mary Baldwin distinguished professor of English, believes that when students delegate learning to generative AI, they may lose the ability to evaluate how they receive information and whether it is valid.

“If you want to be a good athlete, you would surely not try to have someone else do the working out for you,” Pfau said.

Pfau recognized the role of generative AI in the STEM fields, but he believes that such technologies have no place in the humanities, where “questions of interpretation … are really at stake.” When students rely on AI to complete a sentence or finish an essay for them, they risk “losing (their) voice.” He added that AI use defeats the purpose of a university education, which is predicated on cultivating one’s personhood.

Henry Pickford, professor of German studies and philosophy, said that writing in the humanities serves the dual function of fostering “self-discovery” and “self-expression” for students. But with increased access to AI tools, Pickford believes students will treat writing as “discharging a duty” rather than working through intellectual challenges.

“(Students) don’t go through any kind of self-transformation in terms of what they believe or why they believe it,” Pickford said.

Additionally, the use of ChatGPT has broadened opportunities for plagiarism in his classes, leading him to adopt a stringent AI policy.

Faculty echoed similar concerns at an Aug. 4 Academic Council meeting, including Professor of History Jocelyn Olcott, who said that students who learn to use AI without personally exploring more “humanistic questions” risk being “replaced” by the technology in the future.

How faculty are adapting to generative AI

Many of the professors The Chronicle interviewed expressed difficulty in discerning whether students have used AI on standard assignments. Some are resorting to a range of alternative assessment methods to mitigate potential AI usage.

Carlson, who shared that he has trouble detecting student AI use in written or coding assignments, has introduced oral presentations to class projects, which he described as “very hard to fake.”

Pickford has also incorporated oral assignments into his class, including having students present arguments through spoken defense. He has added in-class exams to courses that previously relied solely on papers for grading.

“I have deemphasized the use of the kind of writing assignments that invite using ChatGPT because I don’t want to spend my time policing,” Pickford said.

However, he recognized that ChatGPT can prove useful in generating feedback throughout the writing process, such as when evaluating whether one’s outline is well-constructed.

A ‘tutor that’s next to you every single second’

Students noted that AI chatbots can serve as a supplemental tool to learning, but they also cautioned against over-relying on such technologies.

Junior Keshav Varadarajan said he uses ChatGPT to outline and structure his writing, as well as generate code and algorithms.

“It’s very helpful in that it can explain concepts that are filled with jargon in a way that you can understand very well,” Varadarajan said.

Varadarajan has found it difficult at times to internalize concepts when using ChatGPT because “you just go straight from the problem to the answer” without giving much thought to the problem. He acknowledged that while AI can provide shortcuts at times, students should ultimately bear the responsibility for learning and performing critical thinking tasks.

For junior Conrad Qu, ChatGPT is like a “tutor that’s next to you every single second.” He said that generative AI has improved his productivity and helped him better understand course materials.

Both Varadarajan and Qu agreed that AI chatbots come in handy during time crunches or when trying to complete tasks with little effort. However, they said they avoid using AI when it comes to content they are genuinely interested in exploring deeper.

“If it is something I care about, I will go back and really try to understand everything (and) relearn myself,” Qu said.

The future of generative AI in the classroom

As generative AI technologies continue evolving, faculty members have yet to reach consensus on AI’s role in higher education and whether its benefits for students outweigh the costs.

“To me, it’s very clear that it’s a net positive,” Carlson said. “Students are able to do more. Students are able to get support for things like debugging … It makes a lot of things like coding and writing less frustrating.”

Pfau is less optimistic about generative AI’s development, raising concerns that the next generation of high school graduates will come into the college classroom already too accustomed to chatbots. He added that many students find themselves at a “competitive disadvantage” when the majority of their peers are utilizing such tools.

Pfau placed the responsibility on students to decide whether the use of generative AI will contribute to their intellectual growth.

“My hope remains that students will have enough self-respect and enough curiosity about discovering who they are, what their gifts are, what their aptitudes are,” Pfau said. “… something we can only discover if we apply ourselves and not some AI system to the tasks that are given to us.”
___

This story was originally published by The Chronicle and distributed through a partnership with The Associated Press.


Copyright © 2025 by The Associated Press. All Rights Reserved.



Godfather of AI Geoffrey Hinton warns AI will create ‘massive unemployment’, and these workers will be hardest hit

Geoffrey Hinton, often called the “godfather of AI,” warned that artificial intelligence will trigger “massive unemployment and a huge rise in profits,” further deepening inequality between the rich and the poor. In a recent interview with the Financial Times, the Nobel Prize winner and former Google scientist said AI will be used by the wealthy to cut jobs and maximize profits.

“What’s actually going to happen is rich people are going to use AI to replace workers,” Hinton said. “It’s going to create massive unemployment and a huge rise in profits. It will make a few people much richer and most people poorer. That’s not AI’s fault, that is the capitalist system.”

During the interview, Hinton said industries that rely on routine tasks will be hardest hit, while highly skilled roles and healthcare could benefit. “If you could make doctors five times as efficient, we could all have five times as much health care for the same price,” he said in an earlier interview.

Godfather of AI’s disagreement with Sam Altman

Hinton also dismissed OpenAI CEO Sam Altman’s idea of a universal basic income, arguing it “won’t deal with human dignity” or replace the sense of value people get from work.

A survey from the New York Fed recently found that companies using AI are more likely to retrain workers than fire them, though layoffs are expected to rise.

Hinton reiterated concerns about AI’s risks, saying there is a 10% to 20% chance of the technology wiping out humanity after the emergence of superintelligence. He also warned AI could be misused to create bioweapons, adding that while China is taking the threat seriously, the Trump administration has resisted tighter regulation.

“We don’t know what is going to happen, we have no idea, and people who tell you what is going to happen are just being silly,” he said. “We are at a point in history where something amazing is happening, and it may be amazingly good, and it may be amazingly bad.”

Geoffrey Hinton on leaving Google

Geoffrey Hinton also explained why he left Google in 2023, rejecting reports that he quit to speak more freely about AI’s risks. “I left because I was 75, I could no longer program as well as I used to, and there’s a lot of stuff on Netflix I haven’t had a chance to watch,” he said.

While he mostly uses AI for research, Hinton admitted OpenAI’s ChatGPT even played a role in his personal life. He revealed that a former girlfriend used the chatbot “to tell me what a rat I was” during their breakup.

“I didn’t think I had been a rat, so it didn’t make me feel too bad,” he quipped.





Sam Altman or Elon Musk: AI godfather Geoffrey Hinton’s 6-word reply on who he trusts more

When asked to choose who he trusts more between Tesla CEO Elon Musk and OpenAI chief executive Sam Altman, AI “godfather” Geoffrey Hinton offered a different kind of answer. The choice seemed so difficult that Hinton didn’t use an analogy from science. Instead, he recalled a quote from Republican Senator Lindsey Graham, who was once asked to pick between two presidential candidates.

He remembered a moment from the 2016 presidential race when Graham was asked to choose between Donald Trump and Ted Cruz. Graham’s response, delivered with wry honesty, was a line Hinton had never forgotten: “It’s like being shot or poisoned.”

Hinton’s warning on AI’s destructive potential

Hinton was speaking to the Financial Times in an interview in which he sounded the alarm about AI’s potential dangers. Once a key figure in accelerating AI development, Hinton has shifted to expressing deep concerns about its future. He believes that AI poses a grave threat to humanity, arguing that the technology could soon help an average person create bioweapons.

“A normal person assisted by AI will soon be able to build bioweapons and that is terrible,” Hinton said, adding, “Imagine if an average person in the street could make a nuclear bomb.”

During the two-hour interview, Hinton discussed various topics, including AI’s “nuclear-level” threats, his own use of AI tools, and even how a chatbot contributed to his recent breakup. He also recently warned that AI could soon surpass human abilities, including the power to manipulate emotions by learning from vast datasets to influence feelings and behaviors more effectively than people.

Hinton’s concerns stem from his belief that AI is genuinely intelligent. He argues that “by any definition of intelligence, AI is intelligent.” Using several analogies, he explained that an AI’s experience of reality is not so different from a human’s.

“It seems very obvious to me. If you talk to these things and ask them questions, it understands,” Hinton stated. He added that there is “very little doubt in the technical community that these things will get smarter.”


Anthropic’s Claude restrictions put overseas AI tools backed by China in limbo

An abrupt decision by American artificial intelligence firm Anthropic to restrict service to Chinese-owned entities anywhere in the world has cast uncertainty over some Claude-dependent overseas tools backed by China’s tech giants.

After Anthropic’s notice on Friday that it would upgrade access restrictions to entities “more than 50 per cent owned … by companies headquartered in unsupported regions” such as China, regardless of where they are, Chinese users have fretted over whether they could still access the San Francisco-based firm’s industry-leading AI models.

While it remains unknown how many entities could be affected and how the restrictions would be implemented, anxiety has started to spread among some users.


Singapore-based Trae, an AI-powered code editor launched by Chinese tech giant ByteDance for overseas users, is a known user of OpenAI’s GPT and Anthropic’s Claude models. A number of users of Trae have raised the issue of refunds to Trae staff on developer platforms over concerns that their access to Claude would no longer be available.

Dario Amodei, CEO and co-founder of Anthropic, speaks at the International Network of AI Safety Institutes in San Francisco, November 20, 2024. Photo: AP

A Trae manager responded by saying that Claude was still available, urging users not to consider refunds “for the time being”. The company had just announced a premium “Max Mode” on September 2, which boasted access to significantly more powerful coding abilities “fully supported” by Anthropic’s Claude models.

Other Chinese tech giants offer Claude on their coding agents marketed to international users, including Alibaba Group Holding’s Qoder and Tencent Holdings’ CodeBuddy, which is still being beta tested. Alibaba owns the South China Morning Post.

ByteDance and Trae did not respond to requests for comment.

Amid the confusion, some Chinese AI companies have taken the opportunity to woo disgruntled users. Start-up Z.ai, formerly known as Zhipu AI, said in a statement on Friday that it was offering special deals to Claude application programming interface (API) users to move over to its models.

Anthropic’s decision to restrict access to China-owned entities is the latest evidence of an increasingly divided AI landscape.

In China, AI applications and tools for the domestic market are almost exclusively based on local models, as the government has not approved any foreign large language model for Chinese users.

Anthropic faced pressure to take action as a number of Chinese companies have established subsidiaries in Singapore to access US technology, according to a report by The Financial Times on Friday.

Anthropic’s flagship Claude AI models are best known for their strong coding capabilities. The company’s CEO Dario Amodei has repeatedly called for stronger controls on exports of advanced US semiconductor technology to China.

Anthropic completed a US$13 billion funding round in the past week that tripled its valuation to US$183 billion. On Wednesday, the company said its software development tool Claude Code, launched in May, was generating more than US$500 million in run-rate revenue, with usage increasing more than tenfold in three months.

The firm’s latest Claude Opus 4.1 coding model achieved an industry-leading score of 74.5 per cent on SWE-bench Verified, a human-validated subset of the SWE-bench large language model benchmark that is intended to evaluate AI models’ capabilities more reliably.

This article originally appeared in the South China Morning Post (SCMP), the most authoritative voice reporting on China and Asia for more than a century. Copyright © 2025 South China Morning Post Publishers Ltd. All rights reserved.
