Tools & Platforms
Duke University pilot project examining pros and cons of using artificial intelligence in college

DURHAM, N.C. — As generative artificial intelligence tools like ChatGPT have become increasingly prevalent in academic settings, faculty and students have been forced to adapt.
The debut of OpenAI’s ChatGPT in 2022 spread uncertainty across the higher education landscape. Many educators scrambled to create new guidelines to prevent academic dishonesty from becoming the norm in academia, while some emphasized the strengths of AI as a learning aid.
As part of a new pilot with OpenAI, all Duke undergraduate students, as well as staff, faculty, and students across the University’s professional schools, gained free, unlimited access to ChatGPT-4o beginning June 2. The University also announced DukeGPT, a University-managed AI interface that connects users to resources for learning and research and ensures “maximum privacy and robust data protection.”
On May 23, Duke launched a new Provost’s Initiative to examine the opportunities and challenges AI brings to student life. The initiative will foster campus discourse on the use of AI tools and present recommendations in a report by the end of the fall 2025 semester.
The Chronicle spoke to faculty members and students to understand how generative AI is changing the classroom.
Embraced or banned
Although some professors are embracing AI as a learning aid, others have implemented blanket bans and expressed caution regarding the implications of AI on problem-solving and critical thinking.
David Carlson, associate professor of civil and environmental engineering, took a “lenient” approach to AI usage in the classroom. In his machine learning course, the primary learning objective is to utilize these tools to understand and analyze data.
Carlson permits his students to use generative AI as long as they are transparent about their purpose for using the technology.
“You take credit for all of (ChatGPT’s) mistakes, and you can use it to support whatever you do,” Carlson said.
He added that although AI tools are “not flawless,” they can help provide useful secondary explanations of lectures and readings.
Matthew Engelhard, assistant professor of biostatistics and bioinformatics, said he also adopted “a pretty hands-off approach” by encouraging the use of AI tools in his classroom.
“My approach is not to say you can’t use these different tools,” Engelhard said. “It’s actually to encourage it, but to make sure that you’re working with these tools interactively, such that you understand the content.”
Engelhard emphasized that the use of these tools should not prevent students from learning the fundamental principles “from the ground up.” Engelhard noted that students, under the pressure to perform, have incentives to rely on AI as a shortcut. However, he said using such tools might be “short-circuiting the learning process for yourself.” He likened generative AI tools to calculators, highlighting that relying on a calculator hinders one from learning how addition works.
Like Engelhard, Thomas Pfau, Alice Mary Baldwin Distinguished Professor of English, believes that delegating learning to generative AI means students may lose the ability to evaluate both the process by which they receive information and its validity.
“If you want to be a good athlete, you would surely not try to have someone else do the working out for you,” Pfau said.
Pfau recognized the role of generative AI in the STEM fields, but he believes that such technologies have no place in the humanities, where “questions of interpretation … are really at stake.” When students rely on AI to complete a sentence or finish an essay for them, they risk “losing (their) voice.” He added that AI use defeats the purpose of a university education, which is predicated on cultivating one’s personhood.
Henry Pickford, professor of German studies and philosophy, said that writing in the humanities serves the dual function of fostering “self-discovery” and “self-expression” for students. But with increased access to AI tools, Pickford believes students will treat writing as “discharging a duty” rather than working through intellectual challenges.
“(Students) don’t go through any kind of self-transformation in terms of what they believe or why they believe it,” Pickford said.
Additionally, the use of ChatGPT has broadened opportunities for plagiarism in his classes, leading him to adopt a stringent AI policy.
Faculty echoed similar concerns at an Aug. 4 Academic Council meeting, including Professor of History Jocelyn Olcott, who said that students who learn to use AI without personally exploring more “humanistic questions” risk being “replaced” by the technology in the future.
How faculty are adapting to generative AI
Many of the professors The Chronicle interviewed expressed difficulty in discerning whether students have used AI on standard assignments. Some are resorting to a range of alternative assessment methods to mitigate potential AI usage.
Carlson, who shared that he has trouble detecting student AI use in written or coding assignments, has introduced oral presentations to class projects, which he described as “very hard to fake.”
Pickford has also incorporated oral assignments into his classes, including having students defend their arguments aloud, and has added in-class exams to courses that previously relied solely on papers for grading.
“I have deemphasized the use of the kind of writing assignments that invite using ChatGPT because I don’t want to spend my time policing,” Pickford said.
However, he recognized that ChatGPT can prove useful in generating feedback throughout the writing process, such as when evaluating whether one’s outline is well-constructed.
A ‘tutor that’s next to you every single second’
Students noted that AI chatbots can serve as a supplemental tool to learning, but they also cautioned against over-relying on such technologies.
Junior Keshav Varadarajan said he uses ChatGPT to outline and structure his writing, as well as generate code and algorithms.
“It’s very helpful in that it can explain concepts that are filled with jargon in a way that you can understand very well,” Varadarajan said.
Varadarajan has found it difficult at times to internalize concepts when using ChatGPT because “you just go straight from the problem to the answer” without giving the problem much thought. He acknowledged that while AI can provide shortcuts, students should ultimately bear the responsibility for learning and critical thinking.
For junior Conrad Qu, ChatGPT is like a “tutor that’s next to you every single second.” He said that generative AI has improved his productivity and helped him better understand course materials.
Both Varadarajan and Qu agreed that AI chatbots come in handy during time crunches or when trying to complete tasks with little effort. However, they said they avoid using AI when it comes to content they are genuinely interested in exploring deeper.
“If it is something I care about, I will go back and really try to understand everything (and) relearn myself,” Qu said.
The future of generative AI in the classroom
As generative AI technologies continue evolving, faculty members have yet to reach consensus on AI’s role in higher education and whether its benefits for students outweigh the costs.
“To me, it’s very clear that it’s a net positive,” Carlson said. “Students are able to do more. Students are able to get support for things like debugging … It makes a lot of things like coding and writing less frustrating.”
Pfau is less optimistic about generative AI’s development, raising concerns that the next generation of high school graduates will arrive in college classrooms already accustomed to relying on chatbots. He added that many students find themselves at a “competitive disadvantage” when the majority of their peers are utilizing such tools.
Pfau placed the responsibility on students to decide whether the use of generative AI will contribute to their intellectual growth.
“My hope remains that students will have enough self-respect and enough curiosity about discovering who they are, what their gifts are, what their aptitudes are,” Pfau said. “… something we can only discover if we apply ourselves and not some AI system to the tasks that are given to us.”
___
This story was originally published by The Chronicle and distributed through a partnership with The Associated Press.
Agentic AI, Fintech Innovation, and Ethical Risks

The Rise of Agentic AI in 2025
As the technology sector gears up for 2025, industry leaders are focusing on transformative shifts driven by artificial intelligence, particularly the emergence of agentic AI systems. These autonomous agents, capable of planning and executing complex tasks without constant human oversight, are poised to redefine operational efficiencies across enterprises. According to a recent analysis from McKinsey, agentic AI ranks among the top trends, enabling “virtual coworkers” that handle everything from data analysis to strategic decision-making.
This evolution builds on the generative AI boom of previous years, but agentic systems introduce a layer of independence that could slash costs and accelerate innovation. Insiders note that companies like Google and Microsoft are already integrating these capabilities into their cloud platforms, signaling a broader industry pivot toward AI that acts rather than just generates.
Monetizing AI Infrastructure Amid Surging Demand
Cloud giants such as Amazon, Google, and Microsoft have subsidized AI development to attract builders, but 2025 is expected to mark a turning point toward aggressive monetization. Posts found on X highlight this shift, with predictions that these firms will capitalize on the explosive demand for AI infrastructure, potentially driving significant revenue growth. For instance, TechCrunch reports on how startups and enterprises are increasingly reliant on these platforms, fueling a market projected to reach trillions.
The push comes as AI applications expand into IoT, blockchain, and 5G integrations, creating hybrid ecosystems that enhance real-time business operations. However, challenges like data governance and compliance loom large, with BigID’s insights via X emphasizing the need for robust strategies to manage AI-related risks.
Fintech Disruption and Digital Banking Evolution
Fintech is set to disrupt traditional sectors further in 2025, with digital banks rapidly gaining ground through AI-driven personalization and seamless services. X discussions point to a $70 trillion wealth transfer boosting assets under management for registered investment advisors, while innovations in decentralized finance leverage blockchain for secure, efficient transactions. CNBC covers how companies like those in Silicon Valley are leading this charge, integrating AI for fraud detection and customer engagement.
Emerging sectors such as AI-driven diagnostics and telemedicine are also on the rise, as noted in trends from UpGrad, promising to revolutionize healthcare delivery. Yet, regulatory hurdles, including new rules on data privacy and cybersecurity, could temper this growth, requiring fintech players to navigate a complex web of compliance demands.
Sustainability and Energy Innovations Take Center Stage
Sustainability emerges as a core theme, with small nuclear reactors and decentralized renewable energy addressing the power needs of AI data centers. X posts underscore the potential of these technologies to provide clean energy, projecting a 15% increase in capacity by 2030. WIRED explores how this aligns with broader environmental goals, as tech firms face pressure to reduce carbon footprints amid climate-driven challenges like urban density increasing pest infestations—a macro tailwind for related industries.
Bio-based materials and agri-tech manufacturing are gaining traction, fostering micro-factories that minimize waste. Industry insiders, as reported in ITPro Today, predict these innovations will drive revenue growth for forward-thinking companies, much like Tesla’s impact on electric vehicles.
Navigating Challenges in a Quantum-Leap Era
The IT industry in 2025 will grapple with quantum computing’s potential, which could revolutionize fields like cryptography and materials science. Gartner, via insights shared on X, highlights agentic AI’s role in this, but warns of cybersecurity threats from advanced attacks. Reuters details ongoing concerns, including the fight against deepfakes through AI watermarking, estimated to save billions in trust-related losses.
Mental health apps and 3D printing for goods represent niche growth areas, blending technology with human-centric solutions. As Fox Business notes, these trends underscore the need for ethical AI deployment, ensuring innovations benefit society without exacerbating inequalities.
Strategic Imperatives for Tech Executives
For executives, the key lies in balancing innovation with risk management. Ad Age discusses how brands are adopting AI for marketing, including revenue-sharing models with publishers like those piloted by Perplexity. Remote work’s permanence, as per X trends, demands AI tools for collaboration, while sustainability mandates investment in green tech.
Ultimately, 2025’s tech environment promises unprecedented opportunities, but success hinges on adaptive strategies. Companies that integrate AI with disciplined risk management, sustainability commitments and human-centric design will be best positioned to capture them.
Anthropic Bans Chinese Entities from Claude AI Over Security Risks

In a move that underscores escalating tensions in the global artificial intelligence arena, Anthropic, the San Francisco-based AI startup backed by tech giants like Amazon, has tightened its service restrictions to exclude companies majority-owned or controlled by Chinese entities. This policy update, effective immediately, extends beyond China’s borders to include overseas subsidiaries and organizations, effectively closing what the company described as a loophole in access to its Claude chatbot and related AI models.
The decision comes amid growing concerns over national security, with Anthropic citing risks that its technology could be co-opted for military or intelligence purposes by adversarial nations. As reported by Japan Today, the company positions itself as a guardian of ethical AI development, emphasizing that the restrictions target “authoritarian regions” to prevent misuse while promoting U.S. leadership in the field.
Escalating Geopolitical Frictions in AI Access
This clampdown is not isolated but part of a broader pattern of U.S. tech firms navigating the fraught U.S.-China relationship. Anthropic’s terms of service now prohibit access for entities where more than 50% ownership traces back to Chinese control, a threshold that could impact major players like ByteDance, Tencent, and Alibaba, even through their international arms. Industry observers note this as a first-of-its-kind explicit ban in the AI sector, potentially setting a precedent for competitors.
According to Tom’s Hardware, the policy cites “legal, regulatory, and security risks,” including the possibility of data coercion by foreign governments. This reflects heightened scrutiny from U.S. regulators, who have increasingly viewed AI as a strategic asset akin to semiconductor technology, where export controls have already curtailed shipments to China.
Implications for Global Tech Ecosystems and Innovation
For Chinese-owned firms operating globally, the restrictions could disrupt operations reliant on advanced AI tools, forcing a pivot to domestic alternatives or open-source options. Posts on X highlight a mix of sentiments, with some users decrying it as an attempt to monopolize AI development in a “unipolar world,” while others warn of retaliatory measures that might accelerate China’s push toward self-sufficiency in AI.
Anthropic’s move aligns with similar actions in the tech industry, such as restrictions on chip exports, which have spurred Chinese innovation in areas like Huawei’s Ascend processors. As detailed in coverage from MediaNama, this policy extends to other unsupported regions like Russia, North Korea, and Iran, but the focus on China underscores the AI arms race’s intensity.
Industry Reactions and Potential Ripple Effects
Executives and analysts are watching closely to see if rivals like OpenAI or Google DeepMind follow suit, potentially forgoing significant revenue streams. One X post from a technology commentator suggested this could pressure competitors into similar decisions, given the geopolitical stakes, while another lamented the fragmentation of global AI access, arguing it denies “AI sovereignty” to nations outside the U.S. sphere.
The financial backing of Anthropic—valued at over $18 billion—includes heavy investments from Amazon and Google, which may influence its alignment with U.S. interests. Reports from The Manila Times indicate that the company frames this as a proactive step to safeguard democratic values, but critics argue it could stifle international collaboration and innovation.
Navigating Future Uncertainties in AI Governance
Looking ahead, this development raises questions about the balkanization of AI technologies, where access becomes a tool of foreign policy. Industry insiders speculate that Chinese firms might accelerate investments in proprietary models, as evidenced by recent open-source releases that challenge Western dominance. Meanwhile, Anthropic’s stance could invite scrutiny from antitrust regulators, who might view it as consolidating power among U.S. players.
Ultimately, as the AI sector evolves, such restrictions highlight the delicate balance between security imperatives and the open exchange that has driven technological progress. With ongoing U.S. sanctions and China’s rapid advancements, the coming years may see a more divided global AI ecosystem, where strategic decisions like Anthropic’s redefine competitive boundaries and influence the trajectory of innovation worldwide.
Community Editorial Board: Considering Colorado’s AI law

Members of our Community Editorial Board, a group of community residents who are engaged with and passionate about local issues, respond to the following question: During the recent special session, Colorado legislators failed to agree on an update to the state’s yet-to-be-implemented artificial intelligence law, despite concerns from the tech industry that the current law will make compliance onerous. Your take?
Colorado’s artificial intelligence law, passed in 2024 but not yet in effect, aims to regulate high-risk AI systems by requiring companies to assess risk, disclose how AI is used and avoid discriminatory outcomes. But as its 2026 rollout approaches, tech companies and Governor Polis argue the rules are too vague and costly to implement. Polis has pushed for a delay to preserve Colorado’s competitiveness, and the Trump administration’s AI Action Plan has added pressure by threatening to withhold federal funds from states with “burdensome” AI laws. The failure to update the law reflects a deeper tension: how to regulate fast-moving technology without undercutting economic growth.
Progressive lawmakers want people to have rights to see, correct and challenge the data that AI systems use against them. If an algorithm denies you a job, a loan or health coverage, you should be able to understand why. On paper, this sounds straightforward. In practice, it runs into the way today’s AI systems actually work.
Large language models like ChatGPT illustrate the challenge. They don’t rely on fixed rules that can be traced line by line. Instead, they are trained on massive datasets and learn statistical patterns in language. Input text is broken into words or parts of a word (tokens), converted into numbers, and run through enormous matrices containing billions of learned weights. These weights capture how strongly tokens relate to one another and generate probabilities for what word is most likely to come next. From that distribution, the model picks an output, sometimes the top choice, sometimes a less likely one. In other words, there are two layers of uncertainty: first in the training data, which bakes human biases into the model, and then in the inference process, which selects from a range of outputs. The same input can therefore yield different results, and even when it doesn’t, there is no simple way to point to a specific line of data that caused the outcome. Transparency is elusive because auditing a model at this scale is less like tracing a flowchart and more like untangling billions of connections.
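To make the inference step concrete, here is a minimal Python sketch of how a next word can be chosen. The candidate tokens and scores are invented for illustration and are not drawn from any real model; the point is only that the output is sampled from a probability distribution, which is why the same input can produce different results.

```python
import math
import random

def softmax(logits):
    """Convert raw model scores (logits) into probabilities that sum to 1."""
    shifted = [x - max(logits) for x in logits]
    exps = [math.exp(x) for x in shifted]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token candidates and scores for some prompt;
# both the tokens and the numbers are made up for illustration.
candidates = ["approved", "denied", "incomplete", "reviewed"]
logits = [2.1, 1.9, 0.3, 0.8]

probs = softmax(logits)
print({t: round(p, 2) for t, p in zip(candidates, probs)})

# Sampling from the distribution: the top choice usually wins, but a
# less likely token can be drawn, so repeated runs can give different answers.
for _ in range(3):
    print(random.choices(candidates, weights=probs, k=1)[0])
```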
These layers of uncertainty combine with two broader challenges. Research has not yet shown whether AI systems discriminate more or less than humans making similar decisions. The risks are real, but so is the uncertainty. And without federal rules, states are locked in competition. Companies can relocate to jurisdictions with looser standards. That puts Colorado in a bind: trying to protect consumers without losing its tech edge.
Here’s where I land: Regulating AI is difficult because neither lawmakers nor the engineers who build these systems can fully explain how specific outputs are produced. Still, in sensitive areas like housing, employment, or public benefits, companies should not be allowed to hide behind complexity. Full transparency may be impossible, but clear rules are not. Disclosure of AI use should be mandatory today, and liability should follow: If a system produces discriminatory results, the company should face lawsuits as it would for any other harmful product. It is striking that a technology whose outputs cannot be traced to clear causes is already in widespread use; in most industries, such a product would never be released, but AI has become too central to economic competitiveness to wait for full clarity. And since we lack evidence on whether AI is better or worse than human decision-making, banning it outright is not realistic. These models will remain an active area of research for years, and regulation will have to evolve with them. For now, disclosure should come first. The rest can wait, but delay must not become retreat.
Hernán Villanueva, chvillanuevap@gmail.com
Years ago, during a Senate hearing into Facebook, senators were grilling Mark Zuckerberg, and it was clear they had no idea how the internet works. One senator didn’t understand why Facebook had to run ads. It took Zuckerberg a minute to understand the senator’s question, as he couldn’t imagine anyone being that ignorant on the subject of the hearing! Yet these senators write and enact laws governing Facebook.
Society does a lot of that. Boulder does it with homelessness and climate change: its leaders understand neither, yet create and pass laws that, predictably, do nothing or sometimes make the problem worse. Colorado has done it before as well, when it enacted a law requiring renewable energy and listed hydrogen as an energy source. Hydrogen is only an energy source when it exists in free form, as it does in the sun. On Earth, hydrogen is always bound to another element, so it is not an energy source; it is an energy carrier. Colorado continued regulating things it doesn’t understand with the Colorado AI Act (CAIA), which shows a fundamental misunderstanding of how deep learning and large language models, the central technologies of AI today, work.
The incentive to control malicious AI behavior is understandable. If AI companies were creating such behavior deliberately, we should go after them. But they aren’t. Bias does exist in AI programs, though, and it comes from the data used to train the model. Biased in what way? Critics contend that loan screening is biased against people of color, even when an applicant’s race is not represented in the data. The bias isn’t based on race. It is more likely based on the applicant’s address, education or credit score. Banks want to weigh applicants on exactly these factors. Why? Because they correlate with the applicant’s ability to pay back the loan.
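That proxy effect can be seen in a small, entirely synthetic sketch: a screening rule that never looks at race, only at a credit score whose distribution differs across two made-up groups, still approves the groups at different rates. Every name and number here is invented for illustration, not taken from any real lender or dataset.

```python
import random

random.seed(0)

# Entirely synthetic applicants: the screening rule never sees "group",
# only a credit score, but the score is distributed differently across
# groups because of an invented historical disparity.
def make_applicant(group):
    mean = 700 if group == "A" else 650  # illustrative gap only
    return {"group": group, "credit_score": random.gauss(mean, 40)}

applicants = [make_applicant(g) for g in ("A", "B") for _ in range(5000)]

def approve(applicant):
    # A "race-blind" rule: approve anyone with a score of 680 or higher.
    return applicant["credit_score"] >= 680

for group in ("A", "B"):
    members = [a for a in applicants if a["group"] == group]
    rate = sum(approve(a) for a in members) / len(members)
    print(f"Group {group} approval rate: {rate:.1%}")
```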
If the CAIA makes it impossible for banks to legally use AI to screen loan applicants, are we better off? Have we eliminated bias? Absolutely not. If a human is involved, we have bias. In fact, our only hope to eliminate bias is with AI, though we aren’t there yet because of the aforementioned data issue. So we’d still have bias, but now loans would take longer to process.
Today, there is little demand for ditch diggers. We have backhoes and bulldozers that handle most of that work. These inventions put a lot of ditch diggers out of work. Are we, as a society, better for these inventions? I think so. AI might be fundamentally different from heavy equipment, but it might not be. AI is a tool that can help eliminate drudgery. It can speed up the reading of X-rays and CT scans, thereby giving us better medical care. AI won’t be perfect. Nothing created by humans can be. But we need to be cautious in slowing the development of these life-transforming tools.
Bill Wright, bill@wwwright.com