Education
East Meets West: Reimagining education in the age of AI
AI is transforming education worldwide, prompting a bold reimagining of how we learn, teach, and preserve the human connection at the heart of every classroom.
At the WSIS+20 High-Level Event in Geneva, the session ‘AI (and) education: Convergences between Chinese and European pedagogical practices’ brought together educators, students, and industry experts to examine how AI reshapes global education.
Led by Jovan Kurbalija of Diplo and Professor Hao Liu of Beijing Institute of Technology (BIT), with industry insights from Deloitte’s Norman Sze, the discussion focused on the future of universities and the evolving role of professors amid rapid AI developments.
Drawing on philosophical traditions from Confucius to Plato, the session emphasised the need for a hybrid approach that preserves the human essence of learning while embracing technological transformation.
Professor Liu showcased BIT’s ‘intelligent education’ model, a human-centred system integrating time, space, knowledge, teachers, and students. Moving beyond rigid, exam-focused instruction, BIT promotes creativity and interdisciplinary learning, empowering students with flexible academic paths and digital tools.
Meanwhile, Norman Sze highlighted how AI has accelerated industry workflows and called for educational alignment with real-world demands. He argued for reorienting learning around critical thinking, ethical literacy, and collaboration—skills that AI cannot replicate and remain central to personal and professional growth.
A key theme was whether teachers and universities remain relevant in an AI-driven future. Students from around the world contributed compelling reflections: AI may offer efficiency, but it cannot replace the emotional intelligence, mentorship, and meaning-making that only human educators provide.
As one student said, ‘I don’t care about ChatGPT—it’s not human.’ The group reached a consensus: professors must shift from ‘sages on the stage’ to ‘guides on the side,’ coaching students through complexity rather than merely transmitting knowledge.
The session closed on an optimistic note, asserting that while AI is a powerful catalyst for change, the heart of education lies in human connection, dialogue, and the ability to ask the right questions. Participants agreed that a truly forward-looking educational model will emerge not from choosing between East and West or human and machine, but from integrating the best of all to build a more inclusive and insightful future of learning.
Education
Educators lack clarity on how to deal with AI in classrooms
An artificial intelligence furore that’s consuming Singapore’s academic community reveals how we’ve lost the plot over the role the hyped-up technology should play in higher education.
A student at Nanyang Technological University said in a Reddit post that she used a digital tool to alphabetize her citations for a term paper. When the paper was flagged for typos, she was accused of breaking the rules governing the use of generative AI for the assignment. The dispute snowballed when two more students came forward with similar complaints, one alleging that she was penalized for using ChatGPT to help with initial research, even though she says she did not use the bot to draft the essay.
The school, which publicly states it embraces AI for learning, initially defended its zero-tolerance stance in this case in statements to local media. But internet users rallied around the original Reddit poster and rejoiced at an update that she won an appeal to rid her transcript of the ‘academic fraud’ label.
It may sound like a run-of-the-mill university dispute. But there’s a reason the saga went so viral, garnering thousands of upvotes and heated opinions from online commentators. It has laid bare the strange new world we’ve found ourselves in, as students and faculty rush to figure out how AI should or shouldn’t be used in universities.
It’s a global conundrum, but the debate has especially roiled Asia. Stereotypes of math nerds and tiger moms aside, a rigorous focus on tertiary studies is often credited for the region’s dramatic economic rise. The importance of education—and long hours of studying—is instilled from the earliest age. So how does this change in the AI era? The reality is that nobody has the answer yet.
Despite promises from ed-tech leaders that we’re on the cusp of ‘the biggest positive transformation that education has ever seen,’ the data on academic outcomes hasn’t kept pace with the technology’s adoption. There are no long-term studies on how AI tools affect learning and cognitive functions—and viral headlines suggesting the technology could make us lazy and dumb only add to the anxiety. Meanwhile, the race to not be left behind in implementing the technology risks turning an entire generation of developing minds into guinea pigs.
For educators navigating this moment, the answer is not to turn a blind eye. Even if some teachers discourage the use of AI, it has become all but unavoidable for many scholars doing research in the internet age.
Most Google searches now lead with automated summaries. Scrolling through these should not count as academic dishonesty. An informal survey of 500 Singaporean students from secondary school through university conducted by a local news outlet this year found that 84% were using products like ChatGPT for homework on a weekly basis.
In China, many universities are turning to AI cheating detectors, even though the technology is imperfect. Some students report on social media that they have to dumb down their writing to pass these checks, or shell out cash for the detection tools themselves to make sure their papers pass before submission.
It doesn’t have to be this way. The chaotic moment of transition puts a new onus on educators to adapt and to focus on the learning process as much as the final result, Yeow Meng Chee, the provost and chief academic and innovation officer at the Singapore University of Technology and Design, tells me. This does not mean villainizing AI, but treating it as a tool and ensuring a student understands how they arrived at their final conclusion, even if they used the technology. That process also helps ensure the AI outputs, which remain imperfect and prone to hallucinations (or typos), are checked and understood.
Ultimately, the professors who make the biggest difference aren’t those who improve exam scores but those who build trust, teach empathy and instil confidence in students to solve complex problems. The most important parts of learning still can’t be optimized by a machine.
The Singapore saga shows how on edge everyone is; it isn’t even clear whether a reference-sorting website counts as a generative AI tool. It also exposed another irony: saving time on a tedious task would likely be welcomed when the student enters the workforce—if the technology hasn’t already taken her entry-level job.
AI literacy is becoming a must-have in the labour market, and universities that ignore it would do a disservice to the student cohorts entering the real world.
We’re still a few years away from understanding the full impact of AI on teaching and how it can best be used in higher education. But let’s not miss the forest for the trees as we figure it out. ©Bloomberg
The author is a Bloomberg Opinion columnist covering Asia tech.
Education
Cambridge Judge Business School Executive Education launches four-month Cambridge AI Leadership Programme
Launched in collaboration with Emeritus, a provider of short courses, degree programmes, professional certificates, and senior executive programmes, the Cambridge Judge Business School Executive Education course is now available for a September 2025 start.
The Cambridge AI Leadership Programme aims to help participants navigate the complexities of AI adoption, identify scalable opportunities and build a strategic roadmap for successful implementation.
Using a blend of in-person and online learning, the course covers AI concepts, applications, and best practice to improve decision-making skills. It also covers digital transformation and ethical AI governance.
The programme is aimed at senior leaders looking to lead their organisations through transformation and integrate AI technologies.
“AI is a transformative force reshaping business strategy, decision-making and leadership. Senior executives must not only understand AI but also use it to drive business goals, efficiency and new revenue opportunities,” explains Professor David Stillwell, Co-Academic Programme Director.
“The Cambridge AI Leadership Programme offers a strategic road map, equipping leaders with the skills and mindset to integrate AI into their organisations and lead in an AI-driven world.”
“The Cambridge AI Leadership Programme empowers decision-makers to harness AI in ways that align with their organisation’s goals and prepare for the future,” says Vesselin Popov, Co-Academic Programme Director.
“Through a comprehensive learning experience, participants gain strategic insights and practical knowledge to drive transformation, strengthen decision-making and navigate technological shifts with confidence.”
Education
AI, Irreality and the Liberal Educational Project (opinion)
I work at Marquette University. As a Roman Catholic, Jesuit university, we’re called to be an academic community that, as Pope John Paul II wrote, “scrutinize[s] reality with the methods proper to each academic discipline.” That’s a tall order, and I remain in the academy, for all its problems, because I find that job description to be the best one on offer, particularly as we have the honor of practicing this scrutinizing along with ever-renewing groups of students.
This bedrock assumption of what a university is continues to give me hope for the liberal educational project, despite the ongoing neoliberalization of higher education and some administrators’ and educators’ willingness either to ignore or to uncritically celebrate the explosion of generative software (commonly referred to as “generative artificial intelligence”) over the last two years.
Since my last essay in Inside Higher Ed, and in my role as Marquette’s director of academic integrity, I’ve had plenty of time to think about this and to observe praxis. In contrast to that earlier essay, which was more philosophical, let’s get more practical here about how access to generative software is affecting higher education and our students, and what we might do differently.
At the academic integrity office, we recently had a case in which a student “found an academic article” by prompting ChatGPT to find one for them. The chat bot obeyed, as mechanisms do, and generated a couple pages of text with a title. This was not from any actual example of academic writing but instead was a statistically probable string of text having no basis in the real world of knowledge and experience. The student made a short summary of that text and submitted it. They were, in the end, not found in violation of Marquette’s honor code, since what they submitted was not plagiarized. It was a complex situation to analyze and interpret, done by thoughtful people who care about the integrity of our academic community: The system works.
In some ways, though, such activity is more concerning than plagiarism, for, at least when students plagiarize, they tend to know the ways they are contravening social and professional codes of conduct—the formalizations of our principles of working together honestly. In this case, the student didn’t see the difference between a peer-reviewed essay published by an academic journal and a string of probabilistically generated text in a chat bot’s dialogue box. To not see the difference between these two things—or not to care about that difference—is more disconcerting to me than straightforward breaches of an honor code, however harmful and sad such breaches are.
I already hear folks saying: “That’s why we need AI literacy!” We do need to educate our students (and our colleagues) on what generative software is and is not. But that’s not enough. Because one also needs to want to understand and, as is central to the Ignatian Pedagogical Paradigm that we draw upon at Marquette, one must understand in context.
Another case this spring term involved a student whom I had spent several months last fall teaching in a writing course that took “critical AI” as its subject matter. Yet this spring term the student still used a chat bot to “find a quote in a YouTube video” for an assignment and then commented briefly on that quote. The problem was that the quote used in the assignment does not appear in the selected video. It was a simulacrum of a quote; it was a string of probabilistically generated text, which is all generative software can produce. It did not accurately reflect reality, and the student did not cite the chat bot they’d copied and pasted from, so they were found in violation of the honor code.
Another student last term in the Critical AI class prompted Microsoft Copilot to give them quotations from an essay, which it mechanically and probabilistically did. They proceeded to base their three-page argument on these quotations, none of which said anything like what the author in question actually said (not even the same topic); their argument was based in irreality. We cannot scrutinize reality together if we cannot see reality. And many of our students (and colleagues) are, at least at times, not seeing reality right now. They’re seeing probabilistic text as “good enough” as, or conflated with, reality.
Let me point more precisely to the problem I’m trying to put my finger on. The student who had a chat bot “find” a quote from a video sent an email to me, which I take to be completely in earnest and much of which I appreciated. They ended the email by letting me know that they still think that “AI” is a really powerful and helpful tool, especially as it “continues to improve.” The cognitive dissonance between the situation and the student’s assertion took me aback.
Again: the problem with the “We just need AI literacy” argument. People tend not to learn what they do not want to learn. If our students (and people generally) do not particularly want to do work, and they have been conditioned by the use of computing and their society’s habits to see computing as an intrinsic good, “AI” must be a powerful and helpful tool. It must be able to do all the things that all the rich and powerful people say it does. It must not need discipline or critical acumen to employ, because it will “supercharge” your productivity or give you “10x efficiency” (whatever that actually means). And if that’s the case, all these educators telling you not to offload your cognition must be behind the curve, or reactionaries. At the moment, we can teach at least some people all about “AI literacy” and it will not matter, because such knowledge refuses to jibe with the mythology concerning digital technology so pervasive in our society right now.
If we still believe in the value of humanistic, liberal education, we cannot be quiet about these larger social systems and problems that shape our pupils, our selves and our institutions. We cannot be quiet about these limits of vision and questioning. Because not only do universities exist for the scrutinizing of reality with the various methods of the disciplines as noted at the outset of this essay, but liberal education also assumes a view of the human person that does not see education as instrumental but as formative.
The long tradition of liberal education, for all its complicity in social stratification down the centuries, assumes that our highest calling is not to make money, to live in comfort, to be entertained. (All three are all right in their place, though we must be aware of how our moneymaking, comfort and entertainment derive from the exploitation of the most vulnerable humans and the other creatures with whom we share the earth, and how they impact our own spiritual health.)
We are called to growth and wisdom, to caring for the common good of the societies in which we live—which at this juncture certainly involves caring for our common home, the Earth, and the other creatures living with us on it. As Antiqua et nova, the note released by the Vatican’s Dicastery for Culture and Education earlier this year (cited commendingly by secular ed-tech critics like Audrey Watters), reiterates, education plays its role in this by contributing “to the person’s holistic formation in its various aspects (intellectual, cultural, spiritual, etc.) … in keeping with the nature and dignity of the human person.”
These objectives of education are not being served by students using generative software to satisfy their instructors’ prompts. And no amount of “literacy” is going to ameliorate the situation on its own. People have to want to change, or to see through the neoliberal, machine-obsessed myth, for literacy to matter.
I do believe that the students I’ve referred to are generally striving for the good as they know how. On a practical level, I am confident they’ll go on to lead modestly successful lives as our society defines that term with regard to material well-being. I assume their motivation is not to cause harm or dupe their instructors; they’re taking part in “hustle” culture, “doing school” and possibly overwhelmed by all their commitments. Even if all this is indeed the case, liberal education calls us to more, and it’s the role of instructors and administrators to invite our students into that larger vision again and again.
If we refuse to give up on humanistic, liberal education, then what do we do? The answer is becoming clearer by the day, with plenty of folks all over the internet weighing in, though it is one many of us do not really want to hear. Because at least one major part of the answer is that we need to make an education genuinely oriented toward our students. A human-scale education, not an industrial-scale education (let’s recall over and over that computers are industrial technology). The grand irony of the generative software moment for education in neoliberal, late-capitalist society is that it is revealing so many of the limits we’ve been putting on education in the first place.
If we can’t “AI literacy” our educational problems away, we have to change our pedagogy. We have to change the ways we interact with our students inside the classroom and out: to cultivate personal relationships with them whenever possible, to model the intellectual life as something that is indeed lived out with the whole person in a many-partied dialogue stretching over millennia, decidedly not as the mere ability to move information around. This is not a time for dismay or defeat but an incitement to do the experimenting, questioning, joyful intellectual work many of us have likely wanted to do all along but have not had a reason to go off script for.
This probably means getting creative. Part of getting creative in our day probably means de-computing (as Dan McQuillan at the University of London labels it). To de-compute is to ask ourselves—given our ambient maximalist computing habits of the last couple of decades—what is of value in this situation? What is important here? And then: Does a computer add value here that it is not detracting from in some other way? Computers may help educators collect assignments neatly and read them clearly, but if educators must constantly wonder whether a student has simply copied and pasted or patch-written text with generative software, is that convenience worth the problems?
Likewise, getting creative in our day probably means looking at the forms of our assessments. If the highly structured student essay makes it easier for instructors to assess because of its regularity and predictability, yet that very regularity and predictability make it a form that chat bots can produce fairly readily, well: 1) the value for assessing may not be worth the problems of teeing up chat bot–ifiable assignments and 2) maybe that wasn’t the best form for inviting genuinely insightful and exciting intellectual engagement with our disciplines’ materials in the first place.
I’ve experimented with research journals rather than papers, with oral exams as structured conversations, with essays that focus intently on one detail of a text and do not need introductions and conclusions and that privilege the student’s own voice, and other in-person, handmade, leaving-the-classroom kinds of assessments over the last academic year. Not everything succeeded the way I wanted, but it was a lively, interactive year. A convivial year. A year in which mostly I did not have to worry about whether students were automating their educations.
We have a chance as educators to rethink everything in light of what we want for our societies and for our students; let’s not miss it because it’s hard to redesign assignments and courses. (And it is hard.) Let’s experiment, for our own sakes and for our students’ sakes. Let’s experiment for the sakes of our institutions that, though they are often scoffed at in our popular discourse, I hope we believe in as vibrant communities in which we have the immense privilege of scrutinizing reality together.