Education
Navigating the interplay between artificial intelligence, philosophy, education, and governance
The rapid development of AI presents profound philosophical, educational, and governance challenges and opportunities for the global community. To address these dimensions, we will organise a discussion among young leaders and experts to explore how AI can be governed in a way that reflects shared human values and ensures inclusive, sustainable, and ethical development.
This event will be organised by DiploFoundation and Beijing Institute of Technology on the sidelines of the WSIS Forum and AI for Good Global Summit in Geneva, providing a timely opportunity to address the relevance of European, Chinese, and other philosophical and educational traditions to the AI era.
Objectives
- Examine the philosophical foundations and ethical implications of AI development and deployment.
- Explore the interplay between AI and education and discuss how to best integrate AI in educational systems to support critical thinking, creativity, adaptability, and prepare students for the AI era.
- Promote inclusive dialogue among policymakers, academia, youth, civil society, and private sector stakeholders.
- Provide a platform for youth and emerging scholars to engage in open dialogue on AI governance, share perspectives, and contribute to shaping inclusive and forward-looking AI policy discussions.
Program
14:00 – 14:10 | Opening Remarks
A brief welcome to set the stage for a half-day of insightful discussions on the multifaceted implications of AI in various sectors of society.
14:10 – 14:40 | AI: Development and Application
This session invites emerging scholars to share their perspectives, experiences, and aspirations regarding the development, deployment, and application of AI technologies. Through their insights, participants will explore divergent trajectories in AI innovation, evaluate its societal and environmental impacts—both positive and negative—and discuss the competencies and mindsets the younger generation must cultivate. Special attention will be given to those working at the intersection of science, technology, and engineering.
14:40 – 15:30 | AI and Education
Focusing on the transformative potential of AI in reshaping education systems, this session addresses the dual dimensions of “AI for Education” and “Education for AI.” Discussions will highlight how AI can enhance teaching, learning, and capacity-building, ensuring that educational goals remain central. Simultaneously, panelists and participants will consider necessary educational reforms to prepare learners and institutions for the AI era—ensuring that education not only adapts to AI but also guides its responsible use.
15:30 – 16:20 | AI Governance
This panel will delve into critical issues in AI governance, analysing differences across regulatory regimes and identifying areas for potential consensus in the near, medium, and long term. The session will provide a platform for reflecting on legal, ethical, and institutional frameworks essential for fostering accountable and inclusive AI governance.
16:20 – 16:40 | AI and Philosophy
This segment explores AI’s intersection with fundamental philosophical inquiries, including ethics, consciousness, and societal values.
16:40 – 17:00 | Closing Reflections and Refreshments
An opportunity to share final thoughts, reflect on the day’s discussions, and network with fellow participants in an informal setting.
Education
Anthropic Continues The Push For AI In Education
Let’s be honest. AI has already taken a seat in the classroom. Google, Microsoft, OpenAI, and Anthropic have all been pushing hard. Today brings more announcements from Anthropic, the company behind the AI chatbot Claude, adding even more momentum. The shift isn’t subtle anymore. It’s fast, loud, and happening whether schools are ready or not.
It’s not only big tech. The U.S. government is also driving efforts to integrate AI into education.
The Balance of Innovation and Safety
There’s real concern, and for good reason. Sure, the benefits are hard to ignore. AI tutoring, lighter workloads for teachers, more personalized learning paths for students. It all sounds great. But there’s a flip side. Missteps here could make existing education gaps worse. And once the damage is done, it’s tough to undo.
Many policymakers are stepping in early. They’re drafting ethical guardrails, pushing for equitable access, and starting to fund research into what responsible use of AI in education really looks like. Not as a PR move, but because the stakes are very real.
Meanwhile, the tech companies are sprinting. Google is handing out AI tools for schools at no cost, clearly aiming for reach. The strategy is simple: remove barriers and get in early. Just yesterday Microsoft, OpenAI, and Anthropic teamed up to build a national AI academy for teachers. An acknowledgment that it’s not the tools, but the people using them, that determine success. Teachers aren’t optional in this equation. They’re central.
Claude’s New Education Efforts
Claude for Education’s recent moves highlight what effective integration could look like. Its Canvas integration means students don’t need to log into another platform or juggle windows. Claude just works inside what they’re already using. That kind of invisible tech could be the kind that sticks.
Then there’s the Panopto partnership. Students can now access lecture transcripts directly in their Claude conversations. Ask a question about a concept from class and Claude can pull the relevant sections right away. No need to rewatch an entire lecture or scrub through timestamps. It’s like giving every student their own research assistant.
And they’ve gone further. Through Wiley, Claude can now pull from a massive library of peer-reviewed academic sources. That’s huge. AI tools are often criticized for producing shaky or misleading information. But with access to vetted, high-quality content, Claude’s answers become more trustworthy. In a world overflowing with misinformation, that matters more than ever.
Josh Jarrett, senior vice president of AI growth at Wiley, emphasized this: “The future of research depends on keeping high-quality, peer-reviewed content central to AI-powered discovery. This partnership sets the standard for integrating trusted scientific content with AI platforms.”
Claude for Education is building a grassroots movement on campuses, too. Its student ambassador program is growing fast, and new Claude Builder Clubs are popping up at universities around the world. Rather than being coding bootcamps or formal classes, they’re open spaces where students explore what they can actually make with AI: workshops, demo nights, and group builds.
These clubs are for everyone. Not just computer science majors. Claude’s tools are accessible enough that students in any field, from philosophy to marketing, can start building. That kind of openness helps make AI feel less like elite tech and more like something anyone can use creatively.
Privacy is a big theme here, too. Claude seems to be doing things right. Conversations are private, they’re not used for model training, and any data-sharing with schools requires formal approvals. Students need to feel safe using AI tools. Without that trust, none of this works long term.
At the University of San Francisco School of Law, students are working with Claude to analyze legal arguments, map evidence, and prep for trial scenarios. This is critical training for the jobs they’ll have after graduation. In the UK, Northumbria University is also leaning in. Its focus is on equity, digital access, and preparing students for a workplace that’s already being shaped by AI.
Graham Wynn, vice-chancellor for education at Northumbria University, puts the ethical side of AI front and center: “The availability of secure and ethical AI tools is a significant consideration for our applicants, and our investment in Claude for Education will position Northumbria as a forward-thinking leader in ethical AI innovation.”
They see tools like Claude not just as educational add-ons, but as part of a broader strategy to drive social mobility and reduce digital poverty. If you’re serious about AI in education, that’s the level of thinking it takes.
Avoiding Complexity and Closing Gaps
The core truth here is simple. AI’s role in education is growing whether we plan for it or not. The technology is getting more capable. The infrastructure is being built. But what still needs to grow is a culture of responsible use. The challenge for education isn’t chasing an even smarter tool, but ensuring the tools we have serve all students equally.
That means listening to educators. It means designing for inclusion from the ground up. It means making sure AI becomes something that empowers students, not just another layer of complexity.
The next few years will shape everything. If we get this right, AI could help close long-standing gaps in education. If we don’t, we risk deepening them in ways we’ll regret later.
This is more than a tech story. It’s a human one. And the decisions being made today will echo for a long time.
Education
AI can access your school courses
Using genAI software like ChatGPT for school makes perfect sense, considering how sophisticated the software has become. It’s not about cheating on exams or having the AI do your homework, though some people will use it that way. It’s about having an AI tutor that understands natural language and can guide you while you learn.
It’s like taking your professors home with you to explain the topics you’re still struggling with. Combined with human teachers, AI tools can make a real difference in education.
OpenAI is already working on a ChatGPT Study Together model that will act as an AI tutor, but you don’t have to wait for that product to launch. Anthropic is already ahead, having released a Claude for Education product back in April.
The AI firm is now ready to give Claude for Education a major upgrade. Anthropic on Wednesday announced new tools for Claude that let the AI access school courses and materials more easily, along with new university partnerships that will bring Claude to even more students.
Canvas, Panopto, and Wiley support
The current Learning Mode experience in Claude for Education involves turning the AI into a teacher-like persona. Instead of providing direct answers or solutions, Claude uses Socratic questioning to help students find the answers on their own.
“How would you approach this problem?” or “What evidence supports your conclusions?” are examples of questions Claude will ask in this mode.
The July update will let users give Claude more context by connecting it to three student-friendly data sources: Canvas, Panopto, and Wiley.
Claude will use MCP servers to gather information from Panopto and Wiley: Panopto offers lecture transcripts, while Wiley provides access to peer-reviewed content that can support learning with Claude. Canvas contains course materials, and Claude will also support Canvas LTI (Learning Tools Interoperability), letting students use the AI directly within their Canvas courses.
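For readers curious what the MCP plumbing looks like under the hood: MCP is an open protocol built on JSON-RPC 2.0, so a request to any connector (Panopto, Wiley, or otherwise) follows a standard message shape. The sketch below builds a `tools/call` request; the tool name `search_transcripts` and its arguments are hypothetical illustrations, not Anthropic’s actual connector API.

```python
import json

def mcp_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Serialize an MCP 'tools/call' request as a JSON-RPC 2.0 message."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# Hypothetical lecture-transcript lookup against a Panopto-style server.
msg = mcp_tool_call(1, "search_transcripts",
                    {"query": "Bayes' theorem", "course_id": "STAT101"})
print(json.loads(msg)["method"])  # tools/call
```

In practice the client and server also exchange `initialize` and `tools/list` messages first, so the client knows which tools a given MCP server exposes before calling them.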
New partnerships
Anthropic also announced two new partnerships with “forward-thinking institutions” that want to give students access to AI tools built for education. These schools are the University of San Francisco School of Law and Northumbria University.
The former is especially notable in a world where some lawyers have used AI in legal matters, only for it to fumble legal citations. Future lawyers need to learn how AI can be used effectively and where its limits are.
Dean Johanna Kalb explained how Claude will be used at the University of San Francisco School of Law to actually help students:
We’re excited to introduce students to the practical use of LLMs in litigation. One way we’re doing this is through our Evidence course, where this fall, students will gain direct experience applying LLMs to analyze claims and defenses, map evidence to elements of each cause of action, identify evidentiary gaps to inform discovery, and develop strategies for admission and exclusion of evidence at trial.
That’s certainly better than having genAI write your legal documents and risk hallucinating key details.
Finally, Anthropic is expanding its student ambassadors program, giving more passionate students the chance to contribute to the Claude community. Claude Builder Clubs will launch on campuses around the world, offering hackathons, workshops, and demo nights for students interested in AI.
Education
HBK trustee Harsh Kapadia shares vision for AI in education

New Delhi [India], July 9: Harsh Kapadia, Trustee of The HB Kapadia New High School, represented the institution at the prestigious Economic Times Annual Education Summit 2025 in New Delhi. The summit, themed “Fuelling the Education Economy with AI: The India Story”, brought together some of the country’s most influential voices in education, technology, and policymaking.
Sharing the stage with national leaders such as Sanjay Jain, Head of Google for Education, India, Aanchal Chopra, Regional Head, North, LinkedIn, Shishir Jaipuria, Chairman of Jaipuria Group of Schools, and Shantanu Prakash, Founder of Millennium Schools, Mr. Kapadia highlighted the critical role of Artificial Intelligence in shaping the future of Indian education.
In his remarks, Mr. Kapadia emphasised the urgent need to integrate AI into mainstream schooling. He also said that this will begin not with advanced algorithms but with teachers.
“AI does not begin with algorithms. It begins with empowered educators,” he said, calling for schools to prioritise teacher readiness alongside technological upgrades.
He elaborated on HBK’s progressive steps under its FuturEdge Program, a future-readiness initiative that integrates academics with emerging technologies and life skills.
“Artificial Intelligence will soon be as essential to education as electricity and the internet,” he said, emphasising that while AI is a powerful technological tool, its greatest impact lies in how teachers and students use it collaboratively. He noted that AI won’t replace teachers, but teachers who use AI will replace those who don’t.
His recommendations included weekly AI training periods for teachers, AI-infused school curriculum, infrastructure upgrades, and cross-industry collaborations to expose students to real-world applications of AI.
Mr. Kapadia shared that HBK has already begun incorporating AI into its school assemblies and is planning to introduce a dedicated “AI Period” in the academic calendar. The school is also conceptualising an annual “AI Fest” for students, where innovation and problem-solving will take centre stage. In terms of infrastructure, the school is actively upgrading classrooms with AI-enabled digital panels and computer labs designed for hands-on learning.
Calling for greater collaboration between schools and industry, Mr. Kapadia also proposed regular expert-led sessions with professionals from Google, LinkedIn, IBM, and AI startups.
Concluding his address, he reaffirmed HBK’s commitment to pioneering responsible and human-centred use of technology in education, saying, “AI is not a separate subject. It is a way of thinking, creating, and teaching. If we want future-ready students, we must begin with future-ready schools.”