The AFT launches a national academy for AI in New York

In a move aimed at bringing artificial intelligence into the heart of US classrooms, the American Federation of Teachers (AFT) has launched the National Academy for AI Instruction, a $23 million joint initiative with Microsoft, OpenAI, Anthropic, and the United Federation of Teachers (UFT).

The initiative, unveiled in New York City, aims to provide free, comprehensive AI training to all 1.8 million AFT members—starting with K-12 teachers—via a new physical and digital hub housed in Manhattan.

It marks the first major partnership between a US teachers’ union and the technology sector on this scale, offering a national model as educators worldwide grapple with how to adapt to the rapid rise of AI in classrooms.

The announcement comes amid growing global concern about the pace of AI adoption in education, with governments and unions in Canada, Australia, the UK, and Singapore all launching varying forms of AI literacy programs for teachers.

In the UK, the Department for Education has funded pilot projects to embed AI tools into school leadership and lesson planning. In South Korea, the government has pledged to provide AI education in all schools by 2027. But the US initiative stands out for its union-led structure and its strong public-private coalition.

“Educators are overwhelmed by the speed of change in AI,” said AFT President Randi Weingarten. “This academy puts them in the driver’s seat. It’s not about replacing teachers—it’s about giving them the tools and ethical frameworks to use AI to enhance what they already do best.”

The academy will operate from a purpose-built centre in New York, with plans to scale nationwide. Within five years, it aims to train 400,000 educators—roughly 10 per cent of the US teaching workforce—and reach more than 7 million students.

The curriculum will offer credentialed pathways and ongoing professional development, with both in-person and virtual components.

Educators as architects of AI

 

Brad Smith, vice chair and president of Microsoft, called the project “a model for responsible AI integration” in schools. “This partnership will not only help teachers learn to use AI—it gives them a voice in shaping how we build it,” he said.

Microsoft and the AFT began laying the groundwork for the initiative two years ago in collaboration with the AFL-CIO, through summer symposiums aimed at exploring AI’s role in labour and education.

OpenAI, whose technology underpins popular tools like ChatGPT, echoed the call for teachers to take the lead. “AI should be a coach, not a critic,” said Chris Lehane, chief global affairs officer. “This academy will ensure AI is being deployed to support the educator’s mission—not disrupt it.”

Anthropic, known for its AI model Claude, said the partnership reflects the urgency of responsible AI adoption in schools. “We’re at a pivotal moment,” said co-founder Jack Clark. “How we teach AI now will shape the next generation’s relationship with it.”

The curriculum will cover AI literacy, ethics, classroom applications, and workflow enhancements—from grading and lesson planning to generating differentiated instructional materials. Innovation labs will allow educators to co-design tools with AI developers, and feedback from classroom use will inform future updates.

For some teachers, the initiative is reminiscent of previous technological shifts. “It’s like when we first got word processors, but ten times bigger,” said Vincent Plato, a K–8 educator in New York City. “AI can become a teacher’s thought partner—especially when you’re lesson planning at midnight.”

Marlee Katz, a teacher for deaf and hard-of-hearing students, noted how AI tools are already enhancing communication. “Sometimes you struggle to find the right tone or phrase—these tools don’t replace your voice, they help you express it better.”

The initiative’s roots lie with Roy Bahat, a venture capitalist and AFT member who proposed the idea after helping facilitate early dialogues between Microsoft and the labour movement. Bahat, who leads Bloomberg Beta, will join the academy’s board.

The launch underscores growing awareness that educational AI cannot be left solely to the tech sector. The union-led approach offers a counterbalance to top-down government mandates or unregulated edtech rollouts seen elsewhere.


Across Europe, AI guidelines for schools have largely been issued by education ministries with limited teacher consultation. In contrast, the AFT initiative positions educators not as adopters but as co-designers.

“Too often, new technologies are weaponised against teachers,” said UFT President Michael Mulgrew. “This time, we’re building something that works for educators.”

The academy is expected to begin instruction this autumn. With bipartisan support from policymakers and a groundswell of demand from schools already experimenting with generative AI, its success may serve as a blueprint for how unions and industry might collaborate more broadly on the future of work.





Anthropic Continue The Push For AI In Education

Let’s be honest. AI has already taken a seat in the classroom. Google, Microsoft, OpenAI, and Anthropic have all been pushing hard. Today brings more announcements from Anthropic, the company behind the AI chatbot Claude, adding even more momentum. The shift isn’t subtle anymore. It’s fast, it’s loud, and it’s happening whether schools are ready or not.

It’s not only big tech. The U.S. government is also driving efforts to integrate AI into education.

The Balance of Innovation and Safety

There’s real concern, and for good reason. Sure, the benefits are hard to ignore. AI tutoring, lighter workloads for teachers, more personalized learning paths for students. It all sounds great. But there’s a flip side. Missteps here could make existing education gaps worse. And once the damage is done, it’s tough to undo.

Many policymakers are stepping in early. They’re drafting ethical guardrails, pushing for equitable access, and starting to fund research into what responsible use of AI in education really looks like. Not as a PR move, but because the stakes are very real.

Meanwhile, the tech companies are sprinting. Google is handing out AI tools for schools at no cost, clearly aiming for reach. The strategy is simple: remove barriers and get in early. Just yesterday Microsoft, OpenAI, and Anthropic teamed up to build a national AI academy for teachers. An acknowledgment that it’s not the tools, but the people using them, that determine success. Teachers aren’t optional in this equation. They’re central.

Claude’s New Education Efforts

Claude for Education’s recent moves highlight what effective integration could look like. Its Canvas integration means students don’t need to log into another platform or juggle windows. Claude just works inside what they’re already using. That kind of invisible tech could be the kind that sticks.

Then there’s the Panopto partnership. Students can now access lecture transcripts directly in their Claude conversations. Ask a question about a concept from class and Claude can pull the relevant sections right away. No need to rewatch an entire lecture or scrub through timestamps. It’s like giving every student their own research assistant.

And they’ve gone further. Through Wiley, Claude can now pull from a massive library of peer-reviewed academic sources. That’s huge. AI tools are often criticized for producing shaky or misleading information. But with access to vetted, high-quality content, Claude’s answers become more trustworthy. In a world overflowing with misinformation, that matters more than ever.

Josh Jarrett, senior vice president of AI growth at Wiley, emphasized this: “The future of research depends on keeping high-quality, peer-reviewed content central to AI-powered discovery. This partnership sets the standard for integrating trusted scientific content with AI platforms.”

Claude for Education is building a grassroots movement on campuses, too. Its student ambassador program is growing fast, and new Claude Builder Clubs are popping up at universities around the world. Rather than coding bootcamps or formal classes, they’re open spaces where students explore what they can actually make with AI: workshops, demo nights, and group builds.

These clubs are for everyone. Not just computer science majors. Claude’s tools are accessible enough that students in any field, from philosophy to marketing, can start building. That kind of openness helps make AI feel less like elite tech and more like something anyone can use creatively.

Privacy is a big theme here, too. Claude seems to be doing things right. Conversations are private, they’re not used for model training, and any data-sharing with schools requires formal approvals. Students need to feel safe using AI tools. Without that trust, none of this works long term.

At the University of San Francisco School of Law, students are working with Claude to analyze legal arguments, map evidence and prep for trial scenarios. This is critical training for the jobs they’ll have after graduation. In the UK, Northumbria University is also leaning in. Their focus is on equity, digital access and preparing students for a workplace that’s already being shaped by AI.

Graham Wynn, vice-chancellor for education at Northumbria University, puts the ethical side of AI front and center: “The availability of secure and ethical AI tools is a significant consideration for our applicants, and our investment in Claude for Education will position Northumbria as a forward-thinking leader in ethical AI innovation.”

They see tools like Claude not just as educational add-ons, but as part of a broader strategy to drive social mobility and reduce digital poverty. If you’re serious about AI in education, that’s the level of thinking it takes.

Avoiding Complexity and Closing Gaps

The core truth here is simple. AI’s role in education is growing whether we plan for it or not. The technology is getting more capable. The infrastructure is being built. But what still needs to grow is a culture of responsible use. The challenge for education isn’t chasing an even smarter tool, but ensuring the tools we have serve all students equally.

That means listening to educators. It means designing for inclusion from the ground up. It means making sure AI becomes something that empowers students, not just another layer of complexity.

The next few years will shape everything. If we get this right, AI could help close long-standing gaps in education. If we don’t, we risk deepening them in ways we’ll regret later.

This is more than a tech story. It’s a human one. And the decisions being made today will echo for a long time.





HBK trustee Harsh Kapadia shares vision for AI in education

New Delhi [India], July 9: Harsh Kapadia, Trustee of The HB Kapadia New High School, represented the institution at the prestigious Economic Times Annual Education Summit 2025 in New Delhi. The summit, themed “Fuelling the Education Economy with AI: The India Story”, brought together some of the country’s most influential voices in education, technology, and policymaking.

Sharing the stage with national leaders such as Sanjay Jain, Head of Google for Education, India, Aanchal Chopra, Regional Head, North, LinkedIn, Shishir Jaipuria, Chairman of Jaipuria Group of Schools, and Shantanu Prakash, Founder of Millennium Schools, Mr. Kapadia highlighted the critical role of Artificial Intelligence in shaping the future of Indian education.

In his remarks, Mr. Kapadia emphasised the urgent need to integrate AI into mainstream schooling. He also said that this will begin not with advanced algorithms but with teachers.

“AI does not begin with algorithms. It begins with empowered educators,” he said, calling for schools to prioritise teacher readiness alongside technological upgrades.

He elaborated on HBK’s progressive steps under its FuturEdge Program, a future-readiness initiative that integrates academics with emerging technologies and life skills.

“Artificial Intelligence will soon be as essential to education as electricity and the internet,” he said, emphasising that while AI is a powerful technological tool, its greatest impact lies in how teachers and students use it collaboratively. He noted that AI won’t replace teachers, but teachers who use AI will replace those who don’t.

His recommendations included weekly AI training periods for teachers, AI-infused school curriculum, infrastructure upgrades, and cross-industry collaborations to expose students to real-world applications of AI.

Mr. Kapadia shared that HBK has already begun incorporating AI into its school assemblies and is planning to introduce a dedicated “AI Period” in the academic calendar. The school is also conceptualising an annual “AI Fest” for students, where innovation and problem-solving will take centre stage. In terms of infrastructure, the school is actively upgrading classrooms with AI-enabled digital panels and computer labs designed for hands-on learning.

Calling for greater collaboration between schools and industry, Mr. Kapadia also proposed regular expert-led sessions with professionals from Google, LinkedIn, IBM, and AI startups.

Concluding his address, he reaffirmed HBK’s commitment to pioneering responsible and human-centred use of technology in education, saying, “AI is not a separate subject. It is a way of thinking, creating, and teaching. If we want future-ready students, we must begin with future-ready schools.”

 





AI is now allowed in IITs and IIMs, has the ethics debate reached its end?

In IITs, IIMs, and universities across the country, the use of AI sits in a grey zone. Earlier this year, IIM Kozhikode Director Prof Debashis Chatterjee said that there was no harm in using ChatGPT to write research papers. What started as a whisper has now become a larger question: not whether AI can be used, but how it should be.

Students and professors alike are now open to using it. Many already do, but without clear guidelines. The real issue now isn’t intent, but the lack of clearly defined boundaries.

Across India’s top institutions, including IITs, IIMs, and others, the debate is no longer theoretical. It’s practical, real, and urgent. From IIT Delhi to IIM Sambalpur, from classrooms to coding labs, students and faculty are confronting the same reality: AI is not just here. It’s working. And it’s working fast.

“There’s no denying AI is here to stay, and the real question is not if it should be used, but how. Students are already using it to support their learning, so it’s vital they understand both its strengths and its limits, including ethical concerns and the cognitive cost of over-reliance,” said Professor Dr Srikanth Sugavanam, IIT Mandi, responding to a question to India Today Digital.

“Institutions shouldn’t restrict AI use, but they must set clear guardrails so that both teachers and students can navigate it responsibly,” he further added.

INITIATIVE BY IIT DELHI

In a measured but firm step, IIT Delhi has issued guidelines for the ethical use of AI by students and faculty. The institute conducted an internal survey before framing them. What they found was striking.

Over 80 percent of students admitted to using tools like ChatGPT, GitHub Copilot, Perplexity AI, Claude, and other chatbots.

On the other hand, more than half the faculty members said they too were using AI — some for drafting, some for coding, some for academic prep.

The new rules are not about banning AI. They are about drawing a line that says: use it, but don’t outsource your thinking.

ON CAMPUS, A SHIFT IS UNDERWAY

At IIM Jammu, students say the policy is strict: no more than 10 percent AI use is allowed in any assignment.

One student put it simply: “We’re juggling lectures, committees, and eight assignments in three months. Every day feels like a new ball added to the juggling act. In that heat, AI feels like a bit of rain.”

They’re not exaggerating. There are tools now that can read PDFs aloud, prepare slide decks, even draft ideas. The moment you’re stuck, you can ‘chat’ your way out. The tools are easy, accessible, and, for many, essential.

But here’s the other side: some students now build their entire workflow around AI. They use AI to write, AI to humanise, AI to bypass AI detectors.

“We use plagiarism detection tools, like Turnitin, which claim to detect Gen-AI content. However, with Gen-AI evolving so fast, it is difficult for these tools to keep up with its pace. We don’t have a detailed policy framework to clearly distinguish between the ethical and lazy use of Gen-AI,” said Prof Dr Indu Joshi, IIT Mandi.

NOT WHAT AI DOES, BUT WHAT IT REPLACES

At IIM Sambalpur, the administration isn’t trying to hold back AI. They’re embracing it. The institute divides AI use into three pillars:

  • Cognitive automation – for tasks like writing and coding
  • Cognitive insight – for performance assessment
  • Cognitive engagement – for interaction and feedback

Students are encouraged to use AI tools, but with one condition: transparency. They must declare their sources. If AI is used, it must be cited. Unacknowledged use is academic fraud.

“At IIM Sambalpur, we do not prohibit AI tools for research, writing, or coding. We encourage students to use technology as much as possible to enhance their performance. AI is intended to help enhance, not shortcut,” IIM Sambalpur Director Professor Mahadeo Jaiswal told India Today.

But even as tools evolve, a deeper issue is emerging: Are students losing the ability to think for themselves?

MIT’s recent research says yes: too much dependence on AI weakens critical thinking.

It slows down the brain’s ability to analyse, compare, question, and argue. And these are the very skills institutions are supposed to build.

“AI has levelled the field. Earlier, students in small towns didn’t have mentors or exposure. Now, they can train for interviews, get feedback, build skills, all online. But it depends how you use it,” said Samarth Bhardwaj, an IIM Jammu student.

TEACHERS ARE UNDER PRESSURE TOO

The faculty are not immune any more. AI is now acting as a mentor, performing tasks that even teachers cannot. With AI around, teaching methods must change.

The old model — assign, submit, grade — no longer works. Now, there’s a shift toward ‘guide on the side’ teaching.

Less lecture, more interaction. Instead of essays, group discussions. Instead of theory, hackathons.

It is all about creating real-world learning environments where students must think, talk, solve, and explain why they did what they did. AI can assist, but not answer for them.

SO, WHERE IS THE LINE?

There’s no clear national rule yet. But the broad consensus across IITs and IIMs is this:

  • AI should help, not replace.

  • Declare what you used.

  • Learn, don’t just complete.

Experts like John J Kennedy, former dean at Christ University, say India needs a forward-looking framework.

Not one that fears AI, but one that defines boundaries, teaches ethics, and rewards original thinking.

Today’s students know they can’t ignore AI. Not in tier-1 cities. Not in tier-2 towns either.

Institutions will keep debating policies. Tools will keep evolving. But for students, and teachers, the real test will be one of discipline, not access. Of intent, not ability.

Because AI can do a lot. But it cannot ask the questions that matter.

– Ends

Published By: Rishab Chauhan

Published On: Jul 9, 2025


