Tools & Platforms
China’s first bachelor’s program in AI education to address teacher shortage - Xinhua
BEIJING, July 9 (Xinhua) — Beijing Normal University (BNU) has launched China’s first undergraduate program dedicated to artificial intelligence (AI) education, aiming to address a critical shortage of specialized teachers in the field.
The initiative aligns with government directives to boost AI education across primary and secondary schools nationwide, the Science and Technology Daily reported on Wednesday.
In late 2024, the Ministry of Education issued a directive calling for measures to advance AI education in primary and secondary schools.
This year, Beijing’s municipal authorities issued a dedicated AI education plan for 2025-2027, demanding the establishment of regular teaching systems and standardized curricula.
“The shortage of qualified instructors and the lack of specialized training remain major obstacles,” said an official from BNU’s Faculty of Education, adding that the program’s core mission is to train educators equipped with both advanced AI technical skills and strong pedagogical expertise.
“We are leveraging our unique interdisciplinary strengths to cultivate talent that supports the country’s strategic drive toward intelligent education,” the official said.
Unlike purely technical AI degrees, BNU’s AI Education program integrates two essential knowledge streams. The curriculum combines an AI technology module, covering generative AI, machine learning, natural language processing, and educational data mining, with a foundation in education science, including learning theory, psychology, curriculum design and assessment.
The program also plans to introduce practical innovation courses, such as the application of AI technologies in education.
In addition, compulsory courses cover topics such as AI ethics and data security, instilling in students a core philosophy of “technology serving education,” according to BNU.
Beyond theory, the program fosters practical skills through a unique “university-enterprise-school” collaborative training model, which helps immerse students in real-world teaching environments and technical development projects.
Career prospects in the AI education field are broad, with graduates well-prepared to become AI or information technology teachers, driving digital transformation in primary and secondary schools.
The booming educational technology sector also needs their expertise to develop and refine AI-driven learning platforms and courses. Further career paths include academic research, educational management, and shaping AI education policy, according to BNU’s Faculty of Education.
“AI advancements are reshaping society at unprecedented rates, profoundly altering education,” said Yu Shengquan, executive director of BNU’s Advanced Innovation Center for Future Education.
“Developing ‘digital citizens’ equipped for this new reality is now a central educational imperative,” Yu added.
The center has previously partnered with Chinese tech giant Tencent to develop a comprehensive AI knowledge framework and curriculum spanning elementary, middle and high school levels, according to Yu. ■
Empowering, not replacing: A positive vision for AI in executive recruiting
Tamara is a thought leader in Digital Journal’s Insight Forum.
“So, the biggest long‑term danger is that, once these artificial intelligences get smarter than we are, they will take control — they’ll make us irrelevant.” — Geoffrey Hinton, Godfather of AI
Modern AI often feels like a threat, especially when the warnings come from the very people building it. Sam Altman, the salesman behind ChatGPT (not an engineer, but the face of OpenAI and someone known for convincing investors), has said with offhand certainty, as casually as ordering toast or predicting the sun will rise, that entire categories of jobs will be taken over by AI. That includes roles in health, education, law, finance, and HR.
Some companies now won’t hire people unless AI fails at the given task, even though these models hallucinate, invent facts, and make critical errors. They’re replacing people with a tool we barely understand.
Even leaders in the field admit they don’t fully understand how AI works. In May 2025, Dario Amodei, CEO of Anthropic, said the quiet part out loud:
“People outside the field are often surprised and alarmed to learn that we do not understand how our own AI creations work. They are right to be concerned. This lack of understanding is essentially unprecedented in the history of technology.”
In short, no one is fully in control of AI. A handful of Silicon Valley technocrats have appointed themselves arbiters of the direction of AI, and they work more or less in secret. There is no real government oversight. They are developing without any legal guardrails. And those guardrails may not arrive for years, by which time they may be too late to have any effect on what’s already been let out of Pandora’s Box.
So we asked ourselves: Using the tools available to us today, why not model something right now that can in some way shape the discussion around how AI is used? In our case, this is in the HR space.
What if AI didn’t replace people, but instead helped companies discover them?
Picture a CEO in a post-merger fog. She needs clarity, not another résumé pile. Why not introduce her to the precise leader she didn’t know she needed, using AI?
Instead of turning warm-blooded professionals into collateral damage, why not use AI thoughtfully, ethically, and practically to help solve the problems that now exist across the board in HR, recruitment, and employment?
An empathic role for AI
Most job platforms still rely on keyword-stuffed résumés and keyword-matching algorithms. As a result, excellent candidates often get filtered out simply for using the “wrong” terms. That’s not just inefficient; it is, fundamentally, malpractice. It hurts companies and candidates alike. It’s an example of technology poorly applied, yet this is the norm today.
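To make that failure mode concrete, here is a minimal sketch of a verbatim keyword screen of the kind described above. The required keywords and candidate summary are hypothetical, invented purely for illustration; no real platform’s code or screening criteria are implied.

```python
# Minimal sketch of a naive keyword screen (hypothetical keywords and candidate text).

REQUIRED_KEYWORDS = {"turnaround", "p&l ownership", "change management"}

def keyword_screen(resume_text: str) -> bool:
    """Pass a candidate only if every required keyword appears verbatim."""
    text = resume_text.lower()
    return all(keyword in text for keyword in REQUIRED_KEYWORDS)

# A strong leader who describes the same experience in different words is rejected.
candidate_summary = (
    "Led a loss-making division back to profitability in 18 months, "
    "restructuring operations and owning the full budget."
)
print(keyword_screen(candidate_summary))  # False -- filtered out despite a relevant record
```

The filter does exactly what it was told to do; the problem is that verbatim matching is a poor proxy for the capabilities a role actually needs.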
Imagine instead a platform that isn’t keyword-driven, one that guides candidates through discovery to create richer, more dimensional profiles showcasing the unique strengths, instincts, and character that shape real-world impact. This would go beyond skill sets and job titles to the deeper personal qualities that differentiate equally experienced candidates, producing a better fit between a leadership candidate and any given role.
One leader, as an example, may bring calm decisiveness in chaos. Another may excel at building unity across silos. Another might be relentless at rooting out operational bloat and uncovering savings others missed.
A system like this that helps uncover those traits, guides candidates to articulate them clearly, and discreetly learns about each candidate to offer thoughtful, evolving insights, would see AI used as an advocate, not a gatekeeping nemesis.
For companies, this application would reframe job descriptions around outcomes, not tasks. Instead of listing qualifications, the tool helps hiring teams articulate what they’re trying to achieve: whether it’s growth, turnaround, post-M&A integration, or cost efficiency, and then finds the most suitable candidate match.
Fairness by design
Bias is endemic in HR today: ageism, sexism, and discrimination based on disability and race. Imagine a platform that actively discourages bias. Gender, race, age, and even profile photos are optional. Unlike most recruiting platforms, the system doesn’t reward those who include a photo. It doesn’t penalize those who don’t know how to game a résumé.
Success then becomes about alignment. Deep expertise. Purposeful outcomes.
This design gives companies what they want: competence. And gives candidates what they want: a fair chance.
This is more than an innovative way to use current AI technology. It’s a value statement about prioritizing people.
Why now
We’re at an inflection point.
Researchers like Daniel Kokotajlo, Scott Alexander, Thomas Larsen, Eli Lifland, and Romeo Dean forecast in AI 2027 that superhuman AI (AGI, then superintelligence) will bring changes in the next decade more disruptive than the Industrial Revolution.
If they’re even a little right, then the decisions being made today by a small circle in Silicon Valley will affect lives everywhere.
It’s important to step into the conversation now to help shape AI’s real-world role. The more human-centred, altruistic, practical uses of AI we build and model now, the more likely these values will help shape laws, norms, and infrastructure to come.
This is a historic moment. How we use AI now will shape the future.
People-first design
Every technology revolution sparks fear. But this one is unique: it’s the first since the Industrial Revolution in which machines are being designed with the explicit goal of replacing people. Entire roles and careers may vanish.
But that isn’t inevitable either. It’s a choice.
AI can be built to assist, not erase. It can guide a leader to their next opportunity. It can help a CEO find a partner who unlocks transformation. It can put people out front, not overshadow them.
We invite others in talent tech and AI to take a similar stance. Let’s build tools for people. Let’s avoid displacement and instead elevate talent. Let’s embed honesty, fairness, clarity, and alignment in everything we make.
We don’t control the base models. But we do control how we use them. And how we build with them.
AI should amplify human potential, not replace it. That’s the choice I’m standing behind.
Big Tech, NYC teachers union join forces in new AI initiative that’s drawing concerns
A new partnership between New York City’s teachers union and Big Tech companies has some educators wondering whether they’re at the forefront of improving instruction through artificial intelligence or welcoming a Trojan horse that threatens learning.
The American Federation of Teachers, the umbrella organization for the local United Federation of Teachers union, announced Tuesday it’s teaming up with Microsoft, OpenAI and Anthropic on a $23 million initiative to offer free AI training and software to AFT members. The investment, which is being covered by the companies, includes creating a new training space dubbed the “National Center for AI” on a floor of the UFT headquarters in Lower Manhattan.
UFT President Michael Mulgrew said at a press conference that some of his union’s educators started trainings this month, adding that the initiative will expand nationally over the next year. The initiative is aimed at K-12 teachers, is voluntary and focuses on tasks like lesson planning, according to the union and companies. AI can summarize texts and create worksheets and assessments.
“This tool could truly be a great gift to the children of this country and to education overall,” Mulgrew said. “But we’re not going to get there unless it’s driven by the people doing the work in the most important place in education, which is the classroom.”
Some teachers said they are skeptical about the initiative. Jia Lee, a special education teacher at the Earth School in the East Village, likened the arrangement to “letting the fox in the henhouse” and said she was “horrified” to see the union linking arms with the tech companies.
“I think a lot of educators would say we’re not anti-AI, we just have concerns about a lot of things that have not been explained or researched yet,” Lee said.
City education officials have sent mixed signals about integrating AI in classrooms. The local education department initially blocked OpenAI tool ChatGPT in schools in 2023, then lifted the ban. Schools spokesperson Nicole Brownstein said the agency is working on a “framework” for AI use, but declined to comment on the union’s new initiative.
Gerry Petrella, Microsoft’s general manager for U.S. policy, said the partnership would help the company figure out how to integrate AI into education “in a responsible and safe way.” He said he hoped AI tools would save teachers time so they could focus more on students and their individual needs.
National surveys show the technology is already creeping into students’ and teachers’ lives. A Harvard University survey last fall found half of high-school and college students use AI for some schoolwork, while a new Gallup poll found 60% of teachers reported using AI at some point over the past school year.
Annie Read Boyle, a fourth-grade teacher at P.S. 276 in Battery Park, said she hasn’t used AI much but is impressed with what she’s seen so far. Last year, she used a product called Diffit when she was teaching about the American Revolution.
“I said, ‘I want an article that’s fourth-grade level,’ and in 10 seconds [it] spit out this beautiful worksheet that would’ve taken me hours to create,” she said. “I was like, ‘Wow, this is really impressive and it just saved me so much time.’”
Boyle said she could imagine similar tools differentiating assignments based on students’ learning styles, abilities or language. Still, she cited concerns about data privacy, copyright infringement in materials and encouraging students to take shortcuts instead of developing critical-thinking skills.
“It’s such an important tool for teachers to know how to use so that we can teach the kids but it could really hurt the development process for kids,” she said, adding that she is also concerned about AI’s environmental impact and potential to drive job loss.
AFT President Randi Weingarten said Tuesday she hoped to learn from past mistakes involving technology, including social media’s harms on young people’s mental health. She said the union’s partnership with tech companies is a way to influence how AI is used with children.
“We can also make sure we have the guardrails we need to protect the safety and security of kids,” said Weingarten, whose union includes 1.8 million members nationwide. “That is now becoming our job. … We have to have a phone line back to [tech hub] Seattle.”
G7 leaders reaffirm support for responsible AI deployment | Insights
As artificial intelligence (AI) transforms every sector of the global economy, the leaders of the G7 nations, at the 2025 G7 Summit in Kananaskis, Alberta, issued a strong, unified statement reaffirming their commitment to ensuring this transformation benefits people, promotes inclusive prosperity and supports responsible innovation. In their “G7 leaders’ statement on AI for prosperity”, the world’s leading democracies laid out a roadmap for adopting trustworthy AI at scale – balancing economic opportunity with ethical stewardship and energy sustainability.
From awareness to adoption: Public sector AI with purpose
One of the core pillars of the G7’s new vision is leveraging AI in the public sector. Governments are being called to not only regulate AI but to actively use it to improve public services, drive efficiency and better respond to citizens’ needs – all while maintaining privacy, human rights and democratic values.
To lead this effort, Canada, in its role as G7 president, has announced the GovAI Grand Challenge. This initiative includes a series of “rapid solution labs” that will develop creative, practical AI solutions to accelerate public sector transformation. These efforts will be coordinated through the to-be-established G7 AI Network (GAIN), which will connect expertise across member countries and curate a catalogue of open-source, shareable AI tools. Additional details on these programs are forthcoming.
Empowering SMEs to compete in the AI economy
The G7 leaders also acknowledged a key truth: small- and medium-sized enterprises (SMEs) are the lifeblood of modern economies. These businesses generate jobs, drive innovation and build resilient local economies. Yet they often face significant barriers to AI adoption, from lack of access to computing infrastructure to gaps in digital skills.
To close this gap, the G7 launched the AI Adoption Roadmap – a practical guide to help businesses, particularly SMEs, move from understanding AI to implementing it. The roadmap includes:
- Sustained investment in AI readiness programs for SMEs
- A blueprint for scalable, proven adoption strategies
- Cross-border talent exchanges to boost in-house AI capabilities
- New trust-building tools to give businesses and consumers confidence in AI systems
This comprehensive approach is designed to help SMEs not only catch up but leap ahead – adopting AI in ways that are ethical, productive and secure.
To support this initiative, and as part of the broader $2-billion Canadian Sovereign AI Compute Strategy, on June 25, 2025, the Government of Canada announced a fund that will support Canadian SMEs in accessing high-performance compute capacity to develop made-in-Canada AI products and solutions. Applications for the AI Compute Access Fund can now be submitted.
A workforce ready for the AI era
The shift to an AI-powered economy will demand a new kind of workforce. The G7 leaders reaffirmed their support for the 2024 Action Plan for safe and human-centered AI in the workplace. This includes investing in AI literacy and job transition programs, especially for those in sectors likely to be most affected.
Crucially, the G7 also emphasized equity and inclusion – particularly encouraging girls and underrepresented communities to pursue STEM education and grow their presence in the AI talent pipeline. As AI reshapes our economies, building a diverse and resilient workforce is not only a moral imperative but an economic one.
Tackling the energy footprint of AI
With the exponential growth of large AI models comes a steep rise in energy consumption. The G7 acknowledged the environmental toll and vowed to address it head-on. In a first-of-its-kind commitment, member nations will work together on a comprehensive workplan on AI and energy, due by the end of 2025.
This work will focus on developing energy-efficient AI systems, optimizing data center operations and using AI itself to drive clean energy innovation. The goal: ensure that the AI revolution doesn’t come at the cost of our planet – but instead helps to preserve it.
Partnering for global inclusion
Finally, the G7 turned their focus outward to the developing world, where digital divides threaten to leave billions behind. Leaders committed to expanding AI access in emerging markets through trusted technology, targeted investment and local collaboration.
From the AI for Development Funders Collaborative to partnerships with universities and international organizations, the G7 aims to build mutually beneficial partnerships that bridge capacity gaps and support locally driven AI innovation.
The technology, intellectual property and privacy group at MLT Aikins is tracking developments in the regulation, governance and deployment of AI in today’s economy and can give you the advice you need to navigate the ever-changing world of AI.
Note: This article is of a general nature only and is not exhaustive of all possible legal rights or remedies. In addition, laws may change over time and should be interpreted only in the context of particular circumstances such that these materials are not intended to be relied upon or taken as legal advice or opinion. Readers should consult a legal professional for specific advice in any particular situation.