Tools & Platforms
G7 leaders reaffirm support for responsible AI deployment
As artificial intelligence (AI) transforms every sector of the global economy, the leaders of the G7 nations, at the 2025 G7 Summit in Kananaskis, Alberta, issued a strong, unified statement reaffirming their commitment to ensuring this transformation benefits people, promotes inclusive prosperity and supports responsible innovation. In their “G7 leaders’ statement on AI for prosperity”, the world’s leading democracies laid out a roadmap for adopting trustworthy AI at scale – balancing economic opportunity with ethical stewardship and energy sustainability.
From awareness to adoption: Public sector AI with purpose
One of the core pillars of the G7’s new vision is leveraging AI in the public sector. Governments are being called to not only regulate AI but to actively use it to improve public services, drive efficiency and better respond to citizens’ needs – all while maintaining privacy, human rights and democratic values.
To lead this effort, Canada, in its role as G7 president, has announced the GovAI Grand Challenge. This initiative includes a series of “rapid solution labs” that will develop creative, practical AI solutions to accelerate public sector transformation. These efforts will be coordinated through the to-be-established G7 AI Network (GAIN), which will connect expertise across member countries and curate a catalogue of open-source, shareable AI tools. Additional details on these programs are forthcoming.
Empowering SMEs to compete in the AI economy
The G7 leaders also acknowledged a key truth: small- and medium-sized enterprises (SMEs) are the lifeblood of modern economies. These businesses generate jobs, drive innovation and build resilient local economies. Yet they often face significant barriers to AI adoption, from lack of access to computing infrastructure to gaps in digital skills.
To close this gap, the G7 launched the AI Adoption Roadmap – a practical guide to help businesses, particularly SMEs, move from understanding AI to implementing it. The roadmap includes:
- Sustained investment in AI readiness programs for SMEs
- A blueprint for scalable, proven adoption strategies
- Cross-border talent exchanges to boost in-house AI capabilities
- New trust-building tools to give businesses and consumers confidence in AI systems
This comprehensive approach is designed to help SMEs not only catch up but leap ahead – adopting AI in ways that are ethical, productive and secure.
To support this initiative, on June 25, 2025, the Government of Canada announced the AI Compute Access Fund as part of its broader $2-billion Canadian Sovereign AI Compute Strategy. The fund will help Canadian SMEs access high-performance compute capacity to develop made-in-Canada AI products and solutions, and applications can now be submitted.
A workforce ready for the AI era
The shift to an AI-powered economy will demand a new kind of workforce. The G7 leaders reaffirmed their support for the 2024 Action Plan for safe and human-centered AI in the workplace. This includes investing in AI literacy and job transition programs, especially for those in sectors likely to be most affected.
Crucially, the G7 also emphasized equity and inclusion – particularly encouraging girls and underrepresented communities to pursue STEM education and grow their presence in the AI talent pipeline. As AI reshapes our economies, building a diverse and resilient workforce is not only a moral imperative but an economic one.
Tackling the energy footprint of AI
With the exponential growth of large AI models comes a steep rise in energy consumption. The G7 acknowledged the environmental toll and vowed to address it head-on. In a first-of-its-kind commitment, member nations will work together on a comprehensive workplan on AI and energy, due by the end of 2025.
This work will focus on developing energy-efficient AI systems, optimizing data center operations and using AI itself to drive clean energy innovation. The goal: ensure that the AI revolution doesn’t come at the cost of our planet – but instead helps to preserve it.
Partnering for global inclusion
Finally, the G7 turned their focus outward to the developing world, where digital divides threaten to leave billions behind. Leaders committed to expanding AI access in emerging markets through trusted technology, targeted investment and local collaboration.
From the AI for Development Funders Collaborative to partnerships with universities and international organizations, the G7 aims to build mutually beneficial partnerships that bridge capacity gaps and support locally driven AI innovation.
The technology, intellectual property and privacy group at MLT Aikins is tracking developments in the regulation, governance and deployment of AI in today’s economy and can give you the advice you need to navigate the ever-changing world of AI.
Note: This article is of a general nature only and is not exhaustive of all possible legal rights or remedies. In addition, laws may change over time and should be interpreted only in the context of particular circumstances; these materials are not intended to be relied upon or taken as legal advice or opinion. Readers should consult a legal professional for specific advice in any particular situation.
Tools & Platforms
Polimorphic Raises $18.6M as It Beefs Up Public-Sector AI
The latest bet on public-sector AI involves Polimorphic, which has raised $18.6 million in a Series A funding round led by General Catalyst.
The round also included M13 and Shine.
The company raised $5.6 million in a seed round in late 2023.
New York-based Polimorphic sells such products as artificial intelligence-backed chatbots and search tools, voice AI for calls, constituent relationship management (CRM) and workflow software, and permitting and licensing tech.
The new capital will go toward tripling the company’s sales and engineering staff and building more AI product features.
For instance, that includes continued development of the voice AI offering, which can now work with live data — a bonus when it comes to utility billing — and can even tell callers to animal services which pets might be up for adoption, CEO and co-founder Parth Shah told Government Technology in describing his vision for such tech.
The company also wants to bring more AI to CRM and workflow software to help catch errors on applications and other paperwork earlier than before, Shah said.
“We are more than just a chatbot,” he said.
Challenges of public-sector AI include making sure that public agencies truly understand the technology and are “not just slapping AI on what you already do,” Shah said.
As he sees it, working with governments in this way has helped Polimorphic nearly double its customer count every six months. More than 200 public-sector departments at the city, county and state levels now use the company’s products, he said — and that growth is among the reasons the company attracted this new round of investment.
The company’s general sales pitch is increasingly familiar to public-sector tech buyers: Software and AI can help agencies deal with “repetitive, manual tasks, including answering the same questions by phone and email,” according to a statement, and help people find civic and bureaucratic information more quickly.
For instance, the company says it has helped customers reduce voicemails by up to 90 percent, with walk-in requests cut by 75 percent. Polimorphic clients include the city of Pacifica, Calif.; Tooele County, Utah; Polk County, N.C.; and the town of Palm Beach, Fla.
The fresh funding will also help the company expand in its top markets, which include Wisconsin, New Jersey, North Carolina, Texas, Florida and California.
The company’s investors are familiar to the gov tech industry. Earlier this year, for example, General Catalyst led an $80 million Series C funding round for Prepared, a public safety tech supplier focused on bringing more assistive AI capabilities to emergency dispatch.
“Polimorphic has the potential to become the next modern system of record for local and state government. Historically, it’s been difficult to drive adoption of these foundational platforms beyond traditional ERP and accounting in the public sector,” said Sreyas Misra, partner at General Catalyst, in the statement. “AI is the jet fuel that accelerates this adoption.”
Tools & Platforms
AI enters the classroom as law schools prep students for a tech-driven practice
When it comes to using artificial intelligence in legal education and beyond, the key is thoughtful integration.
“Think of it like a sandwich,” said Dyane O’Leary, professor at Suffolk University Law School. “The student must be the bread on both sides. What the student puts in, and how the output is assessed, matters more than the tool in the middle.”
Suffolk Law is taking a forward-thinking approach to integrating generative AI into legal education, starting with a required AI course for all first-year students that equips them to use, understand and critique AI as future lawyers.
O’Leary, a long-time advocate for legal technology, said there is a need to balance foundational skills with exposure to cutting-edge tools.
“Some schools are ignoring both ends of the AI sandwich,” she said. “Others don’t have the resources to do much at the upper level.”
One major initiative at Suffolk Law is the partnership with Hotshot, a video-based learning platform used by top law firms, corporate lawyers and litigators.
“The Hotshot content is a series of asynchronous modules tailored for 1Ls,” O’Leary said. “The goal is not for our students to become tech experts but to understand the usage and implications of AI in the legal profession.”
The Hotshot material provides a practical introduction to large language models, explains why generative AI differs from tools students are used to, and uses real-world examples from industry professionals to build credibility and interest.
This structured introduction lays the groundwork for more interactive classroom work when students begin editing and analyzing AI-generated legal content. Students will explore where the tool succeeded, where it failed and why.
“We teach students to think critically,” O’Leary said. “There needs to be an understanding of why AI missed a counterargument or produced a junk rule paragraph.”
These exercises help students learn that AI can support brainstorming and outlining but isn’t yet reliable for final drafting or legal analysis.
Suffolk Law is one of several law schools finding creative ways to bring AI into the classroom — without losing sight of the basics. Whether it’s through required 1L courses, hands-on tools or new certificate programs, the goal is to help students think critically and stay ready for what’s next.
Proactive online learning
Case Western Reserve University School of Law has also taken a proactive step to ensure that all its students are equipped to meet the challenge. In partnership with Wickard.ai, the school recently launched a comprehensive AI training program, making it a mandatory component for the entire first-year class.
“We knew AI was going to change things in legal education and in lawyering,” said Jennifer Cupar, professor of lawyering skills and director of the school’s Legal Writing, Leadership, Experiential Learning, Advocacy, and Professionalism program. “By working with Wickard.ai, we were able to offer training to the entire 1L class and extend the opportunity to the rest of the law school community.”
The program included pre-class assignments, live instruction, guest speakers and hands-on exercises. Students practiced crafting prompts and experimenting with various AI platforms. The goal was to familiarize students with tools such as ChatGPT and encourage a thoughtful, critical approach to their use in legal settings.
Oliver Roberts, CEO and co-founder of Wickard.ai, led the sessions and emphasized the importance of responsible use.
While CWRU Law, like many law schools, has general prohibitions against AI use in drafting assignments, faculty are encouraged to allow exceptions and to guide students in exploring AI’s capabilities responsibly.
“This is a practice-readiness issue,” Cupar said. “Just like Westlaw and Lexis changed legal research, AI is going to be part of legal work going forward. Our students need to understand it now.”
Balanced approach
Starting with the Class of 2025, Washington University School of Law is embedding generative AI instruction into its first-year Legal Research curriculum. The goal is to ensure that every 1L student gains fluency in both traditional legal research methods and emerging AI tools.
Delivered as a yearlong, one-credit course, the revamped curriculum maintains a strong emphasis on core legal research fundamentals, including court hierarchy, the distinction between binding and persuasive authority, primary and secondary sources and effective strategies for researching legislative and regulatory history.
WashU Law is integrating AI as a tool to be used critically and effectively, not as a replacement for human legal reasoning.
Students receive hands-on training in legal-specific generative AI platforms and develop the skills needed to evaluate AI-generated results, detect hallucinated or inaccurate content, and compare outcomes with traditional research methods.
“WashU Law incorporates AI while maintaining the basics of legal research,” said Peter Hook, associate dean. “By teaching the basics, we teach the skills necessary to evaluate whether AI-produced legal research results are any good.”
Stefanie Lindquist, dean of WashU Law, said this balanced approach preserves the rigor and depth that legal employers value.
“The addition of AI instruction further sharpens that edge by equipping students with the ability to responsibly and strategically apply new technologies in a professional context,” Lindquist said.
Forward-thinking vision
Drake University Law School has launched a new AI Law Certificate Program for J.D. students.
The program is a response to the growing need for legal professionals who understand both the promise and complexity of AI.
Designed for completion during a student’s second and third years, the certificate program emphasizes interdisciplinary collaboration, drawing on expertise from across Drake Law School’s campus, including computer science, art and the Institute for Justice Reform & Innovation.
Students will engage with advanced topics such as machine vision and trademark law, quantum computing and cybersecurity, and the broader ethical and regulatory challenges posed by AI.
Roscoe Jones, Jr., dean of Drake Law School, said the AI Law Certificate empowers students to lead at the intersection of law and technology, whether in private practice, government, nonprofit, policymaking or academia.
“Artificial Intelligence is not just changing industries; it’s reshaping governance, ethics and the very framework of legal systems,” he said.
Simulated, but realistic
Suffolk Law has also launched an online platform that allows students to practice negotiation skills with AI bots programmed to simulate the behavior of seasoned attorneys.
“They’re not scripted. They’re human-like,” O’Leary said. “Sometimes polite, sometimes bananas. It mimics real negotiation.”
These interactive experiences, available in either text or voice mode, allow students to practice handling the messiness of legal dialogue, an experience that is hard to replicate with static casebooks or classroom hypotheticals.
Unlike overly accommodating AI assistants, these bots shift tactics and strategies, mirroring the adaptive nature of real-world legal negotiators.
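The article doesn’t describe how the platform is built, but a minimal sketch conveys the idea: pair a chat model with an attorney persona and rotate its negotiation tactic between turns so replies never feel scripted. Everything below (the OpenAI-style client, the model name, the tactic list and the opposing_counsel_reply helper) is an illustrative assumption, not a detail of Suffolk Law’s system.

```python
# Hypothetical sketch of a tactic-shifting negotiation practice bot.
# Assumes the OpenAI Python SDK (openai>=1.0); the persona text, tactic list
# and model name are illustrative guesses, not Suffolk Law's implementation.
import random

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

TACTICS = [
    "anchor high and concede slowly",
    "stay polite but noncommittal",
    "apply time pressure",
    "make an aggressive counteroffer out of nowhere",
]

def opposing_counsel_reply(history: list[dict]) -> str:
    """Return the bot's next line, picking a fresh tactic so it never feels scripted."""
    persona = (
        "You are opposing counsel in a settlement negotiation over a breached "
        f"supply contract. Current tactic: {random.choice(TACTICS)}. "
        "Reply in one or two sentences, as a seasoned attorney would."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "system", "content": persona}] + history,
    )
    return response.choices[0].message.content

# Example turn: the student opens, the bot answers in character.
history = [{"role": "user", "content": "My client is prepared to offer $40,000 to settle."}]
print(opposing_counsel_reply(history))
```

Rerolling the tactic on each call is a crude way to approximate the shifting, unscripted behavior O’Leary describes; a production system would presumably carry negotiation state and strategy across turns rather than choosing at random.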
Another tool on the platform supports oral argument prep. Created by Suffolk Law’s legal writing team in partnership with the school’s litigation lab, the AI mock judge engages students in real-time argument rehearsals, asking follow-up questions and testing their case theories.
“It’s especially helpful for students who don’t get much out of reading their outline alone,” O’Leary said. “It makes the lights go on.”
O’Leary also emphasizes the importance of academic integrity. Suffolk Law has a default policy that prohibits use of generative AI on assignments unless a professor explicitly allows it. Still, she said the policy is evolving.
“You can’t ignore the equity issues,” she said, pointing to how students often get help from lawyers in the family or paid tutors. “To prohibit [AI] entirely is starting to feel unrealistic.”
Tools & Platforms
Microsoft pushes billions at AI education for the masses
After committing more than $13 billion in strategic investments to OpenAI, Microsoft is splashing out billions more to get people using the technology.
On Wednesday, Redmond announced a $4 billion donation of cash and technology to schools and non-profits over the next five years. It’s branding this philanthropic mission as Microsoft Elevate, which is billed as “providing people and organizations with AI skills and tools to thrive in an AI-powered economy.” It will also start the AI Economy Institute (AIEI), a so-called corporate think tank stocked with academics that will be publishing research on how the workforce needs to adapt to AI tech.
The bulk of the money will go toward AI and cloud credits for K-12 schools and community colleges, and Redmond claims 20 million people will “earn an in-demand AI skilling credential” under the scheme, although Microsoft’s record on such vendor-backed certifications is hardly spotless.
“Working in close coordination with other groups across Microsoft, including LinkedIn and GitHub, Microsoft Elevate will deliver AI education and skilling at scale,” said Brad Smith, president and vice chair of Microsoft Corporation, in a blog post. “And it will work as an advocate for public policies around the world to advance AI education and training for others.”
It’s not an entirely new scheme – Redmond already had its Microsoft Philanthropies and Tech for Social Impact charitable organizations, but they are now merging into Elevate. Smith noted Microsoft has already teamed up with North Rhine-Westphalia in Germany to train students on AI, and said similar partnerships across the US education system will follow.
Microsoft is also looking to recruit teachers to the cause.
On Tuesday, Microsoft, along with Anthropic and OpenAI, said it was starting the National Academy for AI Instruction with the American Federation of Teachers, training teachers in AI skills so they can pass them on to the next generation. The scheme has received $23 million in funding from the tech giants, spread over five years, and aims to train 400,000 teachers at training centers across the US and online.
“AI holds tremendous promise but huge challenges—and it’s our job as educators to make sure AI serves our students and society, not the other way around,” said AFT President Randi Weingarten in a canned statement.
“The direct connection between a teacher and their kids can never be replaced by new technologies, but if we learn how to harness it, set commonsense guardrails and put teachers in the driver’s seat, teaching and learning can be enhanced.”
Meanwhile, the AIEI will sponsor and convene researchers to produce publications, including policy briefs and research reports, on applying AI skills in the workforce, leveraging a global network of academic partners.
Hopefully they can do a better job of it than Redmond’s own staff. After 9,000 layoffs from Microsoft earlier this month, largely in the Xbox division, Matt Turnbull, an executive producer at Xbox Game Studios Publishing, went viral with a spectacularly tone-deaf LinkedIn post (now removed) to former staff members offering AI prompts “to help reduce the emotional and cognitive load that comes with job loss.” ®