
Tools & Platforms

How Some Nonprofits Are Turning to AI As a Tool for Good

As millions of young people worldwide increasingly rely on AI chatbots to support their learning — and even to complete assignments for them — one organization is concerned that children in developing countries without access to the technology could be put at an unfair disadvantage.

And it’s using the very technology it believes is causing this problem to fix it.

Education Above All, a nonprofit based in Qatar, believes that because most of the world’s popular AI chatbots are created in Silicon Valley, they aren’t equipped to understand the linguistic and ethnic nuances of non-English-speaking countries, creating education inequities on a global scale. But its team sees AI as a way to tackle this problem.

In January 2025, the charity teamed up with MIT, Harvard, and the United Nations Development Programme to introduce a free and open-source AI literacy program called Digi-Wise. Delivered in partnership with educators in the developing world, it encourages children to spot AI-fueled misinformation, use AI tools responsibly in the classroom, and even develop their own AI tools from scratch.

As part of this, the charity has developed its own generative AI chatbot called Ferby. It allows users to access and personalize educational resources from the Internet-Free Education Resource Bank, an online library containing hundreds of free and open-source learning materials.

Education Above All said Ferby is already being used by more than 5 million children in India to access “project-based learning” in partnership with Indian nonprofit Mantra4Change. More recently, the charity has embedded Ferby into the edtech platform SwiftChat, which is used by 124 million students and teachers across India.

“Ferby curates, customizes, and creates learning materials to fit local realities, so a teacher in rural Malawi can run the right science experiment as easily as a teacher in downtown Doha,” said Aishwarya Shetty, an education specialist at Education Above All. “By marrying offline ingenuity with AI convenience, we make learning local, low-resource, and always within reach, yet at scale.”

Education Above All is among a group of organizations using AI to tackle global inequality and work toward realizing the United Nations Sustainable Development Goals. Adopted in 2015, the UN SDGs comprise 17 social, economic, and environmental goals that serve as guidelines for nations, businesses, and individuals to follow in pursuit of a more peaceful and prosperous world. Education Above All’s projects fall under SDG 4: inclusive and equitable education.

A global effort

A range of other organizations are also using AI to augment and enhance their programming.

Tech To The Rescue, a global nonprofit that connects charities with pro bono software development teams, is another organization using AI in support of the UN SDGs. Last year, it launched a three-year AI-for-good accelerator program to help NGOs apply AI toward those goals.

One organization to benefit from the program is Mercy Corps, a humanitarian group that works in more than 40 countries to tackle crises like poverty, climate change, natural disasters, and violence. Through the accelerator, it created an AI strategy tool that helps first responders predict disasters and coordinate resources. The World Institute on Disability also participated in the accelerator, creating an AI-driven resource-matching system that helps organizations allocate support to people with disabilities in hours rather than weeks.

Similarly, the International Telecommunication Union — the United Nations’ digital technology agency, and one of its oldest arms — is supporting organizations using technology to achieve the UN SDGs through its AI for Good Innovation Factory startup competition. For example, an Indian applicant — a startup called Bioniks — has enabled a teenager to reclaim the ability to do simple tasks like writing and getting dressed through the use of AI-powered prosthetics.

Challenges to consider

While AI may prove to be a powerful tool for achieving the UN SDGs, it comes with notable risks. Because AI models are largely developed by American tech giants, in an industry already marked by gender and racial inequality, unconscious bias remains a major flaw of AI systems.

To address this, Shetty said layered prompts for non-English users, human review of underlying AI datasets, and the creation of indigenous chatbots are paramount to achieving Education Above All’s goals.

AI models are also power-intensive, making them largely inaccessible to the populations of developing countries. That’s why Shetty urges AI companies to provide their solutions via less tech-heavy methods, like SMS, and to offer offline features so users can still access AI resources when their internet connections drop. Open-source, free-of-charge subscriptions can help, too, she added.

AI as a source for good

Challenges aside, Shetty is confident that AI can be a force for good over the next few years, particularly around education. She told Business Insider, “We are truly energized by how the global education community is leveraging AI in education: WhatsApp-based math tutors reaching off-grid learners; algorithms that optimize teacher deployment in shortage areas; personalized content engines that democratize education; chatbots that offer psychosocial support in crisis zones and more.”

But Shetty is clear that AI should augment, rather than displace, human educators. And she said the technology should only be used if it can solve challenges faced by humans and add genuine value.

“Simply put,” she said, “let machines handle the scale, let humans handle the soul, with or without AI tools.”






A Conference Where Platforms Couldn’t Escape the AI Hype 

I was recently invited to participate in an analyst panel at PlatformCon 25 in New York City. The conference was not huge, but it still delivered impact, with vendor booths ranging from industry giants like Google to ambitious startups. The audience was a blend of platform professionals from industries as diverse as healthcare, professional sports, and video gaming. The featured guest speaker was engineer Kelsey Hightower.

Here are my key takeaways from the event: 

Don’t Be Afraid To Look Under The Hood of AI 

Hightower kicked off the day with a compelling talk that challenged attendees to critically evaluate AI and its capabilities. He emphasized the importance of viewing AI as just another piece of technology; there's nothing mystical about it. Hightower encouraged the audience to dig into the details and not simply buy into the hype.

Hightower also touched on how the rise of technologies such as Anthropic’s Model Context Protocol has shifted corporate attitudes. For decades, companies maintained strict control over their internal resources, but now many are rushing to API-enable their entire ecosystems with little caution. He posed a thought-provoking question: “Imagine if they had done this 10 years ago—what could have been accomplished?” 

If you’re curious to learn more about AI, MCP, and their implications, refer to this blog.
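To make the MCP point a bit more concrete, here is a minimal sketch of how a team might expose a single internal function as an MCP tool, assuming the official MCP Python SDK and its FastMCP helper are available (`pip install mcp`). The server name, tool, and inventory data below are hypothetical and purely illustrative; they are not from the conference or any vendor demo.

```python
# Hypothetical sketch: exposing one internal capability as an MCP tool.
# Assumes the official MCP Python SDK is installed (pip install mcp).
from mcp.server.fastmcp import FastMCP

# Create a named MCP server that an MCP-aware AI client can connect to.
mcp = FastMCP("inventory-demo")

@mcp.tool()
def check_stock(sku: str) -> int:
    """Return the on-hand quantity for a SKU (stubbed data for illustration)."""
    fake_inventory = {"WIDGET-1": 42, "WIDGET-2": 0}
    return fake_inventory.get(sku, 0)

if __name__ == "__main__":
    # Serve over stdio so a local MCP-aware client can discover and call the tool.
    mcp.run()
```

The sketch underscores Hightower's point: there is nothing mystical here. An MCP server is ordinary code that publishes well-described functions for an AI client to call, which is exactly why rushing to API-enable an entire ecosystem deserves the same scrutiny any other integration would get.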

The Tension Between Developers, Operations, And Platform Teams Is Real

One of the liveliest discussions at the conference centered on the persistent struggles between developers, operations teams, and platform engineers. This tension was laid bare during the Developer Productivity roundtable, which I had the honor of joining alongside fellow industry analysts.

Far from a dry technical discussion, the session felt more like group therapy for platform leaders. Many attendees shared candid stories about the tug-of-war between developers seeking speed and agility, and platform engineers urging patience and structure. It’s clear that the question of whether platform engineering can fully resolve this dynamic is still open. 

Several actionable strategies emerged during the conversation: 

  • Adopt a “platform as a product” approach. Treat your platform as a product designed to serve your internal stakeholders. Read this report for more insights. 
  • Set clear expectations. When building a platform, align all stakeholders from the outset. Refer to my report for practical guidance. 
  • Define common goals based on value streams. Establish shared objectives to bridge the gap between teams. Check out this webinar for actionable advice. 

The tension may never fully disappear, but fostering collaboration and setting shared goals can help mitigate the friction. 

Final Thoughts 

PlatformCon 25 offered a unique window into the evolving world of platform engineering set against the backdrop of AI’s growing influence.  

Whether you’re a developer, an operations leader, or a platform engineer, one thing is clear: the platform landscape is shifting rapidly, and AI is playing a central role.  

Clients of Forrester that have questions on developer platforms or portals are welcome to request an inquiry or guidance session with me. 




Polimorphic Closes $18.6M Series A Led by General Catalyst to Drive Government Efficiency with AI

Insider Brief

  • Polimorphic raised $18.6M in a Series A led by General Catalyst to expand its AI-powered platform that modernizes local government services and improves resident experiences.
  • Its tools — like the AI Front Desk and CRM — help governments digitize workflows, automate tasks, reduce manual labor, and provide 24/7 multilingual support across channels.
  • Already used by cities and counties across the U.S., Polimorphic has saved over 55,000 working hours and will use the new funding to grow in key states and scale its sales and engineering teams.

PRESS RELEASE — Polimorphic, which uses AI to digitize resident services for local governments and their constituents, has announced an $18.6 million Series A led by General Catalyst, with continued backing from investors M13 and Shine. With ever-growing pressure on governments to improve efficiency, this round of funding will allow Polimorphic to amplify its support of governments with AI while making government services more human for residents.

A recent study revealed that many local governments lack the expertise and processes to leverage AI effectively, and that only about 20% of the U.S. government’s more than $90 billion in annual IT spending is devoted to modernization. In addition, local governments are handling an unprecedented volume of repetitive, manual tasks, including answering the same questions by phone and email, processing simple paper forms, and hunting down information across disconnected systems, signaling a clear need for AI assistance. At the same time, residents expect a private sector-like digital experience.

Using Polimorphic’s AI Front Desk, Constituent Relationship Manager, and Analytics, governments can modernize how they serve by providing access to services online, 24/7, and in more than 75 languages, while improving efficiency for government teams.

“With this funding, we’re accelerating our mission to be the AI company for government efficiency, making public service easier, faster, and more human for everyone,” said CEO and Co-founder Parth Shah. “Local governments are the front line of democracy, but they’ve been left behind by decades of underinvestment in technology. We’re here to change that. Our tools help staff serve residents more efficiently and build trust, reduce burnout, and unlock capacity for real community impact. This moment isn’t just about growth, it’s about building a future where every resident can get the help they need, and every public servant has the support they deserve.”

To date, Polimorphic customers have reduced voicemails by up to 90%, experienced a 75% reduction in walk-in requests, and collected more than $10 million in online payments, saving more than 55,000 working hours — or 26 years of work — combined. Polimorphic customers include cities, counties, state agencies, and special districts from across the country, including the City of Pacifica, CA; Tooele County, UT; Polk County, NC; and the Town of Palm Beach, FL. This new round of funding will accelerate growth in Polimorphic’s top states, including Wisconsin, New Jersey, North Carolina, Texas, Florida, and California.

“Polimorphic exemplifies what a true partnership should be: personable, professional, and deeply invested in shared success,” said Jess Savidge, Administrative and Communications Manager for the Town of Palm Beach, FL. “In a community like the Town of Palm Beach, where expectations are exceptionally high, their team has exceeded every standard through innovation, responsiveness, and a commitment to excellence. Thanks to their innovative platform and collaborative approach, we’ve continuously enhanced customer service and gained valuable insights to improve our digital presence. We look forward to continued collaborations to continue the delivery of world-class government services with the precision and quality of a top-tier business.”

The company’s newest round of funding will drive unmatched features in GovTech, including its AI Front Desk, a full-service constituent platform with a voice line, chatbot, search, SMS, and email, as well as powerful GIS-based resident support, agentic AI application reviews, advanced analytics, and additional innovative AI features.

“Polimorphic has the potential to become the next modern system of record for local and state government. Historically, it’s been difficult to drive adoption of these foundational platforms beyond traditional ERP and accounting in the public sector,” said Sreyas Misra, Partner at General Catalyst. “AI is the jet fuel that accelerates this adoption. Parth and the team are making it possible for local and state governments to automate highly complex workflows from end to end, something that’s been out of reach until now.”

“Government inefficiency creates billions of dollars in waste, a problem Polimorphic’s solutions are built to solve,” said M13 General Partner Latif Peracha. “By digitizing how residents and cities interact, they are removing that wasted time and money from the system.”

In addition to innovative AI product features, the funding will allow Polimorphic to triple the size of its sales and engineering teams, driving its mission to create solutions that let governments of all sizes deliver for the people.

About Polimorphic

Polimorphic uses artificial intelligence (AI) to help local governments better serve their communities. Polimorphic’s AI Front Desk, Constituent Relationship Manager, and Dashboard & Analytics empower service-first governments to provide residents with high-quality, accessible communication and engagement. Serving hundreds of public sector departments across the country, Polimorphic is built for the unique needs of government, including cities, counties, and state agencies. Polimorphic is backed by world-class investors, including General Catalyst, M13, and Shine. Learn more or request a demo at polimorphic.com.

About General Catalyst

General Catalyst is a global investment and transformation company that partners with the world’s most ambitious entrepreneurs to drive resilience and applied AI.

We support founders with a long-term view who challenge the status quo, partnering with them from seed to growth stage and beyond.

With offices in San Francisco, New York City, Boston, Berlin, Bangalore, and London, we have supported the growth of 800+ businesses, including Airbnb, Anduril, Applied Intuition, Commure, Glean, Guild, Gusto, Helsing, Hubspot, Kayak, Livongo, Mistral, Ramp, Samsara, Snap, Stripe, Sword, and Zepto.

For more: www.generalcatalyst.com, @generalcatalyst

Contacts

Megan Olson, Director of Marketing, Polimorphic
[email protected]
414–477–8846


Georgia courts deliberate over how to incorporate AI into the justice system

Published

on


The Georgia Supreme Court is taking proactive steps to manage the growing influence of artificial intelligence within the state’s judicial system. In response to concerns about misuse, the court has announced a comprehensive three-year plan to guide the integration of AI technologies.

It started harmlessly enough, with kids using artificial intelligence to cheat on their writing assignments, but the technology has become a palpable threat to society as lawyers and others in the justice system have conducted novel experiments with it and even clearly misused it.

In the five years since OpenAI unleashed GPT-3 on the public, people have found creative and sometimes unwise uses for the technology, including attorneys who harnessed it to write briefs with fake citations.

Recognizing the risk, the Georgia Supreme Court began a 10-month review in August and released new recommendations on Thursday. The state’s high court proposes a three-year process to adapt to AI.

It will start with establishing leadership and governance and conclude with new policies and processes for all the courts in Georgia’s judicial system. There will be community engagement, process reviews, education and training, and the establishment of business and technology architectures along the way.

The committee behind the new report, “Artificial Intelligence and Georgia’s Courts,” was led by Justice Andrew A. Pinson. The report incorporates observations from the State Bar of Georgia’s Board of Governors, which produced its own report on the risks of AI in early June.

The bar’s report said revisions to a rule of conduct for lawyers were “particularly critical” because the rule addresses lawyers’ competence and proficiency with technology.

“It is the committee’s assessment GenAI tools will in short order become ubiquitous,” the bar report’s authors wrote.

Pinson’s committee cited numerous examples of AI use that occurred just during the 10 months of its review, such as the Indiana Supreme Court’s introduction of AI for voice-to-text transcription, the Arizona Supreme Court’s use of AI avatars to deliver news about its justices’ rulings, and a family’s use of AI to create a victim impact statement from their dead relative during the sentencing phase of a trial over his death in a road-rage incident.

“A key challenge the committee faced during its work is the rapidly evolving nature of a technology new to courts and organizations across the country,” Pinson’s committee concluded.

The panel noted acceptable uses for AI, such as research and scheduling; unacceptable uses, such as jury selection and “black box” sentencing algorithms; and potential uses that need more study and testing, such as language translation and sentencing and risk assessments.


