Tools & Platforms
How Nonprofits Can Help Shape AI Governance – Non Profit News

Because artificial intelligence is being developed largely within the private, for-profit sector and with little regulation, its governance—the values, norms, policies, and safeguards that comprise industry standards—has been left in the hands of a relative few whose decisions have the potential to impact the lives of many.
And if this leadership lacks representation from the communities affected by automated decision-making, particularly marginalized communities, then the technology could make inequity worse, not better.
So say various legal experts, executives, and nonprofit leaders who spoke with NPQ about the future of “AI governance” and the critical role nonprofits and advocacy groups can and must play to ensure AI reflects equity, not exclusion.
A Lack of Oversight
The potential for AI to influence or even change society, in ways anticipated and not, is increasingly clear to scholars. Yet these technologies are being developed in much the same way as conventional software platforms—rather than as powerful, potentially dangerous technologies that require serious, considered governance and oversight.
Several experts who spoke to NPQ didn’t mince words about the lack of such governance and oversight in AI.
“There is no AI governance standard or law at the US federal government level,” said Jeff Le, managing principal at 100 Mile Strategies and a fellow at George Mason University’s National Security Institute. He is also a former deputy cabinet secretary for the State of California, where he led the cyber, AI, and emerging tech portfolios, among others.
While Le cited a few state laws, including the Colorado Artificial Intelligence Act and the Texas Data Privacy and Security Act, he noted that there are currently few consumer protections or privacy safeguards in place to prevent the misuse of personal data by large language models (LLMs).
Le also pointed to recent survey findings showing public support for more governance in AI, stating, “Constituents are deeply concerned about AI, including privacy, data, workforce, and society cohesion concerns.”
Research has revealed a stark contrast between AI experts and the general public. While only 15 percent of experts believe AI could harm them personally, nearly three times as many US adults (43 percent) say they expect to be negatively affected by the technology.
Le and other experts believe nonprofits and community groups play a critical role in the path forward, but organizations leading the charge must focus on community value and education of the public.
Profit Motives Versus Public Good
The speed at which AI capabilities are being developed, and the fact that the technology is being built mostly in the private sector with little regulation, have left public oversight and considerations like equity, accountability, and representation far behind, notes Ana Patricia Muñoz, executive director of the International Budget Partnership, a leading nonprofit organization promoting more equitable management of public money.
The people most affected by these technologies, particularly those in historically marginalized communities, have little to no say in how AI tools are designed, governed, and deployed.
“Advancements are being driven by profit motives rather than a vision for public good,” said Muñoz. “That is why AI needs to be treated like a public good with public investment and public accountability baked in from the moment an AI tool is designed through to its implementation.”
The lack of broader representation in the AI field, combined with a lack of oversight and outside input, has helped create a yawning “equity gap” in AI technologies, according to Beck Spears, vice president of philanthropy and impact partnerships for Rewriting the Code, the largest network of women in tech. Spears pointed to the lack of representation in AI decision-making.
“One of the most persistent equity gaps is the lack of diverse representation across decision-making stages,” Spears told NPQ. “Most AI governance frameworks emerge from corporate or academic institutions, with limited involvement from nonprofits or community-based stakeholders.”
Complicating this problem is the fact that most commercial AI models are developed behind closed doors: “Many systems are built using proprietary datasets and ‘black-box’ algorithms that make it difficult to audit or identify discriminatory outcomes,” noted Spears.
Solving these equity gaps requires, among other things, much broader representation within AI development, says Joanna Smykowski, a licensed attorney and legal tech expert.
Much of AI leadership today “comes from a narrow slice of the population. It’s technical, corporate, and often disconnected from the people living with the consequences,” Smykowski told NPQ.
“That’s the equity gap.…Not just who builds the tools, but who gets to decide how they’re used, what problems they’re meant to solve, and what tradeoffs are acceptable,” Smykowski said.
Smykowski’s experience in disability and family law informs her analysis as to how automated systems fail the communities they were built to serve: “The damage isn’t abstract. It’s personal. People lose access to benefits. Parents lose time with their kids. Small errors become permanent outcomes.”
Jasmine Charbonier, a fractional chief marketing officer and growth strategist, told NPQ that the disconnect between technology and impacted communities is still ubiquitous. “[Recently], I consulted with a social services org where their clients—mostly low-income families—were being negatively impacted by automated benefit eligibility systems. The thing is none of these families had any say in how these systems were designed.”
How Nonprofits Can Take the Lead
Nonprofits can and already do play important roles in providing oversight, demanding accountability, and acting as industry watchdogs.
For example, the coalition EyesOnOpenAI—made up of more than 60 philanthropic, labor, and nonprofit organizations—recently urged the California attorney general to put a stop to OpenAI’s transition to a for-profit model, citing concerns about the misuse of nonprofit assets and calling for stronger public oversight. This tactic underscores how nonprofits can step in to demand accountability from AI leaders.
Internally, before implementing an AI tool, nonprofits need to have a plan for assessing whether it truly supports their mission and the communities they serve.
“We map out exactly how the tool impacts our community members,” said Charbonier, addressing how her team assesses AI tools they might use. “For instance, when evaluating an AI-powered rental screening tool, we found it disproportionately flagged our Black [and] Hispanic clients as ‘high risk’ based on biased historical data. So, we rejected it.”
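The kind of mapping Charbonier describes can take the form of a simple disparate-impact check. The sketch below, in Python, is a minimal illustration using made-up, hypothetical screening results rather than any real vendor’s data; it computes the flag rate per demographic group and the ratio between the lowest and highest rates.

# Minimal sketch of a disparate-impact check a nonprofit might run before
# adopting a screening tool. The records below are hypothetical placeholders;
# a real review would use the tool's actual outputs for the community served.
from collections import defaultdict

# Each record: (demographic group, whether the tool flagged the person as "high risk")
screening_results = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", True), ("group_b", False), ("group_b", False),
]

flagged = defaultdict(int)
totals = defaultdict(int)
for group, was_flagged in screening_results:
    totals[group] += 1
    flagged[group] += int(was_flagged)

# Per-group flag rates
rates = {group: flagged[group] / totals[group] for group in totals}
for group, rate in rates.items():
    print(f"{group}: flagged {rate:.0%} of applicants")

# A common rule of thumb (the "four-fifths rule") treats a ratio below 0.8
# between the lowest and highest group rates as a disparity worth investigating.
ratio = min(rates.values()) / max(rates.values())
print(f"Disparate impact ratio: {ratio:.2f}" + (" (below 0.8 threshold)" if ratio < 0.8 else ""))

Under that rule of thumb, a ratio well below 0.8 would prompt the kind of rejection Charbonier describes, or at least a demand that the vendor explain and correct the disparity.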
Charbonier also stressed the importance of a vendor’s track record: “I’ve found that demanding transparency about [the company’s] development process [and] testing methods reveals a lot about their true commitment to equity.”
This exemplifies how nonprofits can use their purchasing power to put pressure on companies. “We required tech vendors to share demographic data on their AI teams and oversight boards,” Charbonier noted. “We made it clear that contracts depended on meeting specific diversity targets.”
Ahmed Whitt, the director of the Center for Wealth Equity (CWE) at the philanthropic and financial collaborative Living Cities, focused on evaluating the practical safeguards: “[Nonprofits] should demand vendors disclose model architectures and decision logic and co-create protections for internal data.” This, he explains, is how nonprofits can establish a shared responsibility and deeper engagement with AI tools.
Beyond evaluation, nonprofits can push for systemic change in how AI tools are developed. According to Muñoz, this includes a push for public accountability, as EyesOnOpenAI is spearheading: “Civil society brings what markets and governments often miss—values, context, and lived realities.”
For real change to occur, nonprofits can’t be limited to token advisory roles, according to Smykowski. “Hiring has to be deliberate, and those seats need to be paid,” she told NPQ. “Decision-making power doesn’t come from being ‘consulted.’ It comes from being in the room with a vote and a budget.”
Some experts advocate for community- and user-led audits once AI tools are deployed. Spears pointed out that user feedback can uncover issues missed in technical reviews, especially feedback from non-native English speakers and marginalized populations, which can highlight “algorithmic harm affecting historically underserved populations.” Charbonier said her team pays community members to conduct impact reviews, which revealed that a chatbot they were testing used confusing and offensive language for Spanish-speaking users.
William K. Holland, a trial attorney with more than 30 years of experience in civil litigation, told NPQ that audits must have consequences to be effective: “Community-informed audits sound great in theory but only work if they have enforcement teeth.” He argues that nonprofits can advocate for stronger laws, such as mandatory impact assessments, penalties for noncompliance, and binding consequences for bias.
Nonprofits should also work at the state and local levels, where meaningful change can happen faster. For instance, Charbonier said her team helped push for “algorithmic accountability” legislation in Florida by presenting examples of AI bias in their community. (The act did not pass; meanwhile, similar measures have been proposed, though not passed, at the federal level).
Beyond legislative lobbying, experts cite public pressure as a way to hold companies and public institutions accountable in AI development and deployment. “Requests for transparency, such as publishing datasets and model logic, create pressure for responsible practice,” Spears said.
Charbonier agreed: “We regularly publish equity scorecards rating different AI systems’ impacts on marginalized communities. The media coverage often motivates companies to make changes.”
Looking Ahead: Risks and Decision-Making Powers
As AI tech continues to evolve at breakneck speed, addressing the equity gap in AI governance is urgent.
The danger is not just inequity, but invisibility. As Holland said, “If nonprofits don’t step in, the risk isn’t just that AI systems will become more inequitable—it’s that these inequities will be automated, normalized, and made invisible.”
For Charbonier, the stakes are already high. “Without nonprofit advocacy, I’ve watched AI systems amplify existing inequities in housing, healthcare, education, [and] criminal justice….Someone needs to represent community interests [and] push for equity.”
She noted that this stance isn’t about being anti-technology: “It’s about asking who benefits and who pays the price. Nonprofits are in a unique position to advocate for the people most likely to be overlooked.”
Tools & Platforms
Can technology bridge development gaps?

Artificial intelligence promises to revolutionize economies worldwide, but whether developing nations will benefit or fall further behind depends on choices being made today.
The African Union’s historic Continental AI Strategy, adopted in July 2024, represents both unprecedented ambition and stark reality – while AI could add $1.5 trillion to Africa’s GDP by 2030, the continent currently captures just one per cent of global AI compute capacity despite housing 15 per cent of the world’s population.
This paradox defines the central challenge facing underdeveloped countries, particularly across Africa and South America, as they navigate the AI revolution. With global AI investment reaching $100-130 billion annually while African AI startups have raised only $803 million over five years, the question isn’t whether AI matters for development – it’s whether these regions can harness its transformative potential before the window closes.
The stakes couldn’t be higher. The same mobile revolution that enabled Kenya’s M-Pesa to serve millions without traditional banking infrastructure now offers a template for AI leapfrogging. But unlike mobile phones, AI requires massive computational resources, reliable electricity, and specialized skills that remain scarce across much of the Global South.
Africa awakens to AI’s strategic importance
The momentum building across Africa challenges assumptions about AI relevance in developing contexts. Sixteen African countries now have national AI strategies or policies in development, with Kenya launching its comprehensive 2025-2030 strategy in March and Zambia following suit in November 2024. This represents a 33 per cent increase in strategic planning over just two years, signaling that African leaders view AI not as a luxury but as essential infrastructure.
The African Union’s Continental AI Strategy stands as the world’s most comprehensive development-focused AI framework, projecting that optimal AI adoption could contribute six per cent of the continent’s GDP by 2030. Unlike Western approaches emphasizing innovation for innovation’s sake, Africa’s strategy explicitly prioritizes agriculture, healthcare, education, and climate adaptation – sectors critical to the continent’s 1.3 billion people.
“We’re not trying to copy Silicon Valley,” explains one senior AU official involved in the strategy’s development. “We’re building AI that serves African priorities.” This Africa-centric approach emerges from harsh lessons learned during previous technology waves, when developing countries often became consumers rather than creators of digital solutions.
South America charts cooperative course
Latin America has taken a markedly different but equally strategic approach, leveraging existing regional integration mechanisms to coordinate AI development. The Santiago Declaration, signed by over 20 countries in October 2023, established the Regional Council on Artificial Intelligence, with Chile emerging as the continental leader.
Chile ranks first in the 2024 Latin American Artificial Intelligence Index (ILIA), followed by Brazil and Uruguay as pioneer countries. This leadership reflects substantial investments – Chile committed $26 billion in public investment for its 2021-2030 National AI Policy, while Brazil’s 2024-2028 AI Plan allocates $4.1 billion across 74 strategic actions.
Brazil’s approach particularly demonstrates how developing countries can mobilize resources for AI transformation. The planned Santos Dumont supercomputer aims to become one of the world’s five most powerful, while six Applied Centers for AI focus on agriculture, healthcare, and Industry 4.0 applications. This represents a fundamental shift from viewing AI as imported technology to building indigenous capabilities.
Agriculture proves AI’s development relevance
Critics questioning AI’s relevance to underdeveloped economies need look no further than Hello Tractor’s transformative impact across African agriculture. This Nigerian-founded ‘Uber for tractors’ platform uses AI for demand forecasting and fleet optimization, serving over 2 million smallholder farmers across more than 20 countries. The results are striking: farmers increase incomes by 227 per cent, plant 40 times faster, and achieve three-fold yield improvements through precision timing.
Apollo Agriculture in Kenya and Zambia demonstrates how AI can address financial inclusion challenges that have plagued agricultural development for decades. Using machine learning for credit scoring and satellite data for precision recommendations, the company serves over 350,000 previously unbanked farmers with non-performing loan rates below 2 per cent – outperforming traditional banks while serving supposedly high-risk populations.
These aren’t pilot projects or development experiments. They’re profitable businesses solving real problems with measurable impact.
Investment patterns reveal global disparities
The funding landscape starkly illustrates the development challenges facing AI adoption. Global AI investment reached $100-130 billion annually, while African AI startups raised a total of $803 million over five years. Latin American venture capital investment fell to $3.6 billion in 2024, the lowest in five years, with early-stage rounds making up 80 per cent of deals.
This investment concentration perpetuates technological dependence. The United States and China hold 60 per cent of all AI patents and produce one-third of global AI publications. Just 100 companies, mainly from these two countries, account for 40 per cent of global AI R&D spending, while 118 countries – mostly from the Global South – remain absent from major AI governance discussions.
Risks of digital colonialism loom large
However, current trends suggest widening rather than narrowing divides. Tech giants Apple, Nvidia, and Microsoft have achieved $3 trillion market values that rival the GDP of the entire African continent. This concentration of AI capabilities in a handful of corporations based in wealthy countries creates dependency relationships reminiscent of colonial-era resource extraction.
Digital colonialism emerges when developing countries become consumers rather than producers of AI systems. Most AI training occurs on Western datasets, creating cultural and linguistic biases that poorly serve non-Western populations. In diverse countries like Brazil, for instance, image searches for babies return predominantly white faces, reflecting training data biases.
Toward inclusive AI futures
The path forward requires acknowledging both AI’s transformative potential and persistent barriers to equitable adoption. Infrastructure limitations, skills gaps, and funding disparities create formidable challenges, but successful implementations across agriculture and healthcare demonstrate achievable progress.
Regional cooperation frameworks like the African Union’s Continental AI Strategy and Latin America’s Santiago Declaration offer models for coordinated development that can compete with the concentrated wealth and expertise of traditional tech centers. These approaches emphasize development priorities rather than pure technological advancement, potentially creating more inclusive AI ecosystems.
The mobile revolution precedent suggests optimism about leapfrogging possibilities, but success requires sustained political commitment, adequate funding, and international cooperation. Countries that invest strategically in AI foundations while fostering indigenous innovation can position themselves to benefit from rather than be left behind by the AI transformation.
The global AI divide represents both the greatest risk and greatest opportunity facing international development in the 21st century. Whether AI bridges or widens global inequalities depends on choices being made today by governments, international organizations, and private sector actors. The stakes – measured in trillions of dollars of economic value and billions of lives affected – demand urgent, coordinated action to ensure AI serves human development rather than merely technological advancement.
The African farmer using Hello Tractor’s AI platform to improve crop yields and the Brazilian patient receiving AI-enhanced diagnostic services demonstrate AI’s development relevance. Whether such success stories become widespread or remain isolated examples depends on the policy foundations being laid across developing countries today. The AI revolution waits for no one – but its benefits need not be predetermined by geography or existing wealth. The window for inclusive AI development remains open, but it will not stay open forever.
(Krishna Kumar is a Technology Explorer & Strategist based in Austin, Texas, USA. Rakshitha Reddy is an AI Engineer based in Atlanta, Georgia, USA)
Tools & Platforms
Meta CFO says Superintelligence AI Lab is already working on next model

Facebook-parent Meta’s Chief Financial Officer, Susan Li, has confirmed the existence of the company’s new research unit, TBD Lab. The unit, which Li says is composed of “a few dozen” researchers and engineers, is focused on developing the social media giant’s next-generation foundation models. According to a report by the news agency Reuters, Li told investors at the Goldman Sachs Communacopia + Technology conference that the name TBD, which stands for “to be determined,” was a placeholder that “stuck” because the team’s work is still taking shape.

What Susan Li said about Meta’s TBD AI lab

At the conference (as reported by Reuters), Li said: “We conceive of it as sort of a pretty small, few-dozen-people, very talent-dense set of folks.”

The TBD Lab is part of a larger reorganisation of Meta’s AI efforts under the umbrella of Meta Superintelligence Labs. The team’s goal is to push the boundaries of AI over the next one to two years, positioning Meta to compete more effectively with other major players in the AI race.

Reuters cited another report from last month to claim that Meta has split its Superintelligence Labs into four groups: a “TBD” lab (still defining its role), a products team (including the Meta AI assistant), an infrastructure team, and the long-term research-focused FAIR lab.

Earlier this year, Meta reorganised its AI division under Superintelligence Labs after senior staff left and its Llama 4 model received mixed feedback. The company’s CEO, Mark Zuckerberg, has lately been personally leading aggressive hiring efforts, offering oversized pay packages and directly reaching out to talent on WhatsApp. In July, he said the new setup brings together foundations, products, and FAIR teams, along with a fresh lab focused on building the next generation of AI models.