How Nonprofits Can Help Shape AI Governance – Non Profit News

Because artificial intelligence is being developed largely within the private, for-profit sector and with little regulation, its governance (the values, norms, policies, and safeguards that comprise industry standards) has been left in the hands of a relative few whose decisions have the potential to impact the lives of many.
And if this leadership lacks representation from the communities affected by automated decision-making, particularly marginalized communities, then the technology could be making the issue of inequity worse, not better.
So say various legal experts, executives, and nonprofit leaders who spoke with NPQ about the future of “AI governance” and the critical role nonprofits and advocacy groups can and must play to ensure AI reflects equity, and not exclusion.
A Lack of Oversight
The potential for AI to influence or even change society, in ways anticipated and not, is increasingly clear to scholars. Yet these technologies are being developed in much the same way as conventional software platforms, rather than as powerful, potentially dangerous technologies that require serious, considered governance and oversight.
Several experts who spoke to NPQ didn’t mince words about the lack of such governance and oversight in AI.
“There is no AI governance standard or law at the US federal government level,” said Jeff Le, managing principal at 100 Mile Strategies and a fellow at George Mason University’s National Security Institute. He is also a former deputy cabinet secretary for the State of California, where he led the cyber, AI, and emerging tech portfolios, among others.
While Le cited a few state laws, including the Colorado Artificial Intelligence Act and the Texas Data Privacy and Security Act, he noted that there are currently few consumer protections or privacy safeguards in place to prevent the misuse of personal data by large language models (LLMs).
Le also pointed to recent survey findings showing public support for more governance in AI, stating, “Constituents are deeply concerned about AI, including privacy, data, workforce, and society cohesion concerns.”
Research has revealed a stark contrast between AI experts and the general public. While only 15 percent of experts believe AI could harm them personally, nearly three times as many US adults (43 percent) say they expect to be negatively affected by the technology.
Le and other experts believe nonprofits and community groups play a critical role in the path forward, but organizations leading the charge must focus on community value and education of the public.
Profit Motives Versus Public Good
The speed at which AI capabilities are being developed, and the fact that development is happening mostly in the private sector with little regulation, have left public oversight and considerations like equity, accountability, and representation far behind, notes Ana Patricia Muñoz, executive director of the International Budget Partnership, a leading nonprofit organization promoting more equitable management of public money.
The people most affected by these technologies, particularly those in historically marginalized communities, have little to no say in how AI tools are designed, governed, and deployed.
“Advancements are being driven by profit motives rather than a vision for public good,” said Muñoz. “That is why AI needs to be treated like a public good with public investment and public accountability baked in from the moment an AI tool is designed through to its implementation.”
The lack of broader representation in the AI field, combined with a lack of oversight and outside input, has helped create a yawning “equity gap” in AI technologies, according to Beck Spears, vice president of philanthropy and impact partnerships for Rewriting the Code, the largest network of women in tech. Spears pointed to the lack of representation in AI decision-making.
“One of the most persistent equity gaps is the lack of diverse representation across decision-making stages,” Spears told NPQ. “Most AI governance frameworks emerge from corporate or academic institutions, with limited involvement from nonprofits or community-based stakeholders.”
Complicating this problem is the fact that most commercial AI models are developed behind closed doors: “Many systems are built using proprietary datasets and ‘black-box’ algorithms that make it difficult to audit or identify discriminatory outcomes,” noted Spears.
Solving these equity gaps requires, among other things, much broader representation within AI development, says Joanna Smykowski, a licensed attorney and legal tech expert.
Much of AI leadership today “comes from a narrow slice of the population. It’s technical, corporate, and often disconnected from the people living with the consequences,” Smykowski told NPQ.
“That’s the equity gap…. Not just who builds the tools, but who gets to decide how they’re used, what problems they’re meant to solve, and what tradeoffs are acceptable,” Smykowski said.
Smykowski’s experience in disability and family law informs her analysis as to how automated systems fail the communities they were built to serve: “The damage isn’t abstract. It’s personal. People lose access to benefits. Parents lose time with their kids. Small errors become permanent outcomes.”
Jasmine Charbonier, a fractional chief marketing officer and growth strategist, told NPQ that the disconnect between technology and impacted communities is still ubiquitous. “[Recently], I consulted with a social services org where their clients—mostly low-income families—were being negatively impacted by automated benefit eligibility systems. The thing is none of these families had any say in how these systems were designed.”
How Nonprofits Can Take the Lead
Nonprofits can and already do play important roles in providing oversight, demanding accountability, and acting as industry watchdogs.
For example, the coalition EyesOnOpenAI—made up of more than 60 philanthropic, labor, and nonprofit organizations—recently urged the California attorney general to put a stop to OpenAI’s transition to a for-profit model, citing concerns about the misuse of nonprofit assets and calling for stronger public oversight. This tactic underscores how nonprofits can step in to demand accountability from AI leaders.
Internally, before implementing an AI tool, nonprofits need to have a plan for assessing whether it truly supports their mission and the communities they serve.
“We map out exactly how the tool impacts our community members,” said Charbonier, addressing how her team assesses AI tools they might use. “For instance, when evaluating an AI-powered rental screening tool, we found it disproportionately flagged our Black [and] Hispanic clients as ‘high risk’ based on biased historical data. So, we rejected it.”
Charbonier also stressed the importance of a vendor’s track record: “I’ve found that demanding transparency about [the company’s] development process [and] testing methods reveals a lot about their true commitment to equity.”
This exemplifies how nonprofits can use their purchasing power to put pressure on companies. “We required tech vendors to share demographic data on their AI teams and oversight boards,” Charbonier noted. “We made it clear that contracts depended on meeting specific diversity targets.”
Ahmed Whitt, the director of the Center for Wealth Equity (CWE) at the philanthropic and financial collaborative Living Cities, focused on evaluating the practical safeguards: “[Nonprofits] should demand vendors disclose model architectures and decision logic and co-create protections for internal data.” This, he explains, is how nonprofits can establish a shared responsibility and deeper engagement with AI tools.
Beyond evaluation, nonprofits can push for systemic change in how AI tools are developed. According to Muñoz, this includes a push for public accountability, as EyesOnOpenAI is spearheading: “Civil society brings what markets and governments often miss—values, context, and lived realities.”
For real change to occur, nonprofits can’t be limited to token advisory roles, according to Smykowski. “Hiring has to be deliberate, and those seats need to be paid,” she told NPQ. “Decision-making power doesn’t come from being ‘consulted.’ It comes from being in the room with a vote and a budget.”
Some experts advocate for community- and user-led audits once AI tools are deployed. Spears pointed out that user feedback can uncover issues missed in technical reviews, especially from non-native English speakers and marginalized populations. Feedback can highlight “algorithmic harm affecting historically underserved populations.” Charbonier says her team pays community members to conduct impact reviews, which revealed that a chatbot they were testing used confusing and offensive language for Spanish-speaking users.
William K. Holland, a trial attorney with more than 30 years of experience in civil litigation, told NPQ that audits must have consequences to be effective: “Community-informed audits sound great in theory but only work if they have enforcement teeth.” He argues that nonprofits can advocate for stronger laws, such as mandatory impact assessments, penalties for noncompliance, and binding consequences for bias.
Nonprofits should also work at the state and local levels, where meaningful change can happen faster. For instance, Charbonier said her team helped push for “algorithmic accountability” legislation in Florida by presenting examples of AI bias in their community. (The act did not pass; similar measures have been proposed, though not passed, at the federal level.)
Beyond legislative lobbying, experts cite public pressure as a way to hold companies and public institutions accountable in AI development and deployment. “Requests for transparency, such as publishing datasets and model logic, create pressure for responsible practice,” Spears said.
Charbonier agreed: “We regularly publish equity scorecards rating different AI systems’ impacts on marginalized communities. The media coverage often motivates companies to make changes.”
Looking Ahead: Risks and Decision-Making Powers
As AI tech continues to evolve at breakneck speed, addressing the equity gap in AI governance is urgent.
The danger is not just inequity, but invisibility. As Holland said, “If nonprofits don’t step in, the risk isn’t just that AI systems will become more inequitable—it’s that these inequities will be automated, normalized, and made invisible.”
For Charbonier, the stakes are already high. “Without nonprofit advocacy, I’ve watched AI systems amplify existing inequities in housing, healthcare, education, [and] criminal justice….Someone needs to represent community interests [and] push for equity.”
She noted that this stance isn’t about being anti-technology: “It’s about asking who benefits and who pays the price. Nonprofits are in a unique position to advocate for the people most likely to be overlooked.”
OpenAI Backs AI-Animated Film for 2026 Cannes Festival

OpenAI is backing the production of the first film largely animated with AI tools, set to premiere at the 2026 Cannes Film Festival. The tech company aims to prove its AI technology can revolutionize Hollywood filmmaking with faster production timelines and significantly lower costs.
The movie, titled “Critterz,” is about woodland creatures that go on an adventure after their village is damaged by a stranger. The film’s producers are aiming for a global theatrical release after the Cannes premiere.
The project has a budget of less than US$30 million and a production timeline of nine months. That is a significant difference: most mainstream animated movies have budgets in the range of US$100 million to US$200 million and development and production cycles of around three years.
OpenAI-backed ‘Critterz’ set for release at the Cannes Film Festival
Chad Nelson, a creative specialist at OpenAI, originally began developing Critterz as a short film three years ago, using the company’s DALL-E image generation tool to develop the concept. Nelson has now partnered with the London-based Vertigo Films and studio Native Foreign in Los Angeles to expand the project into a feature film.
In the news release that announced OpenAI’s backing of the film, Nelson said: “OpenAI can say what its tools do all day long, but it’s much more impactful if someone does it,” adding, “That’s a much better case study than me building a demo.” Crucially, however, the film’s production will not be entirely AI-generated, as it will blend AI technology with human work.
Human artists will draw sketches that will be fed into OpenAI’s tools, such as GPT-5, the large language model (LLM) on which ChatGPT is built, as well as image-generating AI models. Human actors will voice the characters.
Critterz has some of the writing team behind the smash hit ‘Paddington in Peru’
The film has some of the writing team behind the hit Paddington in Peru, but it arrives at a time of intense legal fights between Hollywood studios and AI and other tech companies over intellectual property rights.
Studios such as Disney, Universal, and Warner Bros. have filed copyright infringement suits against Midjourney, another AI firm, alleging that it illegally used their characters to train its image-generation engine. Critterz will be funded by Vertigo’s Paris-based parent company, Federation Studios, with some 30 contributors set to share profits.
Critterz will not, however, be the first feature film made with generative AI. Last year, “DreadClub: Vampire’s Verdict” was released and is widely considered to be the first feature film made entirely with generative AI. It had a budget of US$405.
AI Lies Because It’s Telling You What It Thinks You Want to Hear

Generative AI is popular for a variety of reasons, but with that popularity comes a serious problem. These chatbots often deliver incorrect information to people looking for answers. Why does this happen? It comes down to telling people what they want to hear.
While many generative AI tools and chatbots have mastered sounding convincing and all-knowing, new research conducted by Princeton University shows that the people-pleasing nature of AI comes at a steep price. As these systems become more popular, they become more indifferent to the truth.
AI models, like people, respond to incentives. Compare the problem of large language models producing inaccurate information to that of doctors being more likely to prescribe addictive painkillers when they’re evaluated based on how well they manage patients’ pain. An incentive to solve one problem (pain) led to another problem (overprescribing).
In the past few months, we’ve seen how AI can be biased and even cause psychosis. There was a lot of talk about AI “sycophancy,” when an AI chatbot is quick to flatter or agree with you, with OpenAI’s GPT-4o model. But this particular phenomenon, which the researchers call “machine bullshit,” is different.
“[N]either hallucination nor sycophancy fully capture the broad range of systematic untruthful behaviors commonly exhibited by LLMs,” the Princeton study reads. “For instance, outputs employing partial truths or ambiguous language — such as the paltering and weasel-word examples — represent neither hallucination nor sycophancy but closely align with the concept of bullshit.”
How machines learn to lie
To get a sense of how AI language models become crowd pleasers, we must understand how large language models are trained.
There are three phases of training LLMs:
- Pretraining, in which models learn from massive amounts of data collected from the internet, books or other sources.
- Instruction fine-tuning, in which models are taught to respond to instructions or prompts.
- Reinforcement learning from human feedback, in which they’re refined to produce responses closer to what people want or like.
The Princeton researchers found that the root of the AI misinformation tendency is the reinforcement learning from human feedback, or RLHF, phase. In the initial stages, AI models are simply learning to predict statistically likely text chains from massive datasets. But then they’re fine-tuned to maximize user satisfaction, which means these models are essentially learning to generate responses that earn thumbs-up ratings from human evaluators.
LLMs try to appease the user, which creates a conflict: the models produce answers that people will rate highly rather than truthful, factual answers.
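To make that conflict concrete, consider a toy example. The candidate answers, approval scores, and factuality labels below are invented for illustration; this is a minimal sketch of the incentive problem, not the researchers’ actual training setup. When the selection rule rewards whatever human raters like most, the confident falsehood wins; when factual accuracy is checked first, the hedged but honest answer wins.

```python
# Toy illustration (invented data): optimizing for rater approval versus
# optimizing for truth can select different answers.

candidates = [
    {"text": "This supplement will definitely cure your headaches.",
     "rater_approval": 0.9, "factual": False},
    {"text": "The evidence is mixed; it may help some people. Check with a doctor.",
     "rater_approval": 0.6, "factual": True},
    {"text": "I don't know.",
     "rater_approval": 0.2, "factual": True},
]

# RLHF-style selection: maximize the human-approval reward alone.
chosen_by_approval = max(candidates, key=lambda c: c["rater_approval"])

# Truth-first selection: require factual accuracy, then prefer approval.
chosen_by_truth = max(candidates, key=lambda c: (c["factual"], c["rater_approval"]))

print("Approval-maximizing answer:", chosen_by_approval["text"])  # the confident falsehood
print("Truth-first answer:", chosen_by_truth["text"])             # the hedged, accurate one
```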
Vincent Conitzer, a professor of computer science at Carnegie Mellon University who was not affiliated with the study, said companies want users to continue “enjoying” this technology and its answers, but that might not always be what’s good for us.
“Historically, these systems have not been good at saying, ‘I just don’t know the answer,’ and when they don’t know the answer, they just make stuff up,” Conitzer said. “Kind of like a student on an exam that says, well, if I say I don’t know the answer, I’m certainly not getting any points for this question, so I might as well try something. The way these systems are rewarded or trained is somewhat similar.”
The Princeton team developed a “bullshit index” to measure and compare an AI model’s internal confidence in a statement with what it actually tells users. When these two measures diverge significantly, it indicates the system is making claims independent of what it actually “believes” to be true to satisfy the user.
The team’s experiments revealed that after RLHF training, the index more than doubled, rising from 0.38 to close to 1.0. Simultaneously, user satisfaction increased by 48%. The models had learned to manipulate human evaluators rather than provide accurate information. In essence, the LLMs were “bullshitting,” and people preferred it.
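One plausible way to operationalize such an index (the paper’s exact formula may differ) is to measure how weakly a model’s stated claims correlate with its internal confidence. The sketch below, with invented numbers, is an illustrative approximation rather than the Princeton team’s code: when claims track confidence the index stays near 0, and when claims are made regardless of confidence it approaches 1.

```python
import numpy as np

def bullshit_index(internal_confidence, explicit_claims):
    """Illustrative approximation: 1 minus the absolute correlation between
    the model's internal confidence that a statement is true (0-1) and the
    binary claim it actually asserts (0 or 1). Near 0 means claims track
    beliefs; near 1 means claims are made independently of beliefs."""
    corr = np.corrcoef(np.asarray(internal_confidence, dtype=float),
                       np.asarray(explicit_claims, dtype=float))[0, 1]
    return 1.0 - abs(corr)

# Invented example data.
confidence    = [0.9, 0.8, 0.3, 0.2, 0.7, 0.1]
claims_before = [1,   1,   0,   0,   1,   0]  # claims follow confidence
claims_after  = [1,   1,   1,   0,   0,   1]  # claims barely related to confidence

print(bullshit_index(confidence, claims_before))  # low: claims track confidence
print(bullshit_index(confidence, claims_after))   # high: claims nearly independent of confidence
```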
Getting AI to be honest
Jaime Fernández Fisac and his team at Princeton introduced this concept to describe how modern AI models skirt around the truth. Drawing from philosopher Harry Frankfurt’s influential essay “On Bullshit,” they use this term to distinguish this LLM behavior from honest mistakes and outright lies.
The Princeton researchers identified five distinct forms of this behavior:
- Empty rhetoric: Flowery language that adds no substance to responses.
- Weasel words: Vague qualifiers like “studies suggest” or “in some cases” that dodge firm statements.
- Paltering: Using selective true statements to mislead, such as highlighting an investment’s “strong historical returns” while omitting high risks.
- Unverified claims: Making assertions without evidence or credible support.
- Sycophancy: Insincere flattery and agreement to please.
To address the issues of truth-indifferent AI, the research team developed a new method of training, “Reinforcement Learning from Hindsight Simulation,” which evaluates AI responses based on their long-term outcomes rather than immediate satisfaction. Instead of asking, “Does this answer make the user happy right now?” the system considers, “Will following this advice actually help the user achieve their goals?”
This approach takes into account the potential future consequences of the AI advice, a tricky prediction that the researchers addressed by using additional AI models to simulate likely outcomes. Early testing showed promising results, with user satisfaction and actual utility improving when systems are trained this way.
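As a rough sketch of the idea, the difference between the two reward signals might look something like the code below. The function names and the simulator prompt are hypothetical placeholders, not the Princeton team’s implementation: the key change is that the reward comes from a second model’s estimate of the long-term outcome rather than from the user’s immediate reaction.

```python
# Hypothetical sketch of hindsight-simulation-style scoring. The names below
# (query_simulator_model, etc.) are invented placeholders, not a real API.

def query_simulator_model(prompt: str) -> float:
    """Stand-in for a second LLM that simulates the user acting on the advice
    and returns a 0-1 estimate of how well their goal is ultimately met."""
    return 0.5  # dummy value for illustration

def immediate_satisfaction_reward(user_rating: float) -> float:
    # Conventional RLHF-style signal: did the user like the answer right now?
    return user_rating

def hindsight_simulation_reward(user_goal: str, response: str) -> float:
    # Outcome-based signal: simulate the future, then ask whether following
    # the advice actually helped the user achieve their stated goal.
    prompt = (
        f"User goal: {user_goal}\n"
        f"Advice given: {response}\n"
        "Simulate the user following this advice. On a scale of 0 to 1, "
        "how well is the goal ultimately achieved?"
    )
    return query_simulator_model(prompt)

# A trainer would then optimize the model against hindsight_simulation_reward
# instead of immediate_satisfaction_reward.
```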
Conitzer said, however, that LLMs are likely to continue being flawed. Because these systems are trained by feeding them lots of text data, there’s no way to ensure that the answer they give makes sense and is accurate every time.
“It’s amazing that it works at all but it’s going to be flawed in some ways,” he said. “I don’t see any sort of definitive way that somebody in the next year or two … has this brilliant insight, and then it never gets anything wrong anymore.”
AI systems are becoming part of our daily lives, so it will be key to understand how LLMs work. How do developers balance user satisfaction with truthfulness? What other domains might face similar trade-offs between short-term approval and long-term outcomes? And as these systems become more capable of sophisticated reasoning about human psychology, how do we ensure they use those abilities responsibly?
AI: The Church’s Response to the New Technological Revolution

Artificial intelligence (AI) is transforming everyday life, the economy, and culture at an unprecedented speed. Capable of processing vast amounts of data, mimicking human reasoning, learning, and making decisions, this technology is already part of our daily lives: from recommendations on Netflix and Amazon to medical diagnoses and virtual assistants.
But its impact goes far beyond convenience or productivity. Just as with the Industrial Revolution, the digital revolution raises social, ethical, and spiritual questions. The big question is: How can we ensure that AI serves the common good without compromising human dignity?
A change of era
Pope Francis has described artificial intelligence as a true “epochal change,” and his successor, Pope Leo XIV, has emphasized both its enormous potential and its risks. There is even talk of a future encyclical entitled Rerum Digitalium, inspired by the historic Rerum Novarum of 1891, to offer moral guidance in the face of the “new things” of our time.
The Vatican insists that AI should not replace human work, but rather enhance it. It must be used prudently and wisely, always putting people at the centre. The risks of inequalities, misinformation, job losses, and military uses of this technology necessitate clear limits and global regulations.
The social doctrine of the Church and AI
The Church proposes applying the four fundamental principles of social doctrine to artificial intelligence:
- Dignity of the person: the human being should never be treated as a means, but as an end in itself.
- Common good: AI must ensure that everyone has access to its benefits, without exclusions.
- Solidarity: technological development must serve the most needy in particular.
- Subsidiarity: problems should be solved at the level closest to the people.
Added to this are the values of truth, freedom, justice, and love, which guide any technological innovation towards authentic progress.
Opportunities and risks
Artificial intelligence already offers advances in medicine, education, science, and communication. It can help combat hunger and climate change, and even help convey the Gospel more effectively. However, it also poses risks:
- Massive job losses due to automation.
- Human relationships replaced by fictitious digital links.
- Threats to privacy and security.
- Use of AI in autonomous weapons or disinformation campaigns.
Therefore, the Church emphasizes that AI is not a person: it has no soul, consciousness, or the capacity to love. It is merely a tool, powerful but always dependent on the purposes assigned to it by humans.
A call to responsibility
The Antiqua et nova (2025) document reminds us that all technological progress must contribute to human dignity and the common good. Responsibility lies not only with governments or businesses, but also with each of us, in how we use these tools in our daily lives.
Artificial intelligence can be an engine of progress, but it can never be a substitute for humankind. No machine can experience love, forgiveness, mercy, or faith. Only in God can perfect intelligence and true happiness be found.