AI Research

Beyond the Buzz: What Nonprofits Really Need from AI

By Sponsored Post on August 7, 2025

AI has captured global attention and is transforming every sector of society. But what does that transformation look like for the nonprofit sector? Many existing AI skilling efforts assume technical fluency, corporate resources, and commercial use cases – far from the realities faced by most nonprofit teams.

That’s why NetHope, with support from Microsoft, created Unlocking AI for Nonprofits, a free, CPD-certified course series built for the sector, by the sector. Instead of pushing generic tools or abstract theory, the series meets nonprofit professionals where they are, addressing real barriers like limited capacity, trust in new tools, and questions about ethical use.

The series offers four learning pathways:

  • AI Basics – An introduction for those new to AI
  • Applications of Generative AI – Practical tools for day-to-day nonprofit tasks
  • Advanced Applications: Microsoft Copilot and Beyond – For teams already exploring Copilot
  • Responsible Use of AI – Grounding in ethics, safety, and inclusion

From understanding how bias works in algorithmic decisions to exploring how AI can support donor reporting or field logistics, the content is designed with one goal: to make AI work for nonprofits.

All courses are free, self-paced, and open through August 31.

Who is this for?

Not sure where to start? Here’s a quick guide:

  • Curious about AI but don’t know where to begin? → Start with AI Basics
  • Already experimenting with ChatGPT and want to apply it at work? → Try Applications of Generative AI
  • Using Microsoft Copilot and ready to roll it out across your team? → Explore Advanced Applications
  • Concerned about ethical risks and responsible AI use? → Begin with Responsible Use of AI

What will you gain?

  • Four nonprofit-specific learning pathways
  • A short AI readiness assessment to help you plan your AI implementation
  • Real-world use cases and toolkits co-developed with nonprofit professionals
  • A CPD certificate for each course completed
  • Practical, grounded guidance — not tech hype
  • Free access through August 31, 2025

Courses are free and hosted on the Kaya platform. All you need is an internet connection and a few hours per course.


This is a Sponsored Post. Share your calls for ideas, event announcements, program successes, or new reports to our 25,000 email subscribers via Sponsored Posts and Email Footers that reach a global audience of digital development professionals.




AI Research

Will artificial intelligence fuel moral chaos or positive change?


Artificial intelligence is transforming our world at an unprecedented rate, but what does this mean for Christians, morality and human flourishing?

In this episode of “The Inside Story,” Billy Hallowell sits down with The Christian Post’s Brandon Showalter to unpack the promises and perils of AI.

From positives like Bible translation to fears over what’s to come, they explore how believers can apply a biblical worldview to emerging technology, the dangers of becoming “subjects” of machines, and why keeping Christ at the center is the only true safeguard.

Plus, learn about The Christian Post’s upcoming “AI for Humanity” event at Colorado Christian University and how you can join the conversation in person or via livestream.

“The Inside Story” takes you behind the biggest faith, culture and political headlines of the week. In 15 minutes or less, Christian Post staff writers and editors will help you navigate and understand what’s driving each story, the issues at play — and why it all matters.

Listen to more Christian podcasts today on the Edifi app — and be sure to subscribe to The Inside Story on your favorite platforms.




AI Research

BNY and Carnegie Mellon University announce five-year $10 million partnership supporting AI research

The $10 million deal will bring students, faculty, and staff together with BNY experts to advance AI applications and systems, preparing the next generation of leaders.

Known as the BNY AI Lab, the collaboration will focus on technologies and frameworks that can ensure robust governance of mission-critical AI applications.

“As AI drives productivity, unlocks growth and transforms industries, Pittsburgh has cemented its role as a global hub for innovation and talent, reinforcing Pennsylvania’s leadership in shaping the broader AI ecosystem,” comments Robin Vince, CEO at BNY. “Building on BNY’s 150-year legacy in the Commonwealth, we are proud to expand our work with Carnegie Mellon University to help attract world-class talent and pioneer AI research with an impact far beyond the region.”

A dedicated space for the collaboration will be created at the University’s Pittsburgh campus during the 2025-26 academic year.

“AI has emerged as one of the single most important intellectual developments of our time, and it is rapidly expanding into every sector of our economy,” adds Farnam Jahanian, President of Carnegie Mellon. “Carnegie Mellon University is thrilled to collaborate with BNY – a global financial services powerhouse – to responsibly develop and scale emerging AI technologies and democratize their impact for the benefit of industry and society at large.” 

The ETIH Innovation Awards 2026

The EdTech Innovation Hub Awards celebrate excellence in global education technology, with a particular focus on workforce development, AI integration, and innovative learning solutions across all stages of education.

Now open for entries, the ETIH Innovation Awards 2026 recognize the companies, platforms, and individuals driving transformation in the sector, from AI-driven assessment tools and personalized learning systems, to upskilling solutions and digital platforms that connect learners with real-world outcomes.

Submissions are open to organizations across the UK, the Americas, and internationally. Entries should highlight measurable impact, whether in K–12 classrooms, higher education institutions, or lifelong learning settings.

Winners will be announced on 14 January 2026 as part of an online showcase featuring expert commentary on emerging trends and standout innovation. All winners and finalists will also be featured in our first print magazine, to be distributed at BETT 2026.



Source link

Continue Reading

AI Research

Beyond Refusal — Constructive Safety Alignment for Responsible Language Models

Paper: Oyster-I: Beyond Refusal — Constructive Safety Alignment for Responsible Language Models, by Ranjie Duan and 26 other authors.


Abstract: Large language models (LLMs) typically deploy safety mechanisms to prevent harmful content generation. Most current approaches focus narrowly on risks posed by malicious actors, often framing risks as adversarial events and relying on defensive refusals. However, in real-world settings, risks also come from non-malicious users seeking help while under psychological distress (e.g., self-harm intentions). In such cases, the model’s response can strongly influence the user’s next actions. Simple refusals may lead them to repeat, escalate, or move to unsafe platforms, creating worse outcomes. We introduce Constructive Safety Alignment (CSA), a human-centric paradigm that protects against malicious misuse while actively guiding vulnerable users toward safe and helpful results. Implemented in Oyster-I (Oy1), CSA combines game-theoretic anticipation of user reactions, fine-grained risk boundary discovery, and interpretable reasoning control, turning safety into a trust-building process. Oy1 achieves state-of-the-art safety among open models while retaining high general capabilities. On our Constructive Benchmark, it shows strong constructive engagement, close to GPT-5, and unmatched robustness on the Strata-Sword jailbreak dataset, nearing GPT-o1 levels. By shifting from refusal-first to guidance-first safety, CSA redefines the model-user relationship, aiming for systems that are not just safe, but meaningfully helpful. We release Oy1, code, and the benchmark to support responsible, user-centered AI.

Submission history

From: Ranjie Duan
[v1]
Tue, 2 Sep 2025 03:04:27 UTC (5,745 KB)
[v2]
Thu, 4 Sep 2025 11:54:06 UTC (5,745 KB)
[v3]
Mon, 8 Sep 2025 15:18:35 UTC (5,746 KB)
[v4]
Fri, 12 Sep 2025 04:23:22 UTC (5,747 KB)


