
AI Insights

Indigenous peoples and Artificial Intelligence: Youth perspectives on rights and a liveable future


On August 9, 2025, the world marked the International Day of the World’s Indigenous Peoples under the theme: “Indigenous Peoples and Artificial Intelligence: Defending Rights, Sustaining the Future.” It’s a powerful invitation to ask how emerging tools like AI can empower Indigenous Peoples, rather than marginalise them.

Before we answer how, we need to be clear on who we are talking about and what they face in Cameroon and across the Congo Basin.

Who are Indigenous Peoples in Cameroon?

Cameroon is home to several Indigenous Peoples and communities, including groups often called forest peoples (such as the Baka, Bagyeli, Bedzang) as well as the Mbororo pastoralists and communities commonly referred to as Kirdi. There is no single universal definition of “Indigenous Peoples,” but the UN Declaration on the Rights of Indigenous Peoples (2007) places self-determination at the centre of identification.

The realities: living on the margins

  • Land grabbing and loss of forests. Forests are the supermarket, pharmacy, culture and identity of Indigenous communities in the Congo Basin. Yet illegal and abusive logging, land acquisitions and agroforestry projects without proper consultation put their well-being at risk.
  • Chiefdoms without recognition. The lack of official recognition of Indigenous chiefdoms weakens participation in decision-making and jeopardises their future.
  • No specific national law. Cameroon still lacks a specific legal instrument on Indigenous rights. Reliance on international norms alone doesn’t reflect the local context and leaves gaps in protection.
  • Limited access to education and health. Many Indigenous children lack birth certificates, which blocks school enrolment and access to basic services.

I believe the future can be different: one where Indigenous autonomy is respected, traditional knowledge is valued, and well-being is guaranteed.

So where does AI fit in, and what can youth do?

AI isn’t a silver bullet; however, in the hands of informed, organised youth it can accelerate participatory advocacy, surface evidence, and protect community rights.

  • AI-assisted mapping, with consent, can document traditional territories, sacred sites and resource use, turning them into community-owned evidence for authorities and companies.
  • Small AI models can preserve language and knowledge (oral histories, songs, medicinal plants, place names) under community data sovereignty, with Indigenous Peoples retaining exclusive rights.
  • Simple chatbots or workflows can offer legal triage, from birth-certificate requests to land-grievance tracking and administrative appeals.
  • Crowdsourced reports combined with AI enable early warning and accountability on suspicious logging, new roads or fires, which young monitors can visualise and escalate to community leaders, media and allies.
  • Youth pre-bunk/de-bunk teams can counter misinformation with community-approved information.

Above all, any use of AI must follow Free, Prior and Informed Consent (FPIC), strong privacy safeguards, and real community control of data.

My commitment as a young activist

As an activist, and with a background in law, I want to keep building projects that put Indigenous Peoples at the centre of decisions. AI can help: it enables faster, structured, participatory advocacy and supports a community-owned database of solutions and traditional knowledge, with exclusive rights for Indigenous communities over any derivative products. My legal training helps me work at the intersection of Indigenous rights, AI, and forest/biodiversity protection.

A call to action

The 2025 theme is more than a slogan; it’s a call to act so that technology serves justice, not exclusion. In Cameroon, where Indigenous Peoples are still fighting for legal recognition, AI must be wielded as a tool of solidarity. With support from allies like Greenpeace Africa and the creativity of youth, a future rooted in dignity and sustainability is within reach.

MACHE NGASSING Darcise Dolorès, Climate activist




Answering the question of which AI tools deliver measurable value



Silicon Valley kingmakers

The investor lineup reads like a who’s who of Silicon Valley’s kingmakers. Sequoia’s Roelof Botha and “solo GP” Elad Gil represent the kind of money that moves markets and shapes entire industries. Dramatic as it may sound, their funding decisions often preview which technologies will dominate enterprise conversations within two years, making their perspectives essential intelligence for anyone planning technology strategy.

The programming extends well beyond AI and public markets. The CEO of Waymo will showcase how autonomous systems are reshaping transportation, while Netflix’s CTO will provide a rare glimpse into the streaming infrastructure that powers global entertainment. Perhaps most intriguingly, Kevin Rose—who founded Digg, sold it, then recently rescued it from corporate ownership—will discuss the art of platform resurrection in an era of constant digital disruption.

Disrupt takes place as both TechCrunch and San Francisco reassert their respective primacies — the publication as tech journalism’s defining voice, the city as technology’s undisputed capital. It also promises to be entertaining, as these events always are.




AI used for taking notes at UVM Medical Center



BURLINGTON, Vt. (WCAX) – Artificial intelligence might be taking notes at your next doctor’s appointment.

Last year, we told you how the University of Vermont Health Network was tapping into AI to streamline doctor-patient conversations.

Over a year later, network officials say it’s making a mark.

Staff say they used to spend an hour or more reviewing notes from their shift, often eating into family or downtime.

There was no scribbling pen or clacking keyboard in sight as emergency physician Dan Peters walked through a mock appointment at the University of Vermont Medical Center.

That’s because an app is taking notes for him.

“I think my initial thoughts were this is going to be game-changing in terms of time savings for documentation,” said Peters.

The app, called Abridge, summarizes doctor-patient conversations.

Justin Stinnett-Donnelly, the network’s chief health information officer, says it boosts providers’ mental health and bedside manner.

“It changes the interaction. You’re able to be much more focused on the conversation with the patient. And it actually reduces what we call cognitive burden,” said Stinnett-Donnelly.

A pilot study at UVMMC last spring found that Abridge halved clinicians’ cognitive load while more than doubling their professional fulfillment.

Peters is proof. He says Abridge cut his routine hour-long note evaluations in half.

“To have the assistance has been significantly helpful for reducing that burden of writing all the notes and just the cognitive load of needing to remember all the details,” he said.

The note from our mock appointment with Peters was spot on, and staff say physicians always double-check the record.

“Our providers go through, they read it and they edit it for clarity, and that ultimately, it is a human reviewing that note to make sure that it is accurate for that encounter,” said Stinnett-Donnelly.

Of the 1,200 physicians and hundreds of other staff throughout the network, 500 use Abridge.

Officials say some are wary or prefer taking notes on their own. The network encourages them to give it a try.

Patients, on the other hand, don’t need much encouragement.

Dan Peters: “Of the hundreds of patient encounters where I’ve used this technology, only a single patient has said no to me.”

Reporter Sophia Thomas: “Wow. One person?”

Peters: “Only one person. I don’t think it’s specific to me. I think patients expect that we’re using this type of technology to stay on the cutting edge.“

Physicians, meanwhile, are getting some of that time back thanks to AI.

Network officials say there haven’t been any breaches of private information through Abridge.

They’re currently collecting data on its benefits and plan to roll out an impact study after the two-year anniversary of adopting the app.




AI accurately identifies questionable open-access journals by analysing websites and content, matching expert human assessment



Artificial intelligence (AI) could be a useful tool to find ‘questionable’ open-access journals, by analysing features such as website design and content, new research has found.

The researchers set out to evaluate the extent to which AI techniques could replicate the expertise of human reviewers in identifying questionable journals and determining key predictive factors. ‘Questionable’ journals were defined as journals violating the best practices outlined in the Directory of Open Access Journals (DOAJ) – an index of open-access journals managed by the DOAJ Foundation, based in Denmark – and showing indicators of low editorial standards. Legitimate journals were those that followed DOAJ best-practice standards and were classed as ‘whitelisted’.

The AI model was designed to transform journal websites into machine-readable information according to DOAJ criteria, such as editorial board expertise and publication ethics. To train the questionable-journal classifier, the researchers compiled a list of around 12,800 whitelisted journals and 2,500 unwhitelisted ones, then extracted three kinds of features to help distinguish them from each other: website content, website design and bibliometric indicators.
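The training setup described above can be sketched in miniature. The snippet below is an illustrative toy, not the authors’ actual pipeline: it trains a tiny logistic-regression classifier in pure Python on invented numeric features standing in for the three feature families the study names (website content, website design, bibliometrics). All feature names and values are hypothetical.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(X, y, lr=0.5, epochs=2000):
    """Batch gradient descent for logistic regression (with bias term)."""
    n_features = len(X[0])
    w = [0.0] * n_features
    b = 0.0
    for _ in range(epochs):
        grad_w = [0.0] * n_features
        grad_b = 0.0
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi
            for j in range(n_features):
                grad_w[j] += err * xi[j]
            grad_b += err
        w = [wj - lr * gw / len(X) for wj, gw in zip(w, grad_w)]
        b -= lr * grad_b / len(X)
    return w, b

def predict(w, b, x):
    return sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b)

# Invented feature vectors: [editorial_board_named, ethics_policy_present,
# citation_rate]; label 1 = questionable, 0 = whitelisted.
X = [[1.0, 1.0, 0.9], [1.0, 1.0, 0.7], [0.0, 0.0, 0.1],
     [0.0, 1.0, 0.2], [1.0, 1.0, 0.8], [0.0, 0.0, 0.05]]
y = [0, 0, 1, 1, 0, 1]

w, b = train(X, y)
# A journal with no named board, no ethics policy and few citations
# scores high, i.e. likely questionable.
score = predict(w, b, [0.0, 0.0, 0.1])
print(round(score, 2))
```

In the actual study, the feature extraction step (turning a journal website into numbers) is the hard part; the classifier on top can be as simple as this.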

The model was then used to predict questionable journals from a list of just over 15,000 open-access journals housed in the open database Unpaywall. Overall, it flagged 1,437 suspect journals, of which about 1,092 were expected to be genuinely questionable. The researchers said these journals had published hundreds of thousands of articles, attracted millions of citations, acknowledged funding from major agencies and drew authors from developing countries.

There were around 345 false positives among those identified, which the researchers said shared a few patterns: for example, their sites were unreachable or had been formally discontinued, or they referred to a book series or conference with a title similar to that of a journal. They also said there were likely around 1,780 problematic journals that had remained undetected.
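Taken together, the reported counts imply rough precision and recall figures for the classifier. A quick back-of-the-envelope calculation, using only the numbers stated above:

```python
# Reported figures: 1,437 journals flagged, of which about 1,092 were
# expected to be genuinely questionable; ~345 false positives; and
# roughly 1,780 problematic journals that went undetected.
flagged = 1437
true_positives = 1092
false_positives = 345           # flagged but apparently legitimate
false_negatives = 1780          # problematic journals the model missed

precision = true_positives / flagged
recall = true_positives / (true_positives + false_negatives)

print(f"precision ~ {precision:.2f}")  # ~ 0.76
print(f"recall    ~ {recall:.2f}")     # ~ 0.38
```

In other words, roughly three of every four flags were expected to be correct, but well over half of the problematic journals would still slip through – consistent with the authors’ call for continuously updated models.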

Overall, they concluded that AI could accurately discern questionable journals with high agreement with expert human assessments, although they pointed out that existing AI models would need to be continuously updated to track evolving trends.

‘Future work should explore ways to incorporate real-time web crawling and community feedback into AI-driven screening tools to create a dynamic and adaptable system for monitoring research integrity,’ they said.



