AI Insights
Safeguarding indigenous rights in the age of artificial intelligence

August 11, 2025
JAKARTA – Artificial Intelligence is reshaping the world with astonishing speed, transforming sectors from agriculture and health to education. But as we race toward this digital future, a critical question emerges: For whom is this future being built?
While AI promises efficiency and innovation, for indigenous peoples and local communities (IPLCs), the future is not just about algorithms: It’s about recognition, protection and dignity.
As the global community marks the International Day of the World’s Indigenous Peoples on Aug. 9 with a focus on AI this year, we must confront a pressing reality: For indigenous communities, this technology can be a double-edged sword. It offers potential for empowerment, yet also poses new threats to their existence.
The challenge facing Indonesia is clear: How do we ensure that our digital transformation upholds, not overrides, the rights of those who have long safeguarded the nation’s ecological and cultural heritage?
In many parts of the country, indigenous peoples are still perceived as remnants of the past: isolated, outdated and disconnected from modern life. This could not be further from the truth. Indonesia’s indigenous communities have long demonstrated remarkable resilience and adaptability, especially in preserving forests, managing biodiversity and responding to climate change.
As evidenced in upcoming research by Estungkara-INKLUSI, the local knowledge systems of indigenous peoples, which are rooted in communal values, offer blueprints for sustainable living in a time of accelerating ecological breakdown.
The Indigenous Peoples Alliance of the Archipelago (AMAN) estimates that the country is home to between 50 million and 70 million indigenous people and members of local communities, representing over 1,300 ethnic groups. These communities speak hundreds of languages and maintain diverse customary systems to manage natural resources.
Despite their contributions to sustainability and food security, many of these communities remain excluded from formal governance. Article 18B(2) of the 1945 Constitution recognizes the existence and rights of masyarakat hukum adat (customary law communities), a category that is used in laws such as the 1960 Basic Agrarian Law, the 1999 Human Rights Law and the 2007 Coastal and Small Islands Law.
A 2013 decision of the Constitutional Court even affirmed the collective rights of indigenous peoples to customary forests, but in practice, progress remains sluggish.
Indonesia voted in favor of the United Nations Declaration on the Rights of Indigenous Peoples (UNDRIP), but the government often argues the term “indigenous” is not applicable, as all Indonesian citizens, with a few exceptions, are native to the archipelago.
This narrative erases the specific histories of marginalization experienced by self-identified indigenous groups, and it also weakens efforts to protect their distinct land tenure systems, cultures and governance institutions. This legal and political ambiguity has tangible consequences.
The potential of AI to support indigenous communities should not be dismissed. The technology can be used to document endangered languages, map customary lands and strengthen indigenous leadership in climate action. Properly designed, AI tools could help elevate the voices and stories of communities that have long been marginalized.
Yet these opportunities come with serious risks. Without inclusive governance and ethical safeguards, AI can entrench inequality. Algorithms might misrepresent indigenous cultures, strip traditional knowledge from their context or allow companies and governments to extract data without consent.
What we are witnessing is a digital form of colonialism, where control over data, identity and culture is once again removed from communities and placed in the hands of others.
This is particularly dangerous in Indonesia where the Indigenous Peoples Bill (RUU Masyarakat Adat) has yet to be passed, despite more than a decade of advocacy. In the absence of strong legal protections, the acceleration of digital systems could deepen dispossession of not only land but also knowledge, narrative and digital autonomy.
The government must act urgently to overcome this.
First, it must pass the Indigenous Peoples Bill to guarantee collective rights, including land tenure, cultural protection and data sovereignty. Second, it must ensure the meaningful participation of indigenous peoples in policymaking around digital governance and AI, not just as subjects to be consulted but as equal partners. Third, it must recognize that customary law and indigenous knowledge are not barriers to development: They are essential pillars for a pluralistic and democratic society.
Local initiatives already offer promising models. In several provinces, regional regulations and village decrees have formally recognized indigenous territories, which have helped communities secure their boundaries and strengthen their legal standing. When the Forestry Ministry issues a decree recognizing a customary forest, that area becomes part of official spatial planning maps, protecting it from unauthorized intervention.
Recognition also unlocks cultural opportunities. For example in West Sumatra’s Mentawai Islands Regency, traditional schools called sekolah uma teach children to read forest signs, grow medicinal plants and farm sustainably under the Arat Sabulungan belief system. In the highland hamlet of Bara in Maros regency, South Sulawesi, sekolah kolong are held under traditional stilt houses to pass down knowledge on basic literacy and other life skills.
These formal and informal sekolah adat, or customary schools, help sustain indigenous ways of life across generations. Women also play a vital role, transmitting knowledge through daily activities like weaving, farming and storytelling.
When indigenous peoples have the legal space to thrive, they innovate on their own terms.
To build a fair digital future, inclusive collaboration is essential. Government bodies, tech developers, researchers, indigenous organizations and civil society must cocreate AI systems that respect cultural rights and serve local needs. International frameworks such as UNDRIP, the principle of free, prior and informed consent (FPIC) and global justice norms can help guide national policies.
Indonesia has the opportunity to lead the way in digital development that is culturally just. Imagine AI systems that help revitalize local languages, assist participatory mapping of indigenous lands or support eco-cultural tourism without commodifying sacred traditions. These visions are not utopian, and they are possible if built on ethical foundations.
Lastly, indigenous peoples are not obstacles to progress. They are partners in building a future that is not only technologically savvy, but also socially responsible and ecologically sound.
Today, let us mark the International Day of the World’s Indigenous Peoples not as a ceremonial observance, but as a moment for reflection and action. If we are to build a truly intelligent future, it must begin with the wisdom of those who have protected, and continue to protect, the Earth since long before algorithms existed.
Moch. Yasir Sani is a program manager at the Partnership for Governance Reform (Kemitraan). Elis Nurhayati is CEO of Daya Data Komunita and currently pursuing a Master of Public Policy with a specialization in climate change at the International Islamic Indonesian University (UIII). The views expressed are personal.
Big tech is offering AI tools to California students. Will it save jobs?
By Adam Echelman, CalMatters

This story was originally published by CalMatters.
As artificial intelligence replaces entry-level jobs, California’s universities and community colleges are offering a glimmer of hope for students: free AI training that will teach them to master the new technology.
“You’re seeing in certain coding spaces significant declines in hiring for obvious reasons,” Gov. Gavin Newsom said Thursday during a press conference from the seventh floor of Google’s San Francisco office.
Flanked by leaders of California’s higher education systems, he called attention to recent layoffs at Microsoft; at Google’s parent company, Alphabet; and at Salesforce, whose namesake tower a few blocks away is home to the tech company that is still the city’s largest private employer.
Now, some of those companies — including Google and Microsoft — will offer a suite of AI resources for free to California schools and universities. In return, the companies could gain access to millions of new users.
The state’s community colleges and its California State University campuses are “the backbone of our workforce and economic development,” Newsom said, just before education leaders and tech executives signed agreements on AI.
The new deals are the latest developments in a frenzy that began in November 2022, when OpenAI publicly released the free artificial intelligence tool ChatGPT, forcing schools to adapt.
The Los Angeles Unified School District implemented an AI chatbot last year, only to cancel it three months later without disclosing why. San Diego Unified teachers started using AI software that suggested what grades to give students, CalMatters reported. Some of the district’s board members were unaware that the district had purchased the software.
Last month, the company that oversees Canvas, a learning management system popular in California schools and universities, said it would add “interactive conversations in a ChatGPT-like environment” into its software.
To combat potential AI-related cheating, many K-12 and college districts are using a new feature from the software company Turnitin to detect plagiarism, but a CalMatters investigation found that the software falsely accused students whose work was their own.
Mixed signals?
These deals are sending mixed signals, said Stephanie Goldman, the president of the Faculty Association of California Community Colleges. “Districts were already spending lots of money on AI detection software. What do you do when it’s built into the software they’re using?”
Don Daves-Rougeaux, a senior adviser for the community college system, acknowledged the potential contradiction but said it’s part of a broader effort to keep up with the rapid pace of changes in AI. He said the community college system will frequently reevaluate the use of Turnitin along with all other AI tools.
California’s community college system is responsible for the bulk of job training in the state, though it receives the least funding from the state per student.
“Oftentimes when we are having these conversations, we are looked at as a smaller system,” said Daves-Rougeaux. The state’s 116 community colleges collectively educate roughly 2.1 million students.
In the deals announced Thursday, the community college system will partner with Google, Microsoft, Adobe and IBM to roll out additional AI training for teachers. Daves-Rougeaux said the system has also signed deals that will allow students to use exclusive versions of Gemini, Google’s counterpart to ChatGPT, and NotebookLM, Google’s AI research tool. Daves-Rougeaux said these tools will save community colleges “hundreds of millions of dollars,” though he could not provide an exact figure.
“It’s a tough situation for faculty,” said Goldman. “AI is super important but it has come up time and time again: How do you use AI in the classroom while still ensuring that students, who are still developing critical thinking skills, aren’t just using it as a crutch?”
One concern is that faculty could lose control over how AI is used in their classrooms, she added.
The K-12 system and the California State University system are forming their own tech deals. Amy Bentley-Smith, a spokesperson for the Cal State system, said it is working on its own AI programs with Google, Microsoft, Adobe and IBM, as well as Amazon Web Services, Intel, LinkedIn, OpenAI and others.
Angela Musallam, a spokesperson for the state government operations agency, said California high schools are part of the deal with Adobe, which aims to promote “AI literacy,” the idea that students and teachers should have basic skills to detect and use artificial intelligence.
Musallam said that, much like the community college system, which is governed by local districts, individual K-12 districts would need to approve any deal.
Will deals make a difference to students, teachers?
Experts say it’s too early to tell how effective AI training will actually be.
Justin Reich, an associate professor at MIT, said a similar frenzy took place 20 years ago when teachers tried to teach computer literacy. “We do not know what AI literacy is, how to use it, and how to teach with it. And we probably won’t for many years,” Reich said.
The state’s new deals with Google, Microsoft, Adobe and IBM allow these tech companies to recruit new users — a benefit for the companies — but the actual lessons aren’t time-tested, he said.
“Tech companies say: ‘These tools can save teachers time,’ but the track record is really bad,” said Reich. “You cannot ask schools to do more right now. They are maxed out.”
Erin Mote, the CEO of an education nonprofit called InnovateEDU, said she agrees that state and education leaders need to ask critical questions about the efficacy of the tools that tech companies offer but that schools still have an imperative to act.
“There are a lot of rungs on the career ladder that are disappearing,” she said. “The biggest mistake we could make as educators is to wait and pause.”
Last year, the California Community Colleges Chancellor’s Office signed an agreement with NVIDIA, a technology infrastructure company, to offer AI training similar to the kinds of lessons that Google, Microsoft, Adobe and IBM will deliver.
Melissa Villarin, a spokesperson for the chancellor’s office, said the state won’t share data about how the NVIDIA program is going because the cohort of teachers involved is still too small.
This article was originally published on CalMatters and was republished under the Creative Commons Attribution-NonCommercial-NoDerivatives license.
Warren County Schools hosts AI Workshop

KENTUCKY — On this week’s program, we’re keeping you up to date on happenings within Kentucky’s government, which includes ongoing work this summer with legislative committees and special task forces in Frankfort.
During this “In Focus Kentucky” segment, reporter Aaron Dickens shared how leaders in Warren County Public Schools are helping educators bring their new computer science knowledge to the front of classrooms.
Also in this segment, we shared details about the U.S. Department of Energy selecting the Paducah Gaseous Diffusion Plant site in Paducah, Kentucky, as one of four sites for the development of artificial intelligence data centers and associated energy infrastructure. This initiative is reportedly part of the Trump administration’s plan to accelerate AI development, hoping to leverage federal land assets to establish high-performance computing facilities and reliable energy sources for the burgeoning AI industry.
You can watch the full “In Focus Kentucky” segment in the player above.