AI Insights
Indigenous digital activists in Mexico assert their self-determination with regard to artificial intelligence · Global Voices

The forum brought together 47 Indigenous language digital activists from Mexico. Photo by Jer Clarke. CC-BY-NC.
What impact does artificial intelligence (AI) have on Mexico’s Indigenous languages? This was one of the questions posed at the first AI+Indigenous Languages Forum, held on March 13 and 14 in Mexico City. The forum provided an opportunity to hear the aspirations and concerns of dozens of participants, explore how tools like machine translation, text-to-speech, and chatbots work, and reflect on linguistic sovereignty and data governance.
The forum brought together 47 activists who speak more than 20 different Indigenous languages of Mexico and who are developing projects that use digital tools to support those languages. It provided a space to share concerns and explore common principles without seeking a unified, collective position.
Held within the framework of the Indigenous Languages Digital Activists Summit 2025, the forum was organized by Rising Voices in collaboration with First Languages AI Reality (FLAIR) and the Research Chair in Digital Indigeneities at Bishop’s University in Canada. The event was supported by the W.K. Kellogg Foundation, the Embassy of Canada in Mexico, and the Wikimedia Foundation, with the Cultural Center of Spain in Mexico as the host.
Key questions were asked: Who is using AI in relation to Indigenous languages? What risks and opportunities exist for peoples’ sovereignty? How can we collectively protect cultural heritage and intellectual creativity? Are these technologies aligned with my values?

Forum participants presenting their group reflections in a plenary session. Photo by Jer Clarke. CC-BY-NC.
Reflections on the risks of AI
Central topics of discussion included copyright, environmental impact, collective rights, cultural heritage, and monitoring of the extraction of ancestral knowledge. Participant Katia González voiced a shared concern about the environmental impact of artificial intelligence requirements:
En mi opinión, parte de la congruencia ambiental es cuestionarnos los impactos que está teniendo en nuestras comunidades para mantener el enfriamiento de los motores.
In my opinion, part of environmental consistency is questioning the impacts that keeping those machines cool is having on our communities.
Linguistic and cultural sovereignty was also a hot topic, with concerns over whether the development of generative AI could affect the self-determination of Indigenous communities, their collective rights, and intellectual property rights over their knowledge and cultural expressions. The importance of respecting communities’ autonomy regarding access to and use of their knowledge was also highlighted, as was the need for inclusive regulatory frameworks and policies that prioritize the protection of human rights and cultural diversity.
Significant ethical and technical challenges related to the use of artificial intelligence were also addressed, such as lack of technological knowledge, surveillance risks, and digital divides.
Participants also discussed the need to question how content is collected and presented in order to avoid biases and stereotypes. The data that feeds AI comes from external, biased, incomplete, and outdated perspectives, which distorts the cultural richness and current realities of Indigenous peoples.
Forum participant Verónica Aguilar stated:
¿De dónde saca sus datos la inteligencia artificial para crear un contenido nuevo? Pues de lo que ya existe. Y lo que ya existe es mucho de lo que se promovió en el siglo pasado, muy folclorizante. La historia de los indígenas en el campo, de que todos somos buenos. Entonces, de ahí va a tomar la IA la información. Y quizás, desde el punto de vista lingüístico, [el uso de la IA] es algo positivo, pero desde el punto de vista de los valores que están transmitiendo, ahí no vamos a estar de acuerdo porque para nosotros no es sólo un asunto de lengua sino de toda la cultura.
Where does artificial intelligence get its data to create new content? Well, from what already exists. And much of what already exists is what was promoted in the last century, very folklorizing: the story of Indigenous people in the countryside, that we are all good. So that's where AI will take its information from. And perhaps, from a linguistic point of view, [the use of AI] is something positive, but from the point of view of the values being transmitted, there we won't agree, because for us it's not just a matter of language but of the whole culture.
On the second day of the forum, a dialogue brought together government actors, companies, NGOs, and embassies, highlighting the need to establish fundamental principles for the development of AI and addressing issues such as gender bias, algorithmic perspectives, and the inclusion of Indigenous communities.

Work in small teams to reflect on AI. Photo by Jer Clarke. CC-BY-NC
AI applications with Indigenous languages
AI also offers opportunities for Indigenous peoples. For example, Dani Ramos, a Nahua student in computer science and linguistics, presented examples of AI applications using Indigenous languages in the United States, Canada, and New Zealand, created with and by Indigenous peoples.
She highlighted the example of Te Hiku Media in Aotearoa (New Zealand), which uses technology to revitalize the Māori language, as well as the Indigenous and Artificial Intelligence Protocol, which guarantees data sovereignty and community participation. Projects such as Abundant Intelligences, which promotes AI models based on Indigenous knowledge, were also mentioned, alongside similar initiatives in Latin America.
The cases of the Lakota AI Code Camp, IndigiGenius, and FLAIR, which seek to empower communities through technological tools designed from their own cultural and linguistic perspectives, were also shared. These efforts reflect a global movement defending the right of Indigenous peoples to shape AI according to their needs and values.
A desired future
Following an exercise in envisioning the future, participants split into small groups to develop proposals for technological development grounded in Indigenous autonomy, and for the creation and management of artificial intelligence, digital tools, and multilingual platforms run by Indigenous speakers.

Graphic documentation of the key ideas and concepts that emerged during the session of imagining the desired future. Image created by Reilly Dow. Used with permission.
The need for inclusive technologies like search engines, voice agents, and automatic translation devices in Indigenous languages was highlighted, allowing communities to develop their own applications without depending on large companies. The importance of preserving Indigenous cultures through digital repositories, community media, and new maps based on their territorial vision was also emphasized.
Participants also proposed creating intercultural networks, technological cooperatives, and technological sovereignty, including programming languages of their own, as part of imagining a sustainable future that combines digital technologies with respect for the land and autonomous local management.
In terms of action, participants suggested strengthening digital activists’ networks, promoting technological autonomy, safe use of AI, and data sovereignty, advancing legislative proposals and campaigns for the ethical use of AI, and developing collaborative workshops to produce recommendations adapted to Indigenous contexts. Everyone agreed that this forum should be the beginning of a community-based, participatory strategy with a tangible impact.
Participants recognized the need to continue the dialogue in order to create appropriate tools and protocols for Indigenous communities in the context of the development of AI and language technologies — especially given the complexity that AI poses regarding autonomy, collective ownership, the preservation of linguistic variants, and the predominance of Western perspectives.
As a space in which Indigenous digital activists in Mexico could critically reflect on and analyze the effects of AI, the forum was an important first step — a launching pad to begin to imagine digital ecosystems led by Indigenous speakers who protect and revitalize their languages with a vision for the future.
AI Insights
OpenAI says spending to rise to $115 billion through 2029: The Information

OpenAI Inc. told investors that it projects its spending through 2029 may rise to $115 billion, about $80 billion more than previously expected, The Information reported, without detailing how or when shareholders were informed.
OpenAI is developing its own data center server chips and facilities to power its technologies, in an effort to rein in cloud server rental costs, according to the report.
The company predicted it could spend more than $8 billion this year, roughly $1.5 billion more than an earlier projection, The Information said.
Another factor influencing the increased need for capital is computing costs, on which the company expects to spend more than $150 billion from 2025 through 2030.
The cost to develop AI models is also higher than previously expected, The Information said.
AI Insights
Microsoft Says Azure Service Affected by Damaged Red Sea Cables

Microsoft Corp. said on Saturday that clients of its Azure cloud platform may experience increased latency after multiple international cables in the Red Sea were cut.
AI Insights
Geoffrey Hinton says AI will cause massive unemployment and send profits soaring

Pioneering computer scientist Geoffrey Hinton, whose work has earned him a Nobel Prize and the moniker “godfather of AI,” said artificial intelligence will spark a surge in unemployment and profits.
In a wide-ranging interview with the Financial Times, the former Google scientist cleared the air about why he left the tech giant, raised alarms on potential threats from AI, and revealed how he uses the technology. But he also predicted who the winners and losers will be.
“What’s actually going to happen is rich people are going to use AI to replace workers,” Hinton said. “It’s going to create massive unemployment and a huge rise in profits. It will make a few people much richer and most people poorer. That’s not AI’s fault, that is the capitalist system.”
That echoes comments he gave to Fortune last month, when he said AI companies are more concerned with short-term profits than the long-term consequences of the technology.
For now, layoffs haven’t spiked, but evidence is mounting that AI is shrinking opportunities, especially at the entry level where recent college graduates start their careers.
A survey from the New York Fed found that companies using AI are much more likely to retrain their employees than fire them, though layoffs are expected to rise in the coming months.
Hinton said earlier that healthcare is the one industry that will be safe from the potential jobs armageddon.
“If you could make doctors five times as efficient, we could all have five times as much health care for the same price,” he explained on the Diary of a CEO YouTube series in June. “There’s almost no limit to how much health care people can absorb—[patients] always want more health care if there’s no cost to it.”
Still, Hinton believes AI will take over jobs built around mundane tasks, while sparing some roles that require a high level of skill.
In his interview with the FT, he also dismissed OpenAI CEO Sam Altman’s idea of paying a universal basic income as AI disrupts the economy and reduces demand for workers, saying it “won’t deal with human dignity” and the value people derive from having jobs.
Hinton has long warned about the dangers of AI without guardrails, estimating a 10% to 20% chance of the technology wiping out humans after the development of superintelligence.
In his view, the dangers of AI fall into two categories: the risk the technology itself poses to the future of humanity, and the consequences of AI being manipulated by people with bad intent.
In his FT interview, he warned that AI could help someone build a bioweapon and lamented the Trump administration’s unwillingness to regulate AI more closely while China, he said, is taking the threat more seriously. But he also acknowledged AI’s potential upside amid its immense possibilities and uncertainties.
“We don’t know what is going to happen, we have no idea, and people who tell you what is going to happen are just being silly,” Hinton said. “We are at a point in history where something amazing is happening, and it may be amazingly good, and it may be amazingly bad. We can make guesses, but things aren’t going to stay like they are.”
Meanwhile, he told the FT how he uses AI in his own life, saying OpenAI’s ChatGPT is his product of choice. While he mostly uses the chatbot for research, Hinton revealed that a former girlfriend used ChatGPT “to tell me what a rat I was” during their breakup.
“She got the chatbot to explain how awful my behavior was and gave it to me. I didn’t think I had been a rat, so it didn’t make me feel too bad . . . I met somebody I liked more, you know how it goes,” he quipped.
Hinton also explained why he left Google in 2023. While media reports have said he quit so he could speak more freely about the dangers of AI, the 77-year-old Nobel laureate denied that was the reason.
“I left because I was 75, I could no longer program as well as I used to, and there’s a lot of stuff on Netflix I haven’t had a chance to watch,” he said. “I had worked very hard for 55 years, and I felt it was time to retire . . . And I thought, since I am leaving anyway, I could talk about the risks.”