
Indigenous digital activists in Mexico assert their self-determination with regard to artificial intelligence · Global Voices


The forum brought together 47 digital activists working with Indigenous languages in Mexico. Photo by Jer Clarke. CC-BY-NC.

What impact does artificial intelligence (AI) have on Mexico’s Indigenous languages? This was one of the questions posed at the first AI+Indigenous Languages Forum, held on March 13 and 14 in Mexico City. The forum provided an opportunity to hear the aspirations and concerns of dozens of participants, explore how tools like machine translation, text-to-speech, and chatbots work, and reflect on linguistic sovereignty and data governance.

The 47 participating activists speak more than 20 different Indigenous languages of Mexico and are all developing projects that use digital tools to support those languages. The forum gave them a space to share concerns and explore common principles without seeking a unified, collective position.

Held within the framework of the Indigenous Languages Digital Activists Summit 2025, the forum was organized by Rising Voices in collaboration with First Languages AI Reality (FLAIR) and the Research Chair in Digital Indigeneities at Bishop’s University in Canada. The event was supported by the W.K. Kellogg Foundation, the Embassy of Canada in Mexico, and the Wikimedia Foundation, with the Cultural Center of Spain in Mexico as the host.

Key questions were asked: Who is using AI in relation to Indigenous languages? What risks and opportunities exist for peoples’ sovereignty? How can we collectively protect cultural heritage and intellectual creativity? Are these technologies aligned with my values?


Forum participants presenting their group reflections in a plenary session. Photo by Jer Clarke. CC-BY-NC.

Reflections on the risks of AI

Central topics of discussion included copyright, environmental impact, collective rights, cultural heritage, and monitoring the extraction of ancestral knowledge. Participant Katia González voiced a shared concern about the environmental impact of the resources that artificial intelligence requires:
 

En mi opinión, parte de la congruencia ambiental es cuestionarnos los impactos que está teniendo en nuestras comunidades para mantener el enfriamiento de los motores.

In my opinion, part of being environmentally consistent is questioning the impacts that keeping those machines cool is having on our communities.

Linguistic and cultural sovereignty was also a hot topic, with concerns over whether the development of generative AI could affect the self-determination of Indigenous communities, their collective rights, and intellectual property rights over their knowledge and cultural expressions. The importance of respecting communities’ autonomy regarding access to and use of their knowledge was also highlighted, as was the need for inclusive regulatory frameworks and policies that prioritize the protection of human rights and cultural diversity.

Significant ethical and technical challenges related to the use of artificial intelligence were also addressed, such as lack of technological knowledge, surveillance risks, and digital divides.

Participants also discussed the need to question how content is collected and presented in order to avoid biases and stereotypes. The data that feeds AI comes from external, biased, incomplete, and outdated perspectives, which distorts the cultural richness and current realities of Indigenous peoples.

Forum participant Verónica Aguilar stated:

¿De dónde saca sus datos la inteligencia artificial para crear un contenido nuevo? Pues de lo que ya existe. Y lo que ya existe es mucho de lo que se promovió en el siglo pasado, muy folclorizante. La historia de los indígenas en el campo, de que todos somos buenos. Entonces, de ahí va a tomar la IA la información. Y quizás, desde el punto de vista lingüístico, [el uso de la IA] es algo positivo, pero desde el punto de vista de los valores que están transmitiendo, ahí no vamos a estar de acuerdo porque para nosotros no es sólo un asunto de lengua sino de toda la cultura.

Where does artificial intelligence get its data to create new content? From what already exists. And much of what already exists is what was promoted in the last century, which was very folklorizing: the story of Indigenous people in the countryside, the idea that we are all good. So that is where AI will take its information from. Perhaps, from a linguistic point of view, [the use of AI] is something positive, but from the point of view of the values being transmitted, we won’t agree, because for us it is not just a matter of language but of the entire culture.

The second day of the forum brought together government actors, companies, NGOs, and embassies for a dialogue that highlighted the need to establish fundamental principles for the development of AI, addressed risks such as gender bias and algorithmic perspectives, and discussed the inclusion of Indigenous communities.


Participants working in small teams to reflect on AI. Photo by Jer Clarke. CC-BY-NC.

AI applications with Indigenous languages

AI also offers opportunities for Indigenous peoples. For example, Dani Ramos, a Nahua student in computer science and linguistics, presented examples of AI applications using Indigenous languages in the United States, Canada, and New Zealand, created with and by Indigenous peoples.

She highlighted the example of Te Hiku Media in Aotearoa (New Zealand), which uses technology to revitalize the Māori language, as well as the Indigenous and Artificial Intelligence Protocol, which guarantees data sovereignty and community participation. Projects such as Abundant Intelligences, which promotes AI models based on Indigenous knowledge, were also mentioned, alongside similar initiatives in Latin America.

The cases of the Lakota AI Code Camp, IndigiGenius, and FLAIR, which seek to empower communities through technological tools designed from their own cultural and linguistic perspectives, were also shared. These efforts reflect a global movement defending the right of Indigenous peoples to shape AI according to their needs and values.

A desired future

Following an exercise on envisioning the future, participants were divided into small groups and asked to work on proposals for technological development based on Indigenous autonomy, as well as on promoting the creation of artificial intelligence, digital tools, and multilingual platforms managed by Indigenous speakers.

Graphic documentation of the key ideas and concepts that emerged during the session of imagining the desired future. Image created by Reilly Dow. Used with permission.

Participants highlighted the need for inclusive technologies in Indigenous languages, such as search engines, voice agents, and automatic translation tools, that would allow communities to develop their own applications without depending on large companies. They also emphasized the importance of preserving Indigenous cultures through digital repositories, community media, and new maps based on their territorial vision.

The creation of intercultural networks, technological cooperatives, and technological sovereignty, including programming languages of their own, was proposed as part of imagining a sustainable future that combines digital technologies with respect for the land and autonomous local management.

In terms of action, participants suggested the strengthening of digital activists’ networks, the promotion of technological autonomy, the safe use of AI and data sovereignty, the promotion of legislative proposals and campaigns for the ethical use of AI, and the development of collaborative workshops for recommendations adapted to Indigenous contexts. Everyone agreed that this forum should be the beginning of a community-based, participatory strategy with a tangible impact.

Participants recognized the need to continue the dialogue in order to create appropriate tools and protocols for Indigenous communities in the context of the development of AI and language technologies — especially given the complexity that AI poses regarding autonomy, collective ownership, the preservation of linguistic variants, and the predominance of Western perspectives.

As a space in which Indigenous digital activists in Mexico could critically reflect on and analyze the effects of AI, the forum was an important first step — a launching pad to begin to imagine digital ecosystems led by Indigenous speakers who protect and revitalize their languages with a vision for the future.






‘A burgeoning epidemic’: Why some kids are forming extreme emotional relationships with AI



As more kids turn to artificial intelligence to answer questions or help them understand their homework, some appear to be forming too close a relationship with services such as ChatGPT — and that is taking a toll on their mental health.

“AI psychosis,” while not an official clinical diagnosis, is a term clinicians are using to describe children who appear to be forming emotional bonds with AI, according to Dr. Ashley Maxie-Moreman, a clinical psychologist at Children’s National Hospital in D.C.

Maxie-Moreman said symptoms can include delusions of grandeur, paranoia, fantastical relationships with AI, and even detachment from reality.

“Especially teens and young adults are engaging with generative AI for excessive periods of time, and forming these sort of fantastical relationships with AI,” she said.

In addition to forming close bonds with AI, those struggling with paranoia may see their condition worsen, with AI potentially affirming paranoid beliefs.

“I think that’s more on the extreme end,” Maxie-Moreman said.

More commonly, she said, young people are turning to generative AI for emotional support. They are sharing information about their emotional well-being, such as feeling depressed, anxious, socially isolated or having suicidal thoughts. The responses they receive from AI vary.

“And I think on the more concerning end, generative AI, at times, has either encouraged youth to move forward with plans or has not connected them to the appropriate resources or flagged any crisis support,” Maxie-Moreman said.

“It almost feels like this is a burgeoning epidemic,” she added. “Just in the past couple of weeks, I’ve observed cases of this.”

Maxie-Moreman said kids who are already struggling with anxiety, depression, social isolation or academic stress are most at risk of developing these bonds with AI. That’s why, she said, if you suspect your child is suffering from those conditions, you should seek help.

“I think it’s really, really important to get your child connected to appropriate mental health services,” she said.

With AI psychosis, parents need to be on the lookout for symptoms. One could be a lack of desire to go to school.

“They’re coming up with a lot of excuses, like, ‘I’m feeling sick,’ or ‘I feel nauseous,’ and maybe you’re finding that the child is endorsing a lot of physical symptoms that are sometimes unfounded in relation to attending school,” Maxie-Moreman said.

Another sign is a child who appears to be isolating themselves and losing interest in things they used to look forward to, such as playing sports or hanging out with friends.

“I don’t want to be alarmist, but I do think it’s important for parents to be looking out for these things and to just have direct conversations with their kiddos,” she said.

Talking to a child about mental health concerns can be tricky, especially if they are teens who, as Maxie-Moreman noted, can be irritable and a bit moody. But having a conversation with them is key.

“I think not skirting around the bush is probably the most helpful thing. And I think teens tend to get a little bit annoyed with indirectness anyhow, so being direct is probably the best approach,” she said.

To help prevent these issues, Maxie-Moreman suggested parents start doing emotional check-ins with their children from a young age.

“Just making it sort of a norm in your household to have conversations about how your child is doing emotionally, checking in with them on a regular basis, is important. So starting at a young age is what I would recommend on the preventative end,” she said.

She also encouraged parents to talk to their children about the limits of the technology they use, including generative AI.

“I think that’s probably one of the biggest interventions that will be most helpful,” she said.

Maxie-Moreman said tech companies must also be held accountable.

“Ultimately, we have to hold our tech companies accountable, and they need to be implementing better safeguards, as opposed to just worrying about the commercialization of their products,” she said.


The Debate On Whether Artificial General Intelligence Should Inevitably Be Declared A Worldwide Public Good With Free Access For All


In today’s column, I examine an ongoing debate about who will have access to artificial general intelligence (AGI). AGI is purportedly on the horizon and will be AI so advanced that it acts intellectually on par with humans. The question arises as to whether everyone will be able to use AGI or whether only those who can afford to do so will have ready access.

Some ardently insist that if AGI is truly attained, it ought to be considered a worldwide public good, including that AGI would be freely available to all at any time and any place.

Let’s talk about it.


Heading Toward AGI And ASI

First, some fundamentals are required to set the stage for this weighty discussion.

There is a great deal of research going on to further advance AI. The general goal is to either reach artificial general intelligence (AGI) or perhaps even the more distant possibility of achieving artificial superintelligence (ASI).

AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn.

We have not yet attained AGI.

In fact, it is unknown whether we will ever reach AGI; it might be achievable decades or perhaps centuries from now. The AGI attainment dates that are floating around vary wildly and are unsubstantiated by any credible evidence or ironclad logic. ASI is even further beyond the pale, given where we currently are with conventional AI.

Immense Capability Of AGI

Imagine how much you could accomplish if you had access to AGI.

AGI would be an amazing intellectual partner. Any decisions that you need to make can be bounced off AGI for feedback and added insights. AGI would be instrumental in your personal life. You might use AGI to teach you new skills or hone your own intellectual capacity. By and large, having AGI at your fingertips will be crucial to how your life proceeds.

The issue is this.

Suppose that AGI is provided by an AI maker that opts to charge for the use of the AGI. This seems like a reasonable approach by the AI maker since they undoubtedly want to recoup their investment made in devising AGI. Plus, they will have ongoing costs such as the running of AGI on expensive servers in costly data centers, along with doing upgrades and maintenance on AGI.

As they say, to the victor go the spoils.

Those who can afford to pay for AGI usage will have a leg up on everyone else. We will descend into a society of the AGI haves and the AGI have-nots. Things will get even worse since the AGI haves will presumably flourish in new ways and far outpace the AGI have-nots. Humans armed with AGI will excel while the rest of the populace is at a distinct disadvantage.

Some believe that if that’s how the post-AGI era plays out, something decisive would need to be done. We can’t simply allow the world to be divided into those who can afford AGI and those who cannot.

There must be a means to figure this out in some balancing way.

AGI As A Designated Public Good

One burgeoning idea is that AGI should be designated as an international public good.

If we reach AGI, no matter whether by an AI maker or some governmental entity or however reached, the AGI must be globally declared as a public good for all. There must be no barriers to the use of AGI. People everywhere are to have full and equal access to AGI. Period, end of story.

How might this be undertaken?

Some indicate that AGI should be handed over to the United Nations. The UN would be tasked with making sure that access to AGI was available worldwide. People in all nations, anywhere on the planet, would readily be provided a login and full access to AGI. They could use AGI as much as they desired.

Furthermore, since there will undoubtedly be people who don’t know about AGI or don’t have equipment such as smartphones or Internet access, the UN would perform a global educational effort to get people up-to-speed. This would include providing equipment such as a smartphone and the network capacity needed to utilize AGI.

The AI maker that attained AGI would be given some compensation for their accomplishment but might be cut out of future compensation under the notion of a governmental taking of AGI from the AI maker. In an eminent domain type of action, their devised AGI would be taken from them and placed into a special infrastructure established to globally enable access to the AGI.

Lots of variations are being tossed around about whether the originating maker of AGI would be allowed to keep AGI and run it on behalf of the world or be set to the side as a result of the governmental taking of AGI. Pros and cons are hotly debated.

Worries About Human-AGI Evildoing

Wait a second, some bellow vociferously. If everyone has ready access to AGI, at no cost and with no barrier to usage, we are opening a Pandora’s box.

Consider this scenario. An evildoer opts to access AGI. They can do so without any cost concerns. So, they tell AGI to come up with a biochemical weapon. AGI runs and runs, inventing a new biochemical weapon that nobody knows about. The evildoer thanks AGI and proceeds to construct the weapon. They can use it to blackmail the world.

Not good.

One retort to this worry is that we would stridently instruct AGI to not undertake any effort that would be construed as detrimental to humankind. Thus, when someone tries to go down the evil route, AGI would stop them cold. The AGI would indicate that their request is not allowed.

But suppose an evildoer manages to trick AGI into aiding an evildoing plot. Perhaps the evildoer asks for something seemingly innocuous that is in fact a subtle step in a larger evil plan. Step by step, they get AGI to give them solutions that can ultimately be pieced together into a larger evil purpose. The AGI would be none the wiser.

It would be very challenging to ensure, in an ironclad fashion, that AGI never reveals or responds in a manner that could enable potential evildoing.

Registering People That Use AGI

A related facet is that some say we would want to make sure that the people using AGI are identifiable. In other words, even if ready access to all is provided, we should still require people to identify who they are. We cannot allow people to wantonly or anonymously use AGI.

Each person ought to be responsible for how they use AGI.

Whoa, comes the reply, if you require identification to use AGI, that’s an entirely different can of worms. People will naturally suspect that the AGI is going to monitor them. In turn, the AGI might tattle on them to government authorities. This is a dire pathway.

It would also be a somewhat impractical notion anyway. What kind of identification would be used? The identification could be faked. Some point out that with the latest technology, such as iris scans, people could presumably be distinctly and reliably identified.

Creating A Worldwide AGI Agency-Entity

There are those who express qualms about handing over AGI to the UN. The concern is that the UN might not be the most suitable choice to fully control and run AGI.

A suggested alternative is to establish a new entity that would oversee AGI. This would be a means of starting fresh, with no prior baggage. The new entity would be built from the ground up with the sole purpose of managing AGI.

Who would pay for this and how would the new entity be globally governed?

Various approaches are being floated. Perhaps every country in the world would need to pay into a special fund for AGI. The monies would go toward the ongoing costs of running AGI, along with administering its use. The new entity wouldn’t necessarily be expected to turn a profit and would be set up as a non-profit, perhaps as an NGO.

That sounds solid, but again there are those who doubt the new agency would be as beneficial and supportive as one might assume. For example, the agency might become corrupt, with those running AGI starting to use it for their own nefarious purposes. The rest of the world might not know what’s happening.

A reply to this concern is that there would be auditors that routinely examine the new agency. They would report to the world at large. This would presumably keep the AGI-managing entity on the up and up.

Round and round these arguments go.

Universal Access To AGI

How do you feel about the claim that universal access to AGI is a must-do?

There are those who fall into the camp that AGI access must absolutely be a universal right. Nothing short of full universal access would be acceptable. All humans deserve access to AGI.

Others say that AGI access is a privilege and isn’t necessarily going to be applicable to everyone. That’s the way the ball bounces. The world doesn’t owe everyone access to AGI.

Another variation is that we might have tiered access to AGI. Perhaps there would be AGI access at a minimum level for all, including associated constraints on usage, and then above that tier would be AGI usage with more open-ended facilities.

It’s a tough question.

At this time, it is a theoretical question since we don’t yet have AGI. That being said, there are predictions that we might have AGI in the next several years, perhaps by 2030 or so. The question is looming, and we probably should be figuring out what we are going to do when the moment arrives (assuming we do attain AGI).

Take a few reflective moments to ascertain where you stand on the thorny issue.

A final thought for now might be spurred by a famous line from Margaret Fuller, the noted nineteenth-century American journalist: “If you have knowledge, let others light their candles in it.”




University of North Carolina hiring Chief Artificial Intelligence Officer


The University of North Carolina (UNC) System Office has announced it is hiring a Chief Artificial Intelligence Officer (CAIO) to provide strategic vision, executive leadership, and operational oversight for AI integration across the 17-campus system.

Reporting directly to the Chief Operating Officer, the CAIO will be responsible for identifying, planning, and implementing system-wide AI initiatives. The role is designed to enhance administrative efficiency, reduce operational costs, improve educational outcomes, and support institutional missions across the UNC system.

The position will also act as a convenor of campus-level AI leads, data officers, and academic innovators, with a brief to ensure coherent strategies, shared best practices, and scalable implementations. According to the job description, the role requires coordination and diplomacy across diverse institutions to embed consistent policies and approaches to AI.

The UNC System Office includes the offices of the President and other senior administrators of the multi-campus system. Nearly 250,000 students are enrolled across 16 universities and the NC School of Science and Mathematics.

System Office staff are tasked with executing the policies of the UNC Board of Governors and providing university-wide leadership in academic affairs, financial management, planning, student affairs, and government relations. The office also has oversight of affiliates including PBS North Carolina, the North Carolina Arboretum, the NC State Education Assistance Authority, and University of North Carolina Press.

The new CAIO will work under a hybrid arrangement, with at least three days per week onsite at the Dillon Building in downtown Raleigh.

UNC’s move to appoint a CAIO reflects a growing trend of U.S. universities formalizing AI integration strategies at the leadership level. Last month, Rice University launched a search for an Assistant Director for AI and Education, tasked with leading faculty-focused innovation pilots and embedding responsible AI into classroom practice.
