
Ethics & Policy

Shaping Global AI Governance at the 80th UNGA

Global AI governance is currently at a critical juncture. Rapid advancements in technology are presenting exciting opportunities but also significant challenges. The rise of AI agents — AI systems that can reason, plan, and take direct action — makes strong international cooperation more crucial than ever. To create safer and more responsible AI that benefits people and society, we must work collectively on a global scale.

Partnership on AI (PAI) has been deeply engaged in these conversations, bridging the gap between AI development and responsible policy.

Our team has crossed the globe, connecting with partners and collaborators at key events this year, from the AI Action Summit in Paris to the World AI Conference in Shanghai and the Global AI Summit on Africa in Kigali. This builds on the discussions at PAI’s 2024 Policy Forum and the Policy Alignment on AI Transparency report published last October, both of which explored how AI governance efforts align with one another and highlighted the need for international cooperation and coordination on AI policy.

Our journey next takes us to the 80th session of the United Nations General Assembly (UNGA), taking place in New York this week.

In addition to marking the 80th anniversary of the UN, this year’s UNGA is a call for renewed commitment to multilateralism. It also serves as the official launch of the new UN Global Dialogue on AI Governance. The UN is a crucial piece of the global AI governance puzzle, as a universal and inclusive forum where every nation, regardless of size or influence, has a voice in shaping the future of this technology.

To celebrate this milestone anniversary, PAI is bringing together its community of Partners, policymakers, and other stakeholders for a series of events alongside the UNGA. This is a pivotal moment that demands increased global cooperation amid a challenging geopolitical environment. Our community has identified two particularly important and challenging areas for global AI governance this year:

  1. The opportunities and challenges of AI agents (with 2025 dubbed the “year of agents”) across different fields, including AI safety, human connection, and public policy
  2. The need to build a more robust global AI assurance ecosystem, where AI assurance is defined as the process of assessing whether an AI system or model is trustworthy

To inform these important discussions and build on our support for the UN Global Digital Compact, PAI is bringing both topics to the attention of the community of UN stakeholders through a series of UNGA side events and publications. These topics align with the mandates of two new UN AI mechanisms: the UN Independent International Scientific Panel on AI and the Global Dialogue.

The Scientific Panel is tasked with issuing “evidence-based scientific assessments” that synthesize and analyze existing research on the opportunities, risks, and impacts of AI.

Meanwhile, the role of the Global Dialogue is to discuss international cooperation, share best practices and lessons learned, and to facilitate discussions on AI governance to advance the sustainable development goals (SDGs), including on the development of trustworthy AI systems; the protection of human rights in the field of AI; and transparency, accountability, and human oversight consistent with international law.

AI agents are a new research topic that the international community needs to better understand, considering opportunities and potential risks in areas such as human oversight, transparency, and human rights. We expect this topic to be taken up by the Scientific Panel and brought to the attention of the Global Dialogue.

PAI’s work on AI agents includes three key publications:

  1. A Real-time Failure Detection Framework that provides guidance on how to monitor and thereby prevent critical failures in the deployment of autonomous AI agents, which could lead to hazards or real-world incidents that can harm people, disrupt infrastructure, or violate human rights.
  2. An International Policy Brief that offers anticipatory guidance on how to manage the potential cross-border harms and human rights impacts of AI agents, leveraging foundational global governance tools, i.e., international law, non-binding global norms, and global accountability mechanisms.
  3. A Policy Research Agenda that outlines priority questions that policymakers and the scientific community should explore to ensure that we govern AI agents in an informed manner domestically, regionally, and globally.

At the same time, we believe a robust AI assurance ecosystem is crucial to enabling trust and unlocking opportunities for adoption in line with the SDGs and international law. Both the Scientific Panel and the Global Dialogue can help fill significant research and implementation gaps in this area.

Looking ahead, we will expand our focus on AI assurance, with plans to publish a white paper, progress report, and international policy brief at the end of 2025 and early 2026. These publications will touch on issues ranging from the challenges to effective AI assurance, such as insufficient incentives and access to documentation, to AI assurance needs in the Global South.

We hope these contributions will not only inform discussions at the UN but also in other important international AI governance forums, including the OECD’s Global Partnership on AI Expert Group Meeting in November, the G20 Summit in November, and the AI Impact Summit in India next year.

The global conversation on AI governance is still in the early stages, and PAI is committed to ensuring that it is an inclusive, informed, and effective one. To stay up to date on our work in this area, sign up for our newsletter.





Ethical AI: Investing in a Responsible Future

Risks to Investors and Regulatory Momentum

Despite its potential, AI carries its own unique and significant risks. It can amplify bias, compromise privacy, and make opaque, unaccountable decisions, which could prove especially detrimental in high-stakes sectors such as finance, law enforcement, and healthcare. Key concerns include inaccuracy, discrimination arising from biased data, and privacy breaches due to cyber vulnerabilities. Additionally, the environmental footprint of AI is swiftly expanding: inference from models like ChatGPT already consumes over 124 GWh annually, and with compute demand doubling every 100 days, energy use is on a potential trajectory toward tens of terawatt-hours annually over the next few years. Water usage is heading in a similar direction, with up to 6.6 billion cubic meters of water projected to be consumed by 2027 – enough to meet Denmark’s yearly water needs.
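The figures above can be sanity-checked with a back-of-the-envelope projection. This is an illustrative sketch only: it assumes inference energy scales directly with the cited compute-demand doubling rate, which ignores efficiency gains and real-world deployment constraints.

```python
# Illustrative projection from the figures cited above.
# Assumption: inference energy scales with compute demand,
# which is said to double roughly every 100 days.

base_gwh = 124.0      # reported annual inference energy today (GWh)
doubling_days = 100   # cited compute-demand doubling period

for years in (1, 2, 3):
    doublings = years * 365 / doubling_days
    projected_gwh = base_gwh * 2 ** doublings
    print(f"after {years} year(s): {projected_gwh / 1000:.1f} TWh")
```

Under this crude extrapolation, annual demand crosses into the tens of terawatt-hours within roughly two years, consistent with the trajectory the paragraph describes.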

“Greenwashing”, which can arise when businesses overstate their “green” credentials (including where they underestimate or fail to fully understand the environmental impact of their AI use), is increasingly coming into focus. This is particularly pertinent to AI, as AI providers’ claims about their models’ energy and water usage are often opaque. In the UK, under new powers introduced by the Digital Markets, Competition and Consumers Act 2024, the Competition and Markets Authority can impose fines of up to 10% of a company’s global turnover for unfair commercial practices, including misleading environmental claims. As ESG becomes more important in supply chains, scrutiny of AI usage and its underlying environmental impact is only likely to increase.

To consider another ethical angle: Getty’s claim against Stability AI for copyright and trademark infringement over the data Stability AI used to train its AI model has drawn into sharp focus the ethics of how AI developers acquire their training data. Investors may want reassurance that the AI businesses in which they invest will not face the threat of litigation as a result of “stealing” data to develop their models.

Encouragingly, investor awareness of these issues is growing. The World Benchmarking Alliance’s Collective Impact Coalition for Digital Inclusion brings together 34 institutional investors representing over $6.9 trillion in assets, alongside 12 civil society groups. Their collective engagement has reportedly prompted 19 companies to adopt ethical AI principles since 2022; however, the work is far from over, with a recent report revealing that only 52 of 200 major tech firms disclose their ethical AI principles.  

Regulatory momentum is building globally. The EU AI Act is the most comprehensive AI regulatory framework implemented so far and, much as the GDPR set privacy standards globally, looks set to become the “gold standard” in AI regulation. The Act introduces a risk-based framework that bans AI applications posing unacceptable risk, imposes strict obligations on high-risk applications, mandates transparency, and requires those developing and deploying AI to be AI literate. As noted above, other countries are also increasingly regulating, although in its recently published Digital and Technologies Sector Plan the UK Government has stated its aim to take a more pro-innovation, sector-specific stance with lighter administrative burden, rather than implement a single piece of overarching regulation.

As AI becomes more accessible, thanks to a 280-fold drop in inference costs between November 2022 and October 2024, deployment is accelerating, making inclusive and ethical AI governance more urgent than ever. Businesses and investors alike would be wise to stay alert to the risks, particularly if ethical applications are key to their business plans or investment strategies. 

To help mitigate risks to investors, the Responsible Investment Association Australasia recommends stewardship and integration strategies, including human rights due diligence aligned with the UN Guiding Principles on Business and Human Rights (UNGPs). It also advocates prioritising engagement based on the severity and likelihood of impacts, and pushing for greater transparency, contestability, and accountability in AI governance.





Chile faces national debate as proposed bill to regulate AI use advances

“It’s not that a country like Chile aspires to have a seat at the table with the world’s greatest powers, but rather that it already has one,” stated Aisén Etcheverry, Chile’s Minister of Science, Technology, Knowledge and Innovation, earlier this year in an interview with France 24.

Her words capture Chile’s growing ambition to advance a pioneering bill to regulate artificial intelligence (AI), sparking a national debate over how to balance innovation with ethics. 

The Latin American Artificial Intelligence Index (ILIA) recently confirmed Chile as the regional leader in AI thanks to high levels of investment in technological infrastructure, training programmes and supporting policies. 

However, as President Gabriel Boric’s government seeks to expand the use of AI to drive modernization and sustainable growth in the country, there has been a sustained focus on discussions around implementing AI regulations to promote an “ethical, transparent and responsible use of AI for the benefit of all.” 

“Some companies see regulations as an opportunity to grow, while others view it as a burden. But in the long run, those who resist innovation will lose ground in the market,” said Sebastian Martinez, General Manager at Nisum Chile, a technology consulting and software development company, while in conversation with Latin America Reports.

The government’s proposed AI Regulation bill, first introduced to Congress in May 2024, was approved by the Chamber of Deputies on August 4, 2025, and has proceeded to the Committee of Future, Science, Technology, Knowledge, and Innovation to widen the conversation by drawing on the views of experts from the public, private, academic and civil society sectors.

Whilst the government maintains that the implementation of this bill would promote innovation and responsible development aligned with international standards, critics warn that tight regulation could instead hinder the technological progress the country aims to achieve. 

“Artificial intelligence isn’t a threat; it’s a tool. But unless we invest in educating people about it, fear will dominate, and Chile will miss out on the benefits this technology can bring,” Martinez noted. 

The proposed AI regulation bill

President Boric has consistently emphasised the importance of investing in artificial intelligence as a key driver of development in Chile and Latin America, placing ambition, innovation and informed decision-making at the heart of his government’s approach.

Whilst acknowledging the risks posed by AI, the president has underscored the human role and responsibility in regulating advanced technologies to ensure ethical practice. 

Speaking at the Congreso Futuro 2024 forum on artificial intelligence, Boric stated: “It is necessary to accompany its development with deep ethical reflection.” 

These comments were made shortly before his government introduced its proposed bill on May 7, 2024, aimed at ensuring that the development and use of AI in Chile respects citizens’ rights while also promoting innovation and strengthening the state’s capacity to respond to the risks and challenges posed by the technology. 

The bill is aligned with UNESCO’s Recommendation on the Ethics of AI, a framework that guided Chile in becoming the first country worldwide to apply and complete UNESCO’s Readiness Assessment Methodology (RAM). 

Audrey Azoulay, Director-General of UNESCO, praised the initiative, stating: “Chile has emerged as a global leader in ethical AI governance, and we are proud that UNESCO has played an essential role in helping achieve this landmark.”

If approved, the bill aims to boost innovation in the business sector, supporting small and medium-sized enterprises (SMEs) especially, by fostering the technological conditions needed for growth, whilst maintaining regulatory oversight of AI systems.

The proposal also seeks to protect Chileans from algorithmic discrimination, a lack of transparency in AI interactions, as well as AI decision-making that could affect fundamental rights in areas such as healthcare, education, law and finance.

Chile’s approach to regulating AI 

Chile’s proposed AI regulation adopts a risk-based framework, similar to the EU AI Act, classifying systems into four categories: unacceptable risk, high risk, limited risk, and no evident risk.

Under the proposal, AI systems considered to pose an unacceptable risk would be strictly banned. This includes technologies that undermine human dignity, such as those generating deepfakes or sexual content that exploits vulnerable groups like children and teenagers. 

The bill also prohibits systems designed to manipulate emotions and influence decisions without informed consent, as well as those that collect or process facial biometric data without explicit permission. 

High-risk AI systems are those that could significantly impact health, safety, fundamental constitutional rights, the environment, or consumer rights. AI tools used in recruitment processes to screen and filter job applications, for instance, fall under this category due to their potential bias and discrimination. 

Those deemed to pose limited risk include AI systems that present minimal potential for manipulation, deception, or error in user interactions — such as public service chatbots that respond to queries within their area of competence. At the lowest tier, systems considered to carry no evident risk are tools like recommendation engines for films or music: technologies that under no circumstances pose harm to fundamental rights.

Under this model, AI systems will not require pre-market certification or review. Instead, each company is responsible for assessing and classifying its own systems, according to the established risk categories. 
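The self-assessment model described above can be sketched in code. This is a hypothetical illustration: the four tier names follow the bill as reported, but the attribute names and decision rules below are invented for clarity and are not drawn from the bill's text.

```python
# Hypothetical self-assessment of an AI system against the bill's four
# risk tiers. Attribute names and rules are illustrative assumptions.

UNACCEPTABLE = "unacceptable risk"
HIGH = "high risk"
LIMITED = "limited risk"
NO_EVIDENT = "no evident risk"

def classify(system: dict) -> str:
    # Banned outright: e.g. content exploiting vulnerable groups,
    # emotional manipulation without informed consent, or facial
    # biometrics collected without explicit permission.
    if (system.get("exploits_vulnerable_groups")
            or system.get("manipulates_without_consent")
            or system.get("unconsented_biometrics")):
        return UNACCEPTABLE
    # Significant impact on health, safety, fundamental rights, the
    # environment, or consumers (e.g. CV-screening tools).
    if system.get("affects_fundamental_rights"):
        return HIGH
    # Minimal potential for manipulation or error in user interactions,
    # e.g. a public-service chatbot answering routine queries.
    if system.get("interacts_with_users"):
        return LIMITED
    # Everything else, e.g. film or music recommendation engines.
    return NO_EVIDENT

print(classify({"affects_fundamental_rights": True}))  # a hiring screener
print(classify({"interacts_with_users": True}))        # a chatbot
```

Because classification happens inside each company rather than through pre-market review, the practical weight of the regime falls on how honestly and consistently such self-assessments are performed.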

As explained by Minister Etcheverry, cases of non-compliance will lead to administrative sanctions imposed by the future Chilean Data Protection Agency, with decisions open to appeal before the country’s courts. 

Innovation or limitation? 

Whilst many actors in the public, private and civil society sectors support the proposed AI bill for its emphasis on responsible and ethical use of technology, experts have also raised concerns regarding the risk-based framework’s close alignment with EU standards and the potential bureaucracy that this model could introduce.

Sebastián Dueñas, researcher at the Law, Science and Technology Program at Pontificia Universidad Católica de Chile (UC), criticized the framework for its strict regulations and vague definition of what constitutes a “high-risk” system. He warned that such ambiguity could stifle innovation, discouraging developers who fear heavy sanctions.

The framework’s similarity to the EU AI Act has also raised doubts given the substantial differences between the Chilean context and that of the EU. Matías Aránguiz, professor at the Faculty of Law and deputy director of the Law, Science and Technology Program at UC, highlighted the disparity in budget and personnel as a major challenge in effectively implementing a similar risk-based regulatory approach in Chile.

In August, the Santiago Chamber of Commerce, a non-profit trade association representing over 2,500 companies across Chile’s key economic sectors, expressed concern about the bill’s potential impact. 

The association warned that the rigidity of the proposed risk-based framework could negatively affect technological development, investment, and national competitiveness.

The association emphasized the need to foster responsible AI development in Chile whilst avoiding overly restrictive regulations that could limit innovation and the technology’s transformative potential. 

Echoing this view, Dueñas commented: “Regulating AI is necessary, but doing so with the same rigidity as the European Union—just as they are trying to soften their own framework—would only add friction to Chilean development.”

For Martinez, on the other hand, what’s most needed is investment, rather than regulation. “Chile urgently needs to invest in AI. Without it, we risk falling further behind the U.S., and the gap between our markets will only continue to widen,” he stressed.

The government’s proposed AI regulation bill reflects more than two years of collaborative work, with input from the national AI Expert Committee, congressional commissions and members of both academia and civil society alike.

However, the debate continues as actors from diverse sectors convened on August 14 to highlight both the progress made and the complexities that remain in navigating this technological challenge.

This article was originally published by Nadia Hussain on Latin America Reports and was re-published with permission.





Guerra publishes on AI ethics and blockchain technology

Katia Guerra

Katia Guerra, assistant professor of information technology management, has had a series of her recent academic contributions published on subjects spanning ethical artificial intelligence, AI system adoption and blockchain technology. Guerra’s work highlights the multifaceted nature of modern technological research.

Guerra published two papers in the AMCIS 2025 Proceedings. The first, “Ethical AI Design and Implementation: A Systematic Literature Review,” examines how AI can be implemented ethically in order to comply with new rules and guidelines set by major governing bodies.

The second, a co-authored paper titled “AI Self-diagnosis Systems Adoption: A Socio Technical Perspective,” explores the environmental and technological factors at play when organizations adopt AI self-diagnosis systems. Both papers address significant aspects of AI development and implementation.

Additionally, Guerra had her work published in the International Review of Law, Computers & Technology, co-authoring “Blockchain technology: an analysis of economic, technical, and legal implications.” This paper details how blockchain technology is not yet ready to replace traditional business transactions, as it does not fully adhere to existing legal rules, and goes on to highlight how that can begin to change.


