
Ethics & Policy

Artificial intelligence in the international sphere



Artificial intelligence (AI) is now widely used in both professional and personal life, and it is transforming the public and private sectors alike, from healthcare, education, and employment to transportation and defence. However, collective action is needed to address the risks inherent in AI, and ensuring its responsible and ethical use in the international sphere has become a global priority.

Although AI systems can improve efficiency by automating tasks and reducing operational burdens, they also introduce new and complex vulnerabilities. The same tools that enable public institutions and private entities to function more effectively can be misused to design bioweapons, conduct sophisticated cyberattacks (such as phishing, whaling, denial-of-service (DoS), and man-in-the-middle (MitM) operations), amplify disinformation, and enable authoritarian surveillance.

AI in modern conflicts

In the defence sector, AI is already reshaping the landscape of modern conflict. Autonomous Weapons Systems (AWS) and Lethal Autonomous Weapons (LAWs) increasingly operate with limited or no human intervention. These developments raise fundamental ethical, legal and security questions, including those related to accountability and the potential for rapid conflict escalation.

A regulatory void

International policymaking and legal responses are not keeping pace with these technological advances, and a regulatory void has emerged. The absence of accountability frameworks, early-warning mechanisms, and consistent cooperation among international organizations (IOs) has allowed malicious actors to operate without meaningful consequences.

In an environment where it is increasingly difficult to attribute cyberattacks to specific actors, where there is a lack of clarity around the definitions of AI and its components, and where incidents affecting governments, civil society, businesses, and communities continue to grow, trust and cooperation among States and international organizations are more essential than ever.

The importance of multilateral cooperation

As António Guterres emphasised at the AI Action Summit in France (February 2025):

 “We need concerted efforts to build sustainable digital infrastructure at an unprecedented scale”.

A proactive, inclusive and rules-based approach, grounded in shared responsibility and human rights, must guide the global community’s efforts in governing AI effectively. It is only through robust, coordinated action that the world can harness AI for sustainable development, peace, and international security.

Governing AI: The role of the United Nations and other key actors

The United Nations has taken several steps to foster regulatory approaches to AI, maximising its benefits and effectively managing the associated risks.

High-Level Advisory Body on Artificial Intelligence (2023–2024): The UN Secretary-General brought together 39 preeminent AI leaders from 33 countries across all regions and multiple sectors to advise on global AI governance, aligning it with human rights and the Sustainable Development Goals. The Body’s final report outlines a blueprint for addressing AI-related risks and sharing its transformative potential globally.

The Global Digital Compact: A comprehensive global framework for digital cooperation and governance of artificial intelligence.

ITU’s AI for Good Initiative: The United Nations’ leading platform on artificial intelligence to solve global challenges. It connects policymakers, researchers, and businesses to promote the use of AI in support of the Sustainable Development Goals (SDGs).

UNESCO’s Recommendation on the Ethics of Artificial Intelligence: UNESCO’s first-ever global standard on AI ethics, adopted in 2021, is applicable to all 194 member states of UNESCO. It outlines principles on transparency, accountability, and data governance.

As of 1 January 2025, a new UN Office for Digital and Emerging Technologies (ODET) has been established. A key focus for the Office will be supporting the follow-up and implementation of the Global Digital Compact, including its decisions on AI governance.

Regionally, the European Union has emerged as a global regulatory leader with the adoption of the AI Act in 2024. The AI Act is the first comprehensive legal framework on AI worldwide, aiming to foster trustworthy AI in Europe.

The OECD AI Principles were adopted in 2019 and updated in 2024. They are the first intergovernmental standard on AI, promoting innovative and trustworthy AI that respects human rights and democratic values. The Principles provide practical and flexible guidance for policymakers and AI actors.

The importance of collective action

It takes a collective effort to ensure the responsible use of AI, close the governance gap, and align AI development with human dignity, peace, and sustainability. In an era of digital globalization, regulating AI at the national level is increasingly challenging, making global coordination essential. To achieve this, the UN promotes multilateral cooperation among its member states, alongside collaboration between international organizations and partnerships between the public and private sectors. This coordinated approach is a prerequisite to building a prosperous future resilient to the threats posed by malicious actors.

 

Further reading:

AI: Transformative power and governance challenges

UN addresses AI and the Dangers of Lethal Autonomous Weapons Systems

Artificial Intelligence – Selected Online Resources





Culture x Code: AI, Human Values & the Future of Creativity | Abu Dhabi Culture Summit 2025




Step into the future of creativity at the Abu Dhabi Culture Summit 2025. This video explores how artificial intelligence is reshaping cultural preservation, creation, and access. Featuring HE Sheikh Salem bin Khalid Al Qassimi on the UAE’s cultural AI strategy, Tracy Chan (Splash) on Gen Z’s role in co-creating culture, and Iyad Rahwan on the rise of “machine culture” and the ethics of AI for global inclusion.

Discover how India is leveraging AI to preserve its heritage and foster its creative economy. The session underscores a shared vision for a “co-human” future — where technology enhances, rather than replaces, human values and cultural expression.








Good robot, bad robot: the ethics of AI



This post was paid for and produced by our sponsor, Olin College, in collaboration with WBUR’s Business Partnerships team. WBUR’s editorial teams are independent of business teams and were not involved in the production of this post. For more information about Olin College, click here.

In response to a future that will increasingly be shaped by AI, Olin College is incorporating AI and ethics concepts into multiple courses and disciplines for today's engineering students. By preparing tomorrow's leading engineers to develop confident, competent perspectives on how to use AI, the college equips its students to make ethical decisions throughout their careers.

For example, in its ‘Artificial Intelligence and Society’ class, students examine the impact of engineering on humanity and its ethical implications through multiple perspectives, including anthropology and computer science.

Each week, Olin students examine different topics, from bias in large language models like ChatGPT to parallels between perspectives on AI today and the 19th-century Luddite movement of English textile workers who opposed the use of cost-saving machinery. They also hear from healthcare and climate researchers who discuss the benefits of AI in their fields, such as using machine learning to identify inequities in the healthcare system or to improve renewable energy storage.

For their final project, students work in groups to design AI ethics content that can be incorporated into existing Olin courses. Together, students and faculty design problems for future engineering students to dissect, such as the ethical question of when to use AI tools in real-life scenarios.

By pioneering this curriculum, Olin equips the next generation of its engineers with excellent technical skills that complement their desire to change the world and their ability to adapt to a rapidly changing society.

Founded just twenty-five years ago, Olin College of Engineering has made a name for itself in the world of undergraduate engineering education. It is currently ranked the No. 2 undergraduate engineering program by US News & World Report. Olin was the first undergraduate engineering school in the United States to achieve gender parity, with half its student population being women. It is known around the world for its innovative curriculum. In a recent study, “The global state of the art in engineering education,” Olin was named one of the world’s most highly regarded undergraduate engineering programs.

The curriculum at Olin College is centered around providing students with real-world experiences. Students complete dozens of projects over their four years, preparing them well for the workforce of today — and tomorrow. And the world needs more engineers: US labor statistics suggest the country will need six million more engineering graduates to fully meet the demand for their critical skill set.

An emphasis on ethics isn’t surprising given that Olin’s most visible alumna is Facebook whistleblower Frances Haugen. In her new book “The Power of One,” Haugen writes about her experience at Olin as a place that “believed integrating the humanities into its engineering curriculum was essential because it wanted its alumni to understand not just whether a solution could be built, but whether it should be built.”

Learn more about Olin’s unique approach to engineering education at olin.edu.






An AI Ethics Roadmap Beyond Academic Integrity For Higher Education



Higher education institutions are rapidly embracing artificial intelligence, but often without a comprehensive strategic framework. According to the 2025 EDUCAUSE AI Landscape Study, 74% of institutions prioritized AI use for academic integrity alongside other core challenges like coursework (65%) and assessment (54%). At the same time, 68% of respondents say students use AI “somewhat more” or “a lot more” than faculty.

These data underscore a potential misalignment: institutions recognize integrity as a top concern, but students are racing ahead with AI while faculty lack commensurate fluency. As a result, AI ethics debates are unfolding in classrooms led by underprepared educators.

Integrating ethical considerations alongside AI tools in education is paramount. Employers have made it clear that ethical reasoning and responsible technology use are critical skills in today’s workforce. According to the Graduate Management Admission Council’s 2024 Corporate Recruiters Survey, these skills are increasingly vital for graduates, underscoring ethics as a competitive advantage rather than merely a supplemental skill.

Yet, many institutions struggle to clearly define how ethics should intertwine with their AI-enhanced pedagogical practices. Recent discussions with education leaders from Grammarly, SAS, and the University of Delaware offer actionable strategies to ethically and strategically integrate AI into higher education.

Ethical AI At The Core

Grammarly’s commitment to ethical AI was partially inspired by a viral incident: a student using Grammarly’s writing support was incorrectly accused of plagiarism by an AI detector. In response, Grammarly introduced Authorship, a transparency tool that delineates student-created content from AI-generated or refined content. Authorship provides crucial context for student edits, enabling educators to shift from suspicion to meaningful teaching moments.

Similarly, SAS has embedded ethical safeguards into its platform, SAS Viya, featuring built-in bias detection tools and ethically vetted “model cards.” These features help students and faculty bring awareness to and proactively address potential biases in AI models.

SAS supports faculty through comprehensive professional development, including an upcoming AI Foundations credential with a module focused on Responsible Innovation and Trustworthy AI. Grammarly partners directly with institutions like the University of Florida, where Associate Provost Brian Harfe redesigned a general education course to emphasize reflective engagement with AI tools, enhancing student agency and ethical awareness.

Campus Spotlight: University of Delaware

The University of Delaware offers a compelling case study. In the wake of COVID-19, their Academic Technology Services team tapped into 15 years of lecture capture data to build “Study Aid,” a generative AI-powered tool that helps students create flashcards, quizzes, and summaries from course transcripts. Led by instructional designer Erin Ford Sicuranza and developer Jevonia Harris, the initiative exemplifies ethical, inclusive innovation:

  • Data Integrity: The system uses time-coded transcripts, ensuring auditability and traceability.
  • Human in the Loop: Faculty validate topics before the content is used.
  • Knowledge Graph Approach: Instead of retrieval-based AI, the tool builds structured data to map relationships and respect academic complexity.
  • Cross-Campus Collaboration: Librarians, engineers, data scientists, and faculty were involved from the start.
  • Ethical Guardrails: Student access is gated until full review, and the university retains consent-based control over data.

Though the tool is still in its pilot phase, faculty from diverse disciplines—psychology, climate science, marketing—have opted in. With support from AWS and a growing slate of speaking engagements, UD has emerged as a national model. Its “Aim Higher” initiative brought together IT leaders, faculty, and software developers for a conference and hands-on AI Makerspace in June 2025.

As Sicuranza put it: “We didn’t set out to build AI. We used existing tools in a new way—and we did it ethically.”

An Ethical Roadmap For The AI Era

Artificial intelligence is not a neutral force—it reflects the values of its designers and users. As colleges and universities prepare students for AI-rich futures, they must do more than teach tools. They must cultivate responsibility, critical thinking, and the ethical imagination to use AI wisely. Institutions that lead on ethics will shape the future—not just of higher education, but of society itself.

Now is the time to act by building capacity, empowering communities, and leading with purpose.




