Ethics & Policy

Keough School of Global Affairs hires AI specialist Mohammad Rifat

As higher education continues to reckon with the emergence of AI, the Keough School of Global Affairs recently hired Mohammad Rifat as an assistant professor of tech ethics and global affairs. He is a specialist in AI ethics, human-computer interaction and critical social science.

Rifat earned his doctorate in computer science from the University of Toronto this year. While he primarily works in the Keough School, he also holds a concurrent appointment in the Department of Computer Science and Engineering and collaborates closely with the Notre Dame Institute for Ethics and the Common Good.

“In Keough, we prioritize values like human development and human dignity, but I also get technical expertise from computer science,” Rifat said, reflecting on his experience within both fields.

Rifat’s research centers on how society can make AI more inclusive of marginalized communities. He explained that AI and computer systems are designed around the information accessible to them, which often comes from the most modernized cultures. Faith-based and traditional communities in the Global South may not be represented in the data sets AI systems are trained on, he said, and people in those communities may therefore be missing from the content these systems produce.

“AI is modernistic, and it uses the methods and methodologies and techniques and approaches in a way that reflects the institutions and techniques in modern culture, but not traditional cultures, not indigenous or faith-based cultures. The faith-based communities are not strong consumers of modernism,” Rifat said.

Rifat says these communities are being inadvertently marginalized and that it’s his goal to bring their history and life stories out of the shadows.
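
Rifat’s point about representation can be made concrete with a simple corpus audit. The sketch below is purely illustrative and is not drawn from Rifat’s work: it assumes a hypothetical corpus in which each training record carries a language tag, tallies each group’s share, and flags groups that fall below a chosen threshold.

```python
from collections import Counter

# Hypothetical training corpus: each record is tagged with the language
# (or community) it comes from. In practice these tags would come from
# corpus metadata or a language-identification pass.
corpus = (
    [{"text": "...", "language": "en"}] * 8   # dominant, "modernized" sources
    + [{"text": "...", "language": "bn"}]     # Bengali: a single document
    + [{"text": "...", "language": "sw"}]     # Swahili: a single document
)

def representation_report(records, min_share=0.15):
    """Print each group's share of the corpus and flag underrepresented groups."""
    counts = Counter(r["language"] for r in records)
    total = sum(counts.values())
    for lang, n in counts.most_common():
        share = n / total
        flag = "  <- underrepresented" if share < min_share else ""
        print(f"{lang}: {n} docs ({share:.0%}){flag}")

representation_report(corpus)
# en: 8 docs (80%)
# bn: 1 docs (10%)  <- underrepresented
# sw: 1 docs (10%)  <- underrepresented
```

A real audit would use far richer signals than a single tag, but even this toy version shows how easily a dominant group’s share can drown out everyone else in the training data.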

Professor Rifat also spoke about the responsibility scientists bear in an increasingly AI-dominated world.

“As a scientist myself, our responsibility is to steer AI towards the direction where the business interests don’t dictate how AI should serve the community. The deeper questions are human dignity, welfare and safety,” he said.

Source link


Ethics & Policy

How linguistic frames influence AI policy, education and ethics

Artificial intelligence is not only defined by its algorithms and applications but also by the language that frames it. A new study by Tiffany Petricini of Penn State Erie – The Behrend College reveals that the way AI is described profoundly influences how societies, policymakers, and educators perceive its role in culture and governance.

The research, titled “The Power of Language: Framing AI as an Assistant, Collaborator, or Transformative Force in Cultural Discourse” and published in AI & Society, examines the rhetorical and semantic frameworks that position AI as a tool, a partner, or a transformative force. The study argues that these framings are not neutral but carry cultural, political, and ethical weight, shaping both public imagination and institutional responses.

How linguistic frames influence perceptions of AI

The study identifies three dominant frames: cognitive offloading, augmented intelligence, and co-intelligence. Each of these linguistic choices embeds assumptions about what AI is and how it should interact with humans.

Cognitive offloading presents AI as a means of reducing human mental workload. This view highlights efficiency gains and productivity but raises concerns about dependency and reduced autonomy. By framing AI as a tool to handle cognitive burdens, societies risk normalizing reliance on systems that are not infallible, potentially weakening human critical judgment over time.

Augmented intelligence emphasizes AI as an extension of human ability. This optimistic narrative encourages a vision of collaboration where AI supports human decision-making. Yet the study cautions that this framing, while reassuring, can obscure structural issues such as labor displacement and the concentration of decision-making power in AI-driven systems.

Co-intelligence positions AI as a collaborator, creating a shared space where humans and machines produce meaning together. This framing offers a synergistic and even utopian vision of human–machine partnerships. However, the study highlights that such narratives blur distinctions between tools and agents, reinforcing anthropomorphic views that can distort both expectations and policy.

These framings are not just descriptive; they act as cultural signposts that influence how societies choose to regulate, adopt, and educate around AI.

What theoretical frameworks reveal about AI and language

To unpack these framings, the study draws on two major traditions: general semantics and media ecology. General semantics, rooted in Alfred Korzybski’s assertion that “the map is not the territory,” warns that words about AI often misrepresent the underlying technical reality. Descriptions that attribute thinking, creativity, or learning to machines are, in this view, category errors that mislead people into treating systems as human-like actors.

Media ecology, shaped by thinkers such as Marshall McLuhan, Neil Postman, and Walter Ong, emphasizes that communication technologies form environments that shape thought and culture. AI, when described as intelligent or collaborative, is not only a tool but part of a media ecosystem that reshapes how people view agency, trust, and authority. Petricini argues that these linguistic frames form “semantic environments” that shape imagination, policy, and cultural norms.

By placing AI discourse within these frameworks, the study reveals how misalignments between language and technical reality create distortions. For instance, when AI is linguistically elevated to the status of an autonomous agent, regulators may overemphasize machine responsibility and underemphasize human accountability.

What is at stake for policy, education and culture

The implications of these framings extend beyond semantics. The study finds that policy debates, education systems, and cultural narratives are all shaped by the language used to describe AI.

In policy, terms such as “trustworthy AI” or “high-risk AI” influence legal frameworks like the European Union’s AI Act. By anthropomorphizing or exaggerating AI’s autonomy, these discourses risk regulating machines as if they were independent actors, rather than systems built and controlled by people. Such linguistic distortions can divert attention away from human accountability and ethical responsibility in AI development.

In education, anthropomorphic metaphors such as AI “learning” or “thinking” create misconceptions for students and teachers. These terms can either inspire misplaced fear or encourage over-trust in AI systems. By reshaping how knowledge and learning are understood, the study warns, such framings may erode human-centered approaches to teaching and critical inquiry.

Culturally, the dominance of Western terminology risks sidelining diverse perspectives. Petricini points to the danger of “semantic imperialism,” where Western narratives impose a one-size-fits-all framing of AI that marginalizes non-Western traditions. For instance, Japan’s concept of Society 5.0 presents an alternative model in which AI is integrated into society with a participatory and pluralistic orientation. Recognizing such diversity, the study argues, is essential for creating more balanced global conversations about AI.



Source link


Ethics & Policy

The Human Purpose and the Ethics of Progress, ETHRWorld

A diverse group of people collaborating on ethical AI projects, reflecting inclusion, compassion, and community strength.

This article is the eighth part of a nine-part series that unpacks the evolution of intelligence, the rise of artificial intelligence, and its profound impact on jobs, ethics, society and purpose. The series will help readers understand how AI is reshaping job roles and what skills will matter most, reflect on ethical and psychological shifts AI may trigger in the workplace, and ask better questions about education, inclusion and purpose.

“We are called to be architects of the future, not its victims.” — R. Buckminster Fuller

Tom, now older, walks through a forest near his childhood village. He’s mentoring young students in ethics and technology. One asks, “What’s the point of all this AI if people are still lonely or hungry?” Tom smiles. “That’s the right question,” he says. He believes the purpose of intelligence—natural or artificial—is not domination, but compassion. As the sun sets, he feels a quiet hope. Maybe the future isn’t about smarter machines, but wiser humans.

What Is the Purpose of Human Life?

This question has echoed through philosophy, religion, and art for millennia. Is our purpose to create? To love? To understand? To serve?

In the age of AI, this question becomes urgent. If machines can think, work, and even simulate emotion—what is left for us?

The answer may lie not in what AI can do, but in what it cannot. AI can optimize, but it cannot care. It can simulate empathy, but it cannot suffer. It can generate beauty, but it cannot feel awe.

And perhaps most importantly, it cannot choose to care. Humans don’t just feel emotions—they act from them. Love becomes sacrifice. Awe becomes protection. Sorrow becomes protest. These are not lines of code—they are the beating pulse of a conscious life.

Human purpose is not just about intelligence—it’s about consciousness, connection, and conscience. As we delegate more tasks to machines, we must double down on what makes us human: our ability to give meaning, to endure suffering with grace, and to find joy beyond utility.

Fairness in the Age of AI

As AI reshapes the world, fairness must be our compass. This means:

• Equity of access: Ensuring rural, tribal, and marginalized communities are not left behind.

• Ethical design: Building AI that respects privacy, dignity, and diversity.

• Inclusive governance: Giving all voices a seat at the table—especially those most affected.

Tom remembers the tribal families he met as a child—struggling for water, ignored by systems. He remembers villages with no digital access but rich with oral traditions. These were not data-rich zones, but they were wisdom-rich. Yet algorithms rarely hear from them.

We must avoid building systems that optimize only for the lives of the loudest and most visible. Fairness is not a technical feature—it’s a moral stance. It demands that we look beyond convenience and efficiency and ask: Who benefits? Who is harmed? Who is invisible?

We don’t just need inclusive tools; we need inclusive visions.

Designing for Humanity

Technology is not destiny. It reflects the values of its creators. We must design AI that:

• Amplifies human potential rather than replacing it.

• Supports mental and emotional wellbeing rather than exploiting it.

• Strengthens communities rather than isolating individuals.

Designing AI for humanity means resisting the seductive pull of efficiency above all else. It means asking how our tools shape habits, culture, and relationships. If social media algorithms reward outrage, then outrage becomes the norm. If hiring systems absorb historical bias, injustice persists in digital form.
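
That last point can be shown with a toy example. The sketch below uses invented numbers, not anything from this article: a naive “model” that memorizes acceptance rates from skewed historical hiring decisions ends up reproducing exactly the injustice it was trained on.

```python
from collections import defaultdict

# Toy historical hiring records (hypothetical data, for illustration only):
# candidates from group A were accepted far more often than equally
# qualified candidates from group B.
history = (
    [("A", True)] * 80 + [("A", False)] * 20
    + [("B", True)] * 30 + [("B", False)] * 70
)

# "Training": the model simply memorizes each group's historical acceptance rate.
totals, accepts = defaultdict(int), defaultdict(int)
for group, accepted in history:
    totals[group] += 1
    accepts[group] += accepted

def predict(group):
    """Accept a new candidate whenever the group's historical rate exceeds 50%."""
    return accepts[group] / totals[group] > 0.5

for group in ("A", "B"):
    rate = accepts[group] / totals[group]
    print(f"group {group}: historical rate {rate:.0%}, model accepts: {predict(group)}")
# group A: historical rate 80%, model accepts: True
# group B: historical rate 30%, model accepts: False
```

Real systems are subtler, but the mechanism is the same: whatever pattern sits in the historical data, fair or not, becomes the model’s default behaviour.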

Design must go beyond usability and user experience—it must ask what kind of world a system makes more likely. This is the new design brief: Create systems that leave people more whole, not more addicted. More curious, not more cynical. More connected, not more fragmented.

This requires collaboration between technologists, ethicists, educators, artists, and citizens. It requires wisdom, not just intelligence. It means slowing down sometimes—not to delay innovation, but to deepen it.

A Day in Tom’s Life

Tom, now a mentor and elder voice in the AI community, shares less about the how of machines, and more about the why of life. He spends time with students—not teaching code, but teaching compassion. He listens more than he speaks. He reminds them that no breakthrough matters if it does not help someone in need.

He tells them stories of people building low-cost translation apps for Indigenous languages. Of students using AI to map missing persons in disaster zones. Of technologists who left big salaries to work on open-source tools for refugees and teachers.

Every day, he sees how young people are not just hungry for power—they are hungry for meaning. His role, he feels, is not to give them answers, but to help them ask better questions.

The Social Media Mirror

Online, the narrative is shifting. People are asking deeper questions:

What kind of world are we building?

Who gets to decide?

What does it mean to live a good life in the age of AI?

Social media, for all its toxicity, is also a mirror. It reflects our fears, but also our longing. In a sea of memes and misinformation, you can still find grassroots movements, intergenerational conversations, and voices previously unheard.

Tom sees young creators using AI to tell stories of justice. He sees elders sharing wisdom through digital platforms. He sees a new kind of intelligence emerging—not artificial, but collective.

It’s messy. It’s imperfect. But it’s alive—and that may be what matters most.

The Bigger Picture

AI is not the end of the human story. It is a new chapter—one that forces us to grow, not just technologically, but morally and spiritually.

The future will not be written by machines. It will be written by the decisions we make—about fairness, about purpose, and about what we choose to protect.

And so, we must widen our lens. The question is not just what the future of AI is. It is what the future of us will be. Will we use our tools to colonize or to collaborate? To extract or to restore? To automate apathy—or awaken empathy?

There are many futures available. But only one will be chosen.

Critique: The Ethical Blind Spot

Even now, the ethical framework around AI remains thin. We build models that can mimic genius but forget to embed values. We train AI on global data but deploy it in cultural vacuums. We idolize “intelligence” but devalue wisdom.

Education systems are outdated. Regulation is reactionary. Moral discourse often lags behind technological disruption. The deepest failure isn’t technical—it’s philosophical.

We must ask not only: What can AI do?

But also: What should we do with it?

And more importantly: Who are we becoming because of it?

When efficiency becomes a religion, we forget how to honour slowness. When scale becomes the goal, we overlook the sacred. When simulation replaces presence, we lose the texture of real life.

Tom’s critique is not anti-technology. It’s pro-humanity. He warns that a society obsessed with progress, but blind to meaning, will eventually lose both.

Why This Chapter Matters

This chapter isn’t about AI—it’s about us.

About what we value. About who we include. About the kind of world we’re willing to imagine—and fight for.

The journey of intelligence—from neurons to nations, from fire to fibre optics—has brought us to a profound crossroads.

Now we must ask:

Will we build a future of algorithms, or a future of ethics? Will we pursue power, or purpose?

That answer is still human.

Coming Up Next

Final Chapter – The Rise of Machine Intelligence: Utopia or Dystopia?

We enter uncharted territory. A world where intelligence is no longer human-only. What does it mean when machines begin to surpass our minds? Will we see abundance—or collapse? Evolution— or extinction?

Join us as we explore the edge of what comes next.

DISCLAIMER: The views expressed are solely of the author and ETHRWorld does not necessarily subscribe to it. ETHRWorld will not be responsible for any damage caused to any person or organisation directly or indirectly.

  • Published On Sep 14, 2025 at 02:20 PM IST


Source link
