
Ethics & Policy

Vatican Hosts Historic “Grace for the World” Concert and AI Ethics Summit



Crowds gather in St. Peter’s Square for the concert ‘Grace for the World,’ co-directed by Andrea Bocelli and Pharrell Williams, as part of the World Meeting on Human Fraternity aimed at promoting unity, in the Vatican, September 13, 2025. REUTERS/Ciro De Luca

According to CNN

A historic concert titled “Grace for the World” took place at the Vatican, bringing global pop stars to St. Peter’s Square for the first time. The lineup featured John Legend, Teddy Swims, Karol G, and other stars, and the broadcast was carried by CNN and ABC News. The concert was held as part of the Third World Meeting on Human Fraternity and was open to everyone.

During the event, performances spanning various genres graced the stage. Among the participants were Thai rapper BamBam of GOT7, Black Eyed Peas frontman will.i.am, and American singer Pharrell Williams. Between performances, Vatican cardinals addressed the audience with calls to remain humane and to uphold mutual respect among people.

Key Moments of the Event

“to remain humane”

– Vatican Cardinals

Within the framework of the Third World Meeting on Human Fraternity, the Vatican also hosted a discussion of artificial intelligence and the ethical regulation of its use. Summit participants emphasized the need to establish international norms and governance systems for artificial intelligence to ensure the safety of societies. Leading experts joined the discussion: Geoffrey Hinton, known as the “godfather of artificial intelligence”; Max Tegmark of the Massachusetts Institute of Technology; Jimena Sofía Viveros Álvarez; and Marco Trombetti, founder of Translated. Pope Leo XIV also took part in the discussion and reaffirmed his predecessor’s call for a unified agreement on the use of artificial intelligence.

“to define local and international pathways for developing new forms of social charity and to see the image of God in the poor, refugees, and even adversaries.”

– Pope Leo XIV

Participants also discussed the risk of a digital divide between countries with access to AI and those without it, and called for concrete local and international initiatives aimed at developing new forms of social charity and supporting the most vulnerable segments of the population.


Ethics & Policy

How linguistic frames influence AI policy, education and ethics



Artificial intelligence is not only defined by its algorithms and applications but also by the language that frames it. A new study by Tiffany Petricini of Penn State Erie – The Behrend College reveals that the way AI is described profoundly influences how societies, policymakers, and educators perceive its role in culture and governance.

The research, titled “The Power of Language: Framing AI as an Assistant, Collaborator, or Transformative Force in Cultural Discourse” and published in AI & Society, examines the rhetorical and semantic frameworks that position AI as a tool, a partner, or a transformative force. The study argues that these framings are not neutral but carry cultural, political, and ethical weight, shaping both public imagination and institutional responses.

How linguistic frames influence perceptions of AI

The study identifies three dominant frames: cognitive offloading, augmented intelligence, and co-intelligence. Each of these linguistic choices embeds assumptions about what AI is and how it should interact with humans.

Cognitive offloading presents AI as a means of reducing human mental workload. This view highlights efficiency gains and productivity but raises concerns about dependency and reduced autonomy. By framing AI as a tool to handle cognitive burdens, societies risk normalizing reliance on systems that are not infallible, potentially weakening human critical judgment over time.

Augmented intelligence emphasizes AI as an extension of human ability. This optimistic narrative encourages a vision of collaboration where AI supports human decision-making. Yet the study cautions that this framing, while reassuring, can obscure structural issues such as labor displacement and the concentration of decision-making power in AI-driven systems.

Co-intelligence positions AI as a collaborator, creating a shared space where humans and machines produce meaning together. This framing offers a synergistic and even utopian vision of human–machine partnerships. However, the study highlights that such narratives blur distinctions between tools and agents, reinforcing anthropomorphic views that can distort both expectations and policy.

These framings are not just descriptive; they act as cultural signposts that influence how societies choose to regulate, adopt, and educate around AI.

What theoretical frameworks reveal about AI and language

To unpack these framings, the study draws on two major traditions: general semantics and media ecology. General semantics, rooted in Alfred Korzybski’s assertion that “the map is not the territory,” warns that words about AI often misrepresent the underlying technical reality. Descriptions that attribute thinking, creativity, or learning to machines are, in this view, category errors that mislead people into treating systems as human-like actors.

Media ecology, shaped by thinkers such as Marshall McLuhan, Neil Postman, and Walter Ong, emphasizes that communication technologies form environments that shape thought and culture. AI, when described as intelligent or collaborative, is not only a tool but part of a media ecosystem that reshapes how people view agency, trust, and authority. Petricini argues that these linguistic frames form “semantic environments” that shape imagination, policy, and cultural norms.

By placing AI discourse within these frameworks, the study reveals how misalignments between language and technical reality create distortions. For instance, when AI is linguistically elevated to the status of an autonomous agent, regulators may overemphasize machine responsibility and underemphasize human accountability.

What is at stake for policy, education and culture

The implications of these framings extend beyond semantics. The study finds that policy debates, education systems, and cultural narratives are all shaped by the language used to describe AI.

In policy, terms such as “trustworthy AI” or “high-risk AI” influence legal frameworks like the European Union’s AI Act. By anthropomorphizing or exaggerating AI’s autonomy, these discourses risk regulating machines as if they were independent actors, rather than systems built and controlled by people. Such linguistic distortions can divert attention away from human accountability and ethical responsibility in AI development.

In education, anthropomorphic metaphors such as AI “learning” or “thinking” create misconceptions for students and teachers. These terms can either inspire misplaced fear or encourage over-trust in AI systems. By reshaping how knowledge and learning are understood, the study warns, such framings may erode human-centered approaches to teaching and critical inquiry.

Culturally, the dominance of Western terminology risks sidelining diverse perspectives. Petricini points to the danger of “semantic imperialism,” where Western narratives impose a one-size-fits-all framing of AI that marginalizes non-Western traditions. For instance, Japan’s concept of Society 5.0 presents an alternative model in which AI is integrated into society with a participatory and pluralistic orientation. Recognizing such diversity, the study argues, is essential for creating more balanced global conversations about AI.




Ethics & Policy

The Human Purpose and the Ethics of Progress



A diverse group of people collaborating on ethical AI projects, reflecting inclusion, compassion, and community strength.

This article is the eighth part of a nine-part series that unpacks the evolution of intelligence, the rise of artificial intelligence, and its profound impact on jobs, ethics, society and purpose. The series will help readers understand how AI is reshaping job roles and what skills will matter most, reflect on ethical and psychological shifts AI may trigger in the workplace, and ask better questions about education, inclusion and purpose.

“We are called to be architects of the future, not its victims.” — R. Buckminster Fuller

Tom, now older, walks through a forest near his childhood village. He’s mentoring young students in ethics and technology. One asks, “What’s the point of all this AI if people are still lonely or hungry?” Tom smiles. “That’s the right question,” he says. He believes the purpose of intelligence—natural or artificial—is not domination, but compassion. As the sun sets, he feels a quiet hope. Maybe the future isn’t about smarter machines, but wiser humans.

What Is the Purpose of Human Life?

This question has echoed through philosophy, religion, and art for millennia. Is our purpose to create? To love? To understand? To serve?

In the age of AI, this question becomes urgent. If machines can think, work, and even simulate emotion—what is left for us?

The answer may lie not in what AI can do, but in what it cannot. AI can optimize, but it cannot care. It can simulate empathy, but it cannot suffer. It can generate beauty, but it cannot feel awe.

And perhaps most importantly, it cannot choose to care. Humans don’t just feel emotions—they act from them. Love becomes sacrifice. Awe becomes protection. Sorrow becomes protest. These are not lines of code—they are the beating pulse of a conscious life.

Human purpose is not just about intelligence—it’s about consciousness, connection, and conscience. As we delegate more tasks to machines, we must double down on what makes us human: our ability to give meaning, to endure suffering with grace, and to find joy beyond utility.

Fairness in the Age of AI

As AI reshapes the world, fairness must be our compass. This means:

Equity of access: Ensuring rural, tribal, and marginalized communities are not left behind.

Ethical design: Building AI that respects privacy, dignity, and diversity.

Inclusive governance: Giving all voices a seat at the table—especially those most affected.

Tom remembers the tribal families he met as a child—struggling for water, ignored by systems. He remembers villages with no digital access but rich with oral traditions. These were not data-rich zones, but they were wisdom-rich. Yet algorithms rarely hear from them.

We must avoid building systems that optimize only for the lives of the loudest and most visible. Fairness is not a technical feature—it’s a moral stance. It demands that we look beyond convenience and efficiency and ask: Who benefits? Who is harmed? Who is invisible?

We don’t just need inclusive tools; we need inclusive visions.

Designing for Humanity

Technology is not destiny. It reflects the values of its creators. We must design AI that:

• Amplifies human potential, not replaces it.

• Supports mental and emotional wellbeing, not exploits it.

• Strengthens communities, not isolates individuals.

Designing AI for humanity means resisting the seductive pull of efficiency above all else. It means asking how our tools shape habits, culture, and relationships. If social media algorithms reward outrage, then outrage becomes the norm. If hiring systems absorb historical bias, injustice persists in digital form.
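To make that last point concrete, here is a minimal sketch (not from the article; it assumes Python with scikit-learn and uses purely synthetic data with a hypothetical "group" attribute) of how a screening model trained on historically biased hiring decisions quietly reproduces that bias:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
skill = rng.normal(size=n)            # true qualification signal
group = rng.integers(0, 2, size=n)    # 0 = majority, 1 = minority (hypothetical)

# Historical decisions: driven by skill, but group 1 was systematically penalized.
hired = (skill - 0.8 * group + rng.normal(scale=0.5, size=n)) > 0

# Train a screening model on the biased historical record.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Two equally skilled candidates who differ only in group membership:
probs = model.predict_proba([[1.0, 0.0], [1.0, 1.0]])[:, 1]
print(probs)  # the group-1 candidate gets a markedly lower hire probability
```

The model is never instructed to discriminate; it simply learns the pattern the historical data encodes, which is exactly how injustice persists in digital form.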

Design must go beyond usability and user experience—it must ask what kind of world a system makes more likely. This is the new design brief: Create systems that leave people more whole, not more addicted. More curious, not more cynical. More connected, not more fragmented.

This requires collaboration between technologists, ethicists, educators, artists, and citizens. It requires wisdom, not just intelligence. It means slowing down sometimes—not to delay innovation, but to deepen it.

A Day in Tom’s Life

Tom, now a mentor and elder voice in the AI community, shares less about the “how” of machines and more about the “why” of life. He spends time with students—not teaching code, but teaching compassion. He listens more than he speaks. He reminds them that no breakthrough matters if it does not help someone in need.

He tells them stories of people building low-cost translation apps for Indigenous languages. Of students using AI to map missing persons in disaster zones. Of technologists who left big salaries to work on open-source tools for refugees and teachers.

Every day, he sees how young people are not just hungry for power—they are hungry for meaning. His role, he feels, is not to give them answers, but to help them ask better questions.

The Social Media Mirror

Online, the narrative is shifting. People are asking deeper questions:

What kind of world are we building?

Who gets to decide?

What does it mean to live a good life in the age of AI?

Social media, for all its toxicity, is also a mirror. It reflects our fears, but also our longing. In a sea of memes and misinformation, you can still find grassroots movements, intergenerational conversations, and voices previously unheard.

Tom sees young creators using AI to tell stories of justice. He sees elders sharing wisdom through digital platforms. He sees a new kind of intelligence emerging—not artificial, but collective.

It’s messy. It’s imperfect. But it’s alive—and that may be what matters most.

The Bigger Picture

AI is not the end of the human story. It is a new chapter—one that forces us to grow, not just technologically, but morally and spiritually.

The future will not be written by machines. It will be written by the decisions we make—about fairness, about purpose, and about what we choose to protect.

And so, we must widen our lens. The question is not just what the future of AI is. It is what the future of us will be. Will we use our tools to colonize or to collaborate? To extract or to restore? To automate apathy—or awaken empathy?

There are many futures available. But only one will be chosen.

Critique: The Ethical Blind Spot

Even now, the ethical framework around AI remains thin. We build models that can mimic genius but forget to embed values. We train AI on global data but deploy it in cultural vacuums. We idolize “intelligence” but devalue wisdom.

Education systems are outdated. Regulation is reactionary. Moral discourse often lags behind technological disruption. The deepest failure isn’t technical—it’s philosophical.

We must ask not only: What can AI do?

But also: What should we do with it?

And more importantly: Who are we becoming because of it?

When efficiency becomes a religion, we forget how to honour slowness. When scale becomes the goal, we overlook the sacred. When simulation replaces presence, we lose the texture of real life.

Tom’s critique is not anti-technology. It’s pro-humanity. He warns that a society obsessed with progress, but blind to meaning, will eventually lose both.

Why This Chapter Matters

This chapter isn’t about AI—it’s about us.

About what we value. About who we include. About the kind of world we’re willing to imagine—and fight for.

The journey of intelligence—from neurons to nations, from fire to fibre optics—has brought us to a profound crossroads.

Now we must ask:

Will we build a future of algorithms, or a future of ethics? Will we pursue power, or purpose?

That answer is still human.

Coming Up Next

Final Chapter – The Rise of Machine Intelligence: Utopia or Dystopia?

We enter uncharted territory. A world where intelligence is no longer human-only. What does it mean when machines begin to surpass our minds? Will we see abundance—or collapse? Evolution—or extinction?

Join us as we explore the edge of what comes next.

DISCLAIMER: The views expressed are solely of the author and ETHRWorld does not necessarily subscribe to it. ETHRWorld will not be responsible for any damage caused to any person or organisation directly or indirectly.

Published On Sep 14, 2025 at 02:20 PM IST


Ethics & Policy

Pet Dog Joins Google’s Gemini AI Retro Photo Trend! Internet Can’t Get Enough



Beautiful retro pictures of people in breathtaking ethnic wear, posed in front of an aesthetically pleasing wall under the golden hour, are currently all over social media! All in all, a new trend is in the ‘internet town’ and it’s spreading fast. For those not aware, it’s basically a trend where netizens use Google’s Gemini AI to create a rather beautiful retro version of themselves; in a nutshell, social media is currently full of such pictures. However, when this PET DOG joined the bandwagon, many instantly declared the furry one the winner, and for obvious reasons. The video showed the trend applied to the pet dog, and the result was simply heartwarming. The AI-generated pictures showed the cute one draped in multiple dupattas, with ears that looked like the perfect hairstyle one could ask for in a pet. Most netizens loved the video, while some expressed their desire to try the same on their pets. Times Now could not confirm the authenticity of the post.

Image Source: Jinnie Bhatt/ Instagram




