Illustrations of Ethics in AI

THIS GRAPHIC COMIC, depicting students discussing whether artificial intelligence could possess consciousness, was inspired by a book Dr. Rocco Gennaro, Professor of Philosophy, is working on, tentatively titled Dialogues on Minds, Machines, and AI (forthcoming, Routledge Press). Gennaro plans to teach a course on AI Ethics in Fall 2025.

The script for this illustration was edited from Gennaro’s book by C. L. Stambush, Editor/Senior Writer. Charles Armstrong, Associate Professor of Graphic Design, used the script as an assignment for his students to illustrate. The comic in this issue of Illume was illustrated by Kamyrn Johnson. Scroll down to see all the illustrations created by Armstrong’s students.



Sam Altman on AI morality, ethics and finding God in ChatGPT

Look hard enough at an AI chatbot’s output and it starts to look like scripture. At least, that’s the unsettling undercurrent of Sam Altman’s recent interview with Tucker Carlson – a 57-minute exchange that ranged from deepfakes to divine design, from moral AI frameworks to existential dread, even touching on the tragic death of an OpenAI whistleblower. To his credit, Sam Altman, the man steering the most influential AI system on the planet – OpenAI’s ChatGPT – wasn’t evasive in his responses. He was honest, vulnerable, even contradictory at times. Which made his answers all the more illuminating.

“Do you believe in God?” Tucker Carlson asked directly, without mincing words. “I think probably like most other people, I’m somewhat confused about this,” Sam Altman replied. “But I believe there is something bigger going on than… can be explained by physics.”

It’s the kind of answer you might expect from a quantum physicist or a sci-fi writer – not the CEO of a company that shapes how billions of people interact with knowledge. But that’s precisely what makes Altman’s quiet agnosticism so fascinating. He neither shows theistic certainty nor waves the flag of militant atheism. He simply admits he doesn’t know. And yet, he’s helping build the most powerful simulation engine for human cognition we’ve ever known.

Altman on ChatGPT, AI’s moral compass and religion

In another question, Tucker Carlson described ChatGPT’s output as having “the spark of life,” and suggested many users treat it as a kind of oracle.

“There’s something divine about this,” Carlson said. “There’s something bigger than the sum total of the human inputs… it’s a religion.”

Sam Altman didn’t flinch when he said, “No, there’s nothing to me at all that feels divine about it or spiritual in any way. But I am also, like, a tech nerd. And I kind of look at everything through that lens.”

It’s a revealing response. Because what happens when someone who sees the world as a system of probabilities and matrices starts programming “moral” decisions into the machines we consult more often than our friends, therapists, or priests?

Altman does not deny that ChatGPT reflects a moral structure – it has to, to some degree, purely in order to function. But he’s clear that this isn’t morality in the biblical sense.

“We’re training this to be like the collective of all of humanity,” he explains. “If we do our job right… some things we’ll feel really good about, some things that we’ll feel bad about. That’s all in there.”

This idea – that ChatGPT is the average of our moral selves, a statistical mean of our human knowledge pool – is both radical and terrifying. Because when you average out humanity’s ethical behaviour, do you necessarily get what’s true and just? Or something that’s more bland, crowd-sourced, and neither here nor there?

Altman admits this: “We do have to align it to behave one way or another… there are absolute bounds that we draw.” But who decides those bounds? OpenAI? Nation-states? Market forces? A default setting on a server in an obscure datacenter?

As Carlson rightly pressed, “Unless [the AI model] admits what it stands for… it guides us in a kind of stealthy way toward a conclusion we might not even know we’re reaching.” Altman’s answer was to point to the “model spec” – a living document outlining intended behaviours and moral defaults. “We try to write this all out,” he said. “People do need to know.” It’s a start. But let’s not confuse documentation for philosophy.

Altman on privacy, biometrics, and AI’s war on reality

If AI becomes the mirror in which humanity stares long enough to worship itself, what happens when that mirror is fogged, gamed, or deepfaked?

Altman is clear-eyed about the risks: “These models are getting very good at bio… they could help us design biological weapons.” But his deeper fear is more subtle. “You have enough people talking to the same language model,” he observed, “and it actually does cause a change in societal scale behaviour.”

He gave the example of users adopting the model’s voice – its rhythm, its diction, even its overuse of em dashes. That’s not a glitch. That’s the first sign of culture being rewritten, adapting and changing itself in the face of growing adoption of a new technology.

On the subject of AI deepfakes, Altman was pragmatic: “We are rapidly heading to a world where… you have to really have some way to verify that you’re not being scammed.” He mentioned cryptographic signatures for political messages. Crisis code words for families. It all sounds like spycraft. But in a world where your child’s voice can be faked to drain your bank account, maybe it has to be.

What he resists, though, is mandatory biometric verification to use AI tools. “You should just be able to use ChatGPT from any computer,” he says.

That tension – between security and surveillance, authenticity and anonymity – will only grow sharper. In an AI-mediated world, proving you’re real might cost you your privacy.

What to make of Altman’s views on AI’s morality?

Watching Altman wrestle with the moral alignment and spiritual implications of (ChatGPT and) AI reminded me of Prometheus – not the Greek god, but the Ridley Scott movie. The one where humanity finally meets its maker only to find the maker just as confused as they were.

Sam Altman isn’t without flaws, no doubt. Yet while grappling with Tucker Carlson’s questions on AI’s morality, religiosity and ethics, he came across as largely thoughtful, conflicted and arguably burdened. That doesn’t mean his creation isn’t dangerous.

The question is no longer whether AI will become godlike. The question is whether we’ve already started treating it like a god. And if so, what kind of faith we’re building around it. I don’t know if AI has a soul. But I know it has a style. And as of now, it’s ours. Let’s not give it more than that, shall we?

Jayesh Shinde

Executive Editor at Digit. Technology journalist since Jan 2008, with stints at Indiatimes.com and PCWorld.in. Enthusiastic dad, reluctant traveler, weekend gamer, LOTR nerd, pseudo bon vivant.





Keough School of Global Affairs hires AI specialist Mohammad Rifat

As higher education continues to reckon with the emergence of AI, the Keough School of Global Affairs recently hired Mohammad Rifat as an assistant professor of tech ethics and global affairs. He is a specialist in AI ethics, human-computer interaction and critical social science.

Rifat earned his doctorate in computer science from the University of Toronto this year. While he works primarily in the Keough School, he also holds a position within the Department of Computer Science and Engineering and works closely with the Notre Dame Institute for Ethics and the Common Good.

“In Keough, we prioritize values like human development and human dignity, but I also get technical expertise from computer science,” Rifat said, reflecting on his experience within both fields.

Rifat’s research is primarily concerned with how society can make AI more inclusive of marginalized communities. AI and computer systems are designed around the information accessible to them, he explained — information that often comes from the most modernized cultures. Faith-based and traditional communities in the Global South might not be represented in the data sets AI systems are trained on, and therefore people in these communities may not be represented in the content these systems produce.

“AI is modernistic, and it uses the methods and methodologies and techniques and approaches in a way that reflect the institutions and techniques in modern culture, but not traditional cultures, not indigenous or faith-based cultures. The faith-based communities are not strong consumers of modernism,” Rifat said.

Rifat says these communities are being inadvertently marginalized and that it’s his goal to bring their history and life stories out of the shadows.

Rifat also spoke about the responsibility scientists bear in a world that has become increasingly dominated by AI.

“As a scientist myself, our responsibility is to steer AI towards the direction where the business interests don’t dictate how AI should serve the community. The deeper questions are human dignity, welfare and safety,” he said.





How linguistic frames influence AI policy, education and ethics

Artificial intelligence is not only defined by its algorithms and applications but also by the language that frames it. A new study by Tiffany Petricini of Penn State Erie – The Behrend College reveals that the way AI is described profoundly influences how societies, policymakers, and educators perceive its role in culture and governance.

The research, titled “The Power of Language: Framing AI as an Assistant, Collaborator, or Transformative Force in Cultural Discourse” and published in AI & Society, examines the rhetorical and semantic frameworks that position AI as a tool, a partner, or a transformative force. The study argues that these framings are not neutral but carry cultural, political, and ethical weight, shaping both public imagination and institutional responses.

How linguistic frames influence perceptions of AI

The study identifies three dominant frames: cognitive offloading, augmented intelligence, and co-intelligence. Each of these linguistic choices embeds assumptions about what AI is and how it should interact with humans.

Cognitive offloading presents AI as a means of reducing human mental workload. This view highlights efficiency gains and productivity but raises concerns about dependency and reduced autonomy. By framing AI as a tool to handle cognitive burdens, societies risk normalizing reliance on systems that are not infallible, potentially weakening human critical judgment over time.

Augmented intelligence emphasizes AI as an extension of human ability. This optimistic narrative encourages a vision of collaboration where AI supports human decision-making. Yet the study cautions that this framing, while reassuring, can obscure structural issues such as labor displacement and the concentration of decision-making power in AI-driven systems.

Co-intelligence positions AI as a collaborator, creating a shared space where humans and machines produce meaning together. This framing offers a synergistic and even utopian vision of human–machine partnerships. However, the study highlights that such narratives blur distinctions between tools and agents, reinforcing anthropomorphic views that can distort both expectations and policy.

These framings are not just descriptive; they act as cultural signposts that influence how societies choose to regulate, adopt, and educate around AI.

What theoretical frameworks reveal about AI and language

To unpack these framings, the study draws on two major traditions: general semantics and media ecology. General semantics, rooted in Alfred Korzybski’s assertion that “the map is not the territory,” warns that words about AI often misrepresent the underlying technical reality. Descriptions that attribute thinking, creativity, or learning to machines are, in this view, category errors that mislead people into treating systems as human-like actors.

Media ecology, shaped by thinkers such as Marshall McLuhan, Neil Postman, and Walter Ong, emphasizes that communication technologies form environments that shape thought and culture. AI, when described as intelligent or collaborative, is not only a tool but part of a media ecosystem that reshapes how people view agency, trust, and authority. Petricini argues that these linguistic frames form “semantic environments” that shape imagination, policy, and cultural norms.

By placing AI discourse within these frameworks, the study reveals how misalignments between language and technical reality create distortions. For instance, when AI is linguistically elevated to the status of an autonomous agent, regulators may overemphasize machine responsibility and underemphasize human accountability.

What is at stake for policy, education and culture

The implications of these framings extend beyond semantics. The study finds that policy debates, education systems, and cultural narratives are all shaped by the language used to describe AI.

In policy, terms such as “trustworthy AI” or “high-risk AI” influence legal frameworks like the European Union’s AI Act. By anthropomorphizing or exaggerating AI’s autonomy, these discourses risk regulating machines as if they were independent actors, rather than systems built and controlled by people. Such linguistic distortions can divert attention away from human accountability and ethical responsibility in AI development.

In education, anthropomorphic metaphors such as AI “learning” or “thinking” create misconceptions for students and teachers. These terms can either inspire misplaced fear or encourage over-trust in AI systems. By reshaping how knowledge and learning are understood, the study warns, such framings may erode human-centered approaches to teaching and critical inquiry.

Culturally, the dominance of Western terminology risks sidelining diverse perspectives. Petricini points to the danger of “semantic imperialism,” where Western narratives impose a one-size-fits-all framing of AI that marginalizes non-Western traditions. For instance, Japan’s concept of Society 5.0 presents an alternative model in which AI is integrated into society with a participatory and pluralistic orientation. Recognizing such diversity, the study argues, is essential for creating more balanced global conversations about AI.


