
ISTE+ASCD, day 1: Ethical AI, sunshine committees and chatting with Paul Revere



Helping parents make friends with AI. Differentiated learning. Workforce culture. Ethics and AI.

The Henry B. Gonzalez Convention Center in San Antonio buzzed with activity Monday as educators engaged in sessions, exchanged ideas and checked out the latest technology wares at this year’s ISTELive and ASCD Annual Show & Conference.

Here are our takeaways from day one at the show.

Differentiated learning: Design toward the edges

At a Turbo Talk, Eric Carbaugh, a professor at James Madison University, outlined some of the tensions that arise where generative AI and differentiated instruction meet. One of these is ensuring that AI is not used without a metacognitive component that helps students understand the why of what they are learning. “We don’t want to short-circuit that pathway to expertise,” Carbaugh said, noting that the same goes for teachers.

Carbaugh encouraged educators to think about creating classrooms that support differentiation by designing toward the edges. “Rather than aiming down the middle, thinking about how you’re designing outward to try to meet more kids where they are as often as possible. That’s really the goal if we’re aiming for maximum growth,” he said.

Potential uses of AI in differentiation include providing scaffolded experiences based on students’ readiness to learn. AI can also be used effectively to adjust text complexity to meet students’ needs. “To me, this is one of the really big game changers for AI use,” he said.

Other ideas included using AI to develop choice-based activities or to provide feedback to students, Carbaugh said, cautioning that educators should ensure this use does not short-circuit what teachers know about their students’ needs. AI tools can also serve as brainstorming partners or help teachers proactively develop strategies to stretch students past known sticking points, he said.

“Ultimately, we’re trying to live in that middle ground, where DI meets AI, where we understand why we need to do this, we understand what it looks like and recognize that AI is a tool. It doesn’t in itself differentiate – the teachers do that,” he said. 

Bridging gaps: Culturally relevant content

Preserving the Indigenous languages of the Marianas — Chamorro and Carolinian — is a priority for the Commonwealth of the Northern Mariana Islands Public School System, said Riya Nathrani, instructional technology coach for CNMI PSS, during a panel discussion about practical AI implementation strategies moderated by ISTE+ASCD Senior Director of Innovative Learning Jessica Garner. 

“We want to ensure that our students know the languages, so that they are able to carry [them] on for future generations,” said Nathrani.

The challenge, though, was a lack of resources and materials to teach the languages effectively. The team turned to AI for help. It became “an idea bank, where they could get activities and lesson ideas and write stories, and then translate [them] into the languages,” said Nathrani. The project helped build a foundation from which teachers could create materials without having to start from scratch.

CNMI PSS teachers are also using AI to generate images of Pacific Islanders and create culturally relevant materials.

“[I]t’s hard to be what you cannot see,” said Nathrani. “[I]f you don’t really see yourselves reflected in that curriculum or in that role or in that leadership position, [you] won’t really aspire to do those things or to be in those roles.”

Nathrani gave the example of a science teacher who was doing a lesson on ocean biodiversity and wanted to highlight the oceans and coral reefs surrounding their islands. Unfortunately, the textbook did not include this information. The teacher used AI to create content and stories related to the Pacific Islands.

“[T]hat was really meaningful to our students,” said Nathrani. “[N]ow they could really see how it was so relevant to their lives and their surroundings.”

Bring on the sunshine!

Elyse Hahne, a K-5 life skills teacher in Texas’ Grapevine-Colleyville school district, suggested school leaders take steps to improve their workplace culture by creating a sunshine committee to help support and show gratitude for teachers and staff.

These committees can use surveys to gather ideas about staff interests and the ways in which they’d like to be supported. Ideas for events and activities can be found and shared in social media groups or through word of mouth, Hahne said. 

Whether through words of appreciation, gifts or acts of service, school leaders should be intentional about their approach and ensure they honor people’s preferences and cultures, Hahne said. They can also reach out to community partners to help make events and activities more affordable.

The value of showing kindness and improving the culture extends to students as well, Hahne said. “As leaders we get to model this, whether in the classroom or out of the classroom. The kids are watching and they want to see us being nice to each other, and they’ll reciprocate.” 

Schooling parents on AI

How do you help parents adjust to the presence of AI in their children’s learning?

“[Parents] just need to be aware that these are the tools that are expected to be used in the class,” said Alicia Discepola Mackall, supervisor of instructional technology at Ewing Public Schools, during the panel discussion with Garner. 

Mackall referenced different ways schools are helping parents get comfortable with AI, including hosting AI academies or classroom demonstrations. These tactics can go a long way in building knowledge and nurturing support.

“[T]o be honest, a lot of people don’t know what [AI] is. They don’t understand, right?” said Mackall. “So having teachers and students show parents what they’re doing with AI might shift perspective further.”

Sharing AI-use guidance resources with parents can help quell safety concerns, said Mackall. She also encouraged educators to show parents how they can use AI in their own daily lives. “Starting with meal planning, so that people can start to see the power of it and not be quite as afraid of it,” said Mackall. “[Make] it accessible to them.”

Demonstrate how AI tools can spark students’ curiosity and help them think and question along new lines, Mackall advised. She gave the example of a conversation she had with her daughter, a third-grader, who was using SchoolAI as part of a history lesson. Mackall’s daughter and her classmates were engaging in conversations with historical figures.

“She came home and [said], ‘Mom, did you know that there was a girl who actually did a longer ride than Paul Revere?’ I was like ‘Who told you that?’ and she said, ‘Paul Revere,’” Mackall recounted.

Using AI to deliver creative learning experiences like this helps learning stick. Parents want to support that, said Mackall. 

“As a parent, that’s exactly what I want my kid to be doing,” said Mackall. “I want them to be questioning. Even if a parent’s not parenting how we think they should be, they still want what’s best for their kids, right? So I think it’s our job to invite them in virtually and show them what we might be able to do with tools like this and thinking like this.”

Exploring ethics and AI 

The emergence of AI has damaged the social contract between teachers and students, said university lecturer, author and consultant Laurel Aguilar-Kirchhoff during an innovator talk with teacher librarian and program director Katie McNamara. Aguilar-Kirchhoff shared a personal story of being accused of AI plagiarism by a professor in a graduate course. After she explained and provided evidence showing she had used an AI tool not to plagiarize but to merge two documents, the instructor admitted to not keeping up with the technology, but did not restore the points deducted from her grade. “Our contract here has been broken,” she said.

Concerns around ethics at the classroom level also include privacy. “Every time a student uses AI for practice, for writing something, or whatever they’re doing, data is being collected about that learner,” Aguilar-Kirchhoff said. While schools and districts will mostly handle the vetting process, it’s important for educators to consider these implications as well and to find out how data is stored and used when deciding to adopt a tool, McNamara advised.

To address concerns around bias in AI, educators can help students understand the algorithms being used and have them ask critical questions about what AI produces. “We know that not only is it representing the biases in our society, in our world, but also it can perpetuate that because the AI outputs do impact societal problems,” Aguilar-Kirchhoff said.

But addressing these and other concerns about AI use does not mean avoiding it. “We have to prepare students for the future: critical thinking, digital literacies, digital citizenship and media literacy,” Aguilar-Kirchhoff said.

To access the benefits of AI in an ethical way, educators should consider their own practice and as lifelong learners ensure that they are building capacity and knowledge around AI, she said. And as with all edtech, they should be thinking about the specific tools they are using and why. “Because when we have that intentionality, you know it’s not just the next new thing,” she said. 




Sam Altman on AI morality, ethics and finding God in ChatGPT




Look hard enough at an AI chatbot’s output and it starts to look like scripture. At least, that’s the unsettling undercurrent of Sam Altman’s recent interview with Tucker Carlson – a 57-minute exchange that covered everything from deepfakes to divine design, from moral AI frameworks to existential dread, even touching on the tragic death of an OpenAI whistleblower. To his credit, Sam Altman, the man steering the most influential AI system on the planet – OpenAI’s ChatGPT – wasn’t evasive in his responses. He was honest, vulnerable, even contradictory at times. Which made his answers all the more illuminating.


“Do you believe in God?” Tucker Carlson asked directly, without mincing words. “I think probably like most other people, I’m somewhat confused about this,” Sam Altman replied. “But I believe there is something bigger going on than… can be explained by physics.”

It’s the kind of answer you might expect from a quantum physicist or a sci-fi writer – not the CEO of a company that shapes how billions of people interact with knowledge. But that’s precisely what makes Altman’s quiet agnosticism so fascinating. He neither shows theistic certainty nor waves the flag of militant atheism. He simply admits he doesn’t know. And yet, he’s helping build the most powerful simulation engine for human cognition we’ve ever known.

Altman on ChatGPT, AI’s moral compass and religion

In another question, Tucker Carlson described ChatGPT’s output as having “the spark of life,” and suggested many users treat it as a kind of oracle.

“There’s something divine about this,” Carlson said. “There’s something bigger than the sum total of the human inputs… it’s a religion.”

Sam Altman didn’t flinch when he said, “No, there’s nothing to me at all that feels divine about it or spiritual in any way. But I am also, like, a tech nerd. And I kind of look at everything through that lens.”

It’s a revealing response. Because what happens when someone who sees the world as a system of probabilities and matrices starts programming “moral” decisions into the machines we consult more often than our friends, therapists, or priests?


Altman does not deny that ChatGPT reflects a moral structure – it has to, to some degree, purely in order to function. But he’s clear that this isn’t morality in the biblical sense.

“We’re training this to be like the collective of all of humanity,” he explains. “If we do our job right… some things we’ll feel really good about, some things that we’ll feel bad about. That’s all in there.”

This idea – that ChatGPT is the average of our moral selves, a statistical mean of our human knowledge pool – is both radical and terrifying. Because when you average out humanity’s ethical behaviour, do you necessarily get what’s true and just? Or something that’s more bland, crowd-sourced, and neither here nor there?

Altman admits this: “We do have to align it to behave one way or another… there are absolute bounds that we draw.” But who decides those bounds? OpenAI? Nation-states? Market forces? A default setting on a server in an obscure datacenter?

As Carlson rightly pressed, “Unless [the AI model] admits what it stands for… it guides us in a kind of stealthy way toward a conclusion we might not even know we’re reaching.” Altman’s answer was to point to the “model spec” – a living document outlining intended behaviours and moral defaults. “We try to write this all out,” he said. “People do need to know.” It’s a start. But let’s not confuse documentation for philosophy.

Altman on privacy, biometrics, and AI’s war on reality

If AI becomes the mirror in which humanity stares long enough to worship itself, what happens when that mirror is fogged, gamed, or deepfaked?

Altman is clear-eyed about the risks: “These models are getting very good at bio… they could help us design biological weapons.” But his deeper fear is more subtle. “You have enough people talking to the same language model,” he observed, “and it actually does cause a change in societal scale behaviour.”

He gave the example of users adopting the model’s voice – its rhythm, its diction, even its overuse of em dashes. That’s not a glitch. That’s the first sign of culture being rewritten, adapting and changing itself in the face of growing adoption of a new technology.


On the subject of AI deepfakes, Altman was pragmatic: “We are rapidly heading to a world where… you have to really have some way to verify that you’re not being scammed.” He mentioned cryptographic signatures for political messages. Crisis code words for families. It all sounds like spycraft in the face of growing AI tension. Because in a world where your child’s voice can be faked to drain your bank account, maybe it has to be.

What he resists, though, is mandatory biometric verification to use AI tools. “You should just be able to use ChatGPT from any computer,” he says.

That tension – between security and surveillance, authenticity and anonymity – will only grow sharper. In an AI-mediated world, proving you’re real might cost you your privacy.

What to make of Altman’s views on AI’s morality?

Watching Altman wrestle with the moral alignment and spiritual implications of (ChatGPT and) AI reminded me of Prometheus – not the Greek god, but the Ridley Scott movie. The one where humanity finally meets its maker only to find the maker just as confused as they were.

Sam Altman isn’t without flaws, no doubt. While grappling with Tucker Carlson’s questions on AI’s morality, religiosity and ethics, Altman came across as largely thoughtful, conflicted, and arguably burdened. But that doesn’t mean his creation isn’t dangerous.

The question is no longer whether AI will become godlike. The question is whether we’ve already started treating it like a god. And if so, what kind of faith we’re building around it. I don’t know if AI has a soul. But I know it has a style. And as of now, it’s ours. Let’s not give it more than that, shall we?


Jayesh Shinde

Executive Editor at Digit. Technology journalist since Jan 2008, with stints at Indiatimes.com and PCWorld.in. Enthusiastic dad, reluctant traveler, weekend gamer, LOTR nerd, pseudo bon vivant.






Keough School of Global Affairs hires AI specialist Mohammad Rifat



As higher education continues to reckon with the emergence of AI, the Keough School of Global Affairs recently hired Mohammad Rifat as an assistant professor of tech ethics and global affairs. He is a specialist in AI ethics, human-computer interaction and critical social science.

Rifat earned his doctorate in computer science from the University of Toronto this year. While he primarily works in the Keough School, he also holds positions within the Department of Computer Science and Engineering and works significantly with the Notre Dame Institute for Ethics and Common Good.

“In Keough, we prioritize values like human development and human dignity, but I also get technical expertise from computer science,” Rifat said, reflecting on his experience within both fields.

Rifat’s research is primarily concerned with the question of how society can make AI more inclusive of marginalized communities. Rifat explained how AI and computer systems are designed based on the information accessible to them — which he says is often from the most modernized cultures. Faith-based and traditional communities in the Global South might not be represented in data sets AI systems are trained on, he explained, and therefore people in these communities may not be represented in the content these AI systems produce.

“AI is modernistic, and it uses the methods and methodologies and techniques and approaches in a way that reflects the institutions and techniques in modern culture, but not traditional cultures, not indigenous or faith-based cultures. The faith-based communities are not strong consumers of modernism,” Rifat said.

Rifat says these communities are being inadvertently marginalized and that it’s his goal to bring their history and life stories out of the shadows.

Professor Rifat also spoke on the responsibility scientists should have in a world that has become increasingly AI-dominated. 

“As a scientist myself, our responsibility is to steer AI towards the direction where the business interests don’t dictate how AI should serve the community. The deeper questions are human dignity, welfare and safety,” he said.






How linguistic frames influence AI policy, education and ethics



Artificial intelligence is not only defined by its algorithms and applications but also by the language that frames it. A new study by Tifany Petricini of Penn State Erie – The Behrend College reveals that the way AI is described profoundly influences how societies, policymakers, and educators perceive its role in culture and governance.

The research, titled “The Power of Language: Framing AI as an Assistant, Collaborator, or Transformative Force in Cultural Discourse” and published in AI & Society, examines the rhetorical and semantic frameworks that position AI as a tool, a partner, or a transformative force. The study argues that these framings are not neutral but carry cultural, political, and ethical weight, shaping both public imagination and institutional responses.

How linguistic frames influence perceptions of AI

The study identifies three dominant frames: cognitive offloading, augmented intelligence, and co-intelligence. Each of these linguistic choices embeds assumptions about what AI is and how it should interact with humans.

Cognitive offloading presents AI as a means of reducing human mental workload. This view highlights efficiency gains and productivity but raises concerns about dependency and reduced autonomy. By framing AI as a tool to handle cognitive burdens, societies risk normalizing reliance on systems that are not infallible, potentially weakening human critical judgment over time.

Augmented intelligence emphasizes AI as an extension of human ability. This optimistic narrative encourages a vision of collaboration where AI supports human decision-making. Yet the study cautions that this framing, while reassuring, can obscure structural issues such as labor displacement and the concentration of decision-making power in AI-driven systems.

Co-intelligence positions AI as a collaborator, creating a shared space where humans and machines produce meaning together. This framing offers a synergistic and even utopian vision of human–machine partnerships. However, the study highlights that such narratives blur distinctions between tools and agents, reinforcing anthropomorphic views that can distort both expectations and policy.

These framings are not just descriptive; they act as cultural signposts that influence how societies choose to regulate, adopt, and educate around AI.

What theoretical frameworks reveal about AI and language

To unpack these framings, the study draws on two major traditions: general semantics and media ecology. General semantics, rooted in Alfred Korzybski’s assertion that “the map is not the territory,” warns that words about AI often misrepresent the underlying technical reality. Descriptions that attribute thinking, creativity, or learning to machines are, in this view, category errors that mislead people into treating systems as human-like actors.

Media ecology, shaped by thinkers such as Marshall McLuhan, Neil Postman, and Walter Ong, emphasizes that communication technologies form environments that shape thought and culture. AI, when described as intelligent or collaborative, is not only a tool but part of a media ecosystem that reshapes how people view agency, trust, and authority. Petricini argues that these linguistic frames form “semantic environments” that shape imagination, policy, and cultural norms.

By placing AI discourse within these frameworks, the study reveals how misalignments between language and technical reality create distortions. For instance, when AI is linguistically elevated to the status of an autonomous agent, regulators may overemphasize machine responsibility and underemphasize human accountability.

What is at stake for policy, education and culture

The implications of these framings extend beyond semantics. The study finds that policy debates, education systems, and cultural narratives are all shaped by the language used to describe AI.

In policy, terms such as “trustworthy AI” or “high-risk AI” influence legal frameworks like the European Union’s AI Act. By anthropomorphizing or exaggerating AI’s autonomy, these discourses risk regulating machines as if they were independent actors, rather than systems built and controlled by people. Such linguistic distortions can divert attention away from human accountability and ethical responsibility in AI development.

In education, anthropomorphic metaphors such as AI “learning” or “thinking” create misconceptions for students and teachers. These terms can either inspire misplaced fear or encourage over-trust in AI systems. By reshaping how knowledge and learning are understood, the study warns, such framings may erode human-centered approaches to teaching and critical inquiry.

Culturally, the dominance of Western terminology risks sidelining diverse perspectives. Petricini points to the danger of “semantic imperialism,” where Western narratives impose a one-size-fits-all framing of AI that marginalizes non-Western traditions. For instance, Japan’s concept of Society 5.0 presents an alternative model in which AI is integrated into society with a participatory and pluralistic orientation. Recognizing such diversity, the study argues, is essential for creating more balanced global conversations about AI.


