
ISTE+ASCD, day 1: Ethical AI, sunshine committees and chatting with Paul Revere



Helping parents make friends with AI. Differentiated learning. Workforce culture. Ethics and AI.

The Henry B. Gonzalez Convention Center in San Antonio buzzed with activity Monday as educators engaged in sessions, exchanged ideas and checked out the latest technology wares at this year’s ISTELive and ASCD Annual Show & Conference.

Here are our takeaways from day one at the show.

Differentiated learning: Design toward the edges

At a Turbo Talk, Eric Carbaugh, a professor at James Madison University, outlined some of the tensions that arise where generative AI and differentiated instruction meet. One is ensuring that AI is not used without a metacognitive component that helps students understand the why of what they are learning. “We don’t want to short-circuit that pathway to expertise,” Carbaugh said, noting that the same goes for teachers.

Carbaugh encouraged educators to think about creating classrooms that support differentiation by designing toward the edges. “Rather than aiming down the middle, thinking about how you’re designing outward to try to meet more kids where they are as often as possible. That’s really the goal if we’re aiming for maximum growth,” he said.

Potential ways to use AI in differentiation include providing scaffolding matched to students’ readiness to learn. AI can also be used effectively to adjust text complexity to meet students’ needs. “To me, this is one of the really big game changers for AI use,” he said.

Other ideas included using AI to develop choice-based activities or to provide feedback to students, Carbaugh said, cautioning that educators should ensure this use does not short-circuit what teachers know about their students’ needs. AI tools can also serve as brainstorming partners or help teachers proactively develop strategies to help students stretch past known sticking points, he said.

“Ultimately, we’re trying to live in that middle ground, where DI meets AI, where we understand why we need to do this, we understand what it looks like and recognize that AI is a tool. It doesn’t in itself differentiate – the teachers do that,” he said. 

Bridging gaps: Culturally relevant content

Preserving the Indigenous languages of the Marianas — Chamorro and Carolinian — is a priority for the Commonwealth of the Northern Mariana Islands Public School System, said Riya Nathrani, instructional technology coach for CNMI PSS, during a panel discussion about practical AI implementation strategies moderated by ISTE+ASCD Senior Director of Innovative Learning Jessica Garner. 

“We want to ensure that our students know the languages, so that they are able to carry it on for future generations,” said Nathrani.

The challenge, though, was a lack of resources and materials to teach the languages effectively. The team turned to AI for help. It became “an idea bank, where they could get activities and lesson ideas and write stories, and then translate [them] into the languages,” said Nathrani. The project helped build a foundation from which teachers could create materials without having to start from scratch.

CNMI PSS teachers are also using AI to generate images of Pacific Islanders and create culturally relevant materials.

“[I]t’s hard to be what you cannot see,” said Nathrani. “[I]f you don’t really see yourselves reflected in that curriculum or in that role or in that leadership position, [you] won’t really aspire to do those things or to be in those roles.”

Nathrani gave the example of a science teacher who was doing a lesson on ocean biodiversity and wanted to highlight the oceans and coral reefs surrounding their islands. Unfortunately, the textbook did not include this information. The teacher used AI to create content and stories related to the Pacific Islands.

“[T]hat was really meaningful to our students,” said Nathrani. “[N]ow they could really see how it was so relevant to their lives and their surroundings.”

Bring on the sunshine!

Elyse Hahne, a K-5 life skills teacher in Texas’ Grapevine-Colleyville school district, suggested school leaders take steps to improve their workplace culture by forming a sunshine committee to help support and show gratitude for teachers and staff.

These committees can use surveys to gather ideas about staff interests and the ways in which they’d like to be supported. Ideas for events and activities can be found and shared in social media groups or through word of mouth, Hahne said. 

Whether through words of appreciation, gifts or acts of service, school leaders should be intentional about their approach and ensure they honor people’s preferences and cultures, Hahne said. They can also reach out to community partners to help make events and activities more affordable.

The value of showing kindness and improving the culture extends to students as well, Hahne said. “As leaders we get to model this, whether in the classroom or out of the classroom. The kids are watching and they want to see us being nice to each other, and they’ll reciprocate.” 

Schooling parents on AI

How do you help parents adjust to the presence of AI in their children’s learning?

“[Parents] just need to be aware that these are the tools that are expected to be used in the class,” said Alicia Discepola Mackall, supervisor of instructional technology at Ewing Public Schools, during the panel discussion with Garner. 

Mackall referenced different ways schools are helping parents get comfortable with AI, including hosting AI academies or classroom demonstrations. These tactics can go a long way in building knowledge and nurturing support.

“[T]o be honest, a lot of people don’t know what [AI] is. They don’t understand, right?” said Mackall. “So having teachers and students show parents what they’re doing with AI might shift perspective further.”

Sharing AI-use guidance resources with parents can help quell safety concerns, said Mackall. She also encouraged educators to show parents how they can use AI in their own daily lives. “Starting with meal planning, so that people can start to see the power of it and not be quite as afraid of it,” said Mackall. “[Make] it accessible to them.”

Demonstrate how AI tools can spark students’ curiosity and help them think and question along new lines, Mackall advised. She gave the example of a conversation she had with her daughter, a third-grader, who was using SchoolAI as part of a history lesson. Mackall’s daughter and her classmates were engaging in conversations with historical figures.

“She came home and [said], ‘Mom, did you know that there was a girl who actually did a longer ride than Paul Revere?’ I was like ‘Who told you that?’ and she said, ‘Paul Revere,’” Mackall recounted.

Using AI to deliver creative learning experiences like this helps learning stick. Parents want to support that, said Mackall. 

“As a parent, that’s exactly what I want my kid to be doing,” said Mackall. “I want them to be questioning. Even if a parent’s not parenting how we think they should be, they still want what’s best for their kids, right? So I think it’s our job to invite them in virtually and show them what we might be able to do with tools like this and thinking like this.”

Exploring ethics and AI 

As AI has emerged, the social contract between teachers and students has been damaged, said university lecturer, author and consultant Laurel Aguilar-Kirchhoff at an innovator talk with teacher librarian and program director Katie McNamara. Aguilar-Kirchhoff shared a personal story of being accused of AI plagiarism by a professor in a graduate course. After she explained and provided evidence showing that she had used an AI tool not to plagiarize but to merge two documents, the instructor admitted she wasn’t keeping up with the technology, but did not restore the points she had deducted from the grade. “Our contract here has been broken,” she said.

Concerns around ethics at the classroom level also include privacy. “Every time a student uses AI for practice, for writing something, or whatever they’re doing, data is being collected about that learner,” Aguilar-Kirchhoff said. While schools and districts will mostly handle the vetting process, it’s important for educators to consider these implications as well and to find out how data is stored and used when deciding whether to adopt a tool, McNamara advised.

To address concerns about bias in AI, educators can help students understand the algorithms being used and have them ask critical questions about what AI produces. “We know that not only is it representing the biases in our society, in our world, but also it can perpetuate that because the AI outputs do impact societal problems,” Aguilar-Kirchhoff said.

But addressing these and other concerns about AI use does not mean avoiding it. “We have to prepare students for the future: critical thinking, digital literacies, digital citizenship and media literacy,” Aguilar-Kirchhoff said.

To access the benefits of AI in an ethical way, educators should consider their own practice and, as lifelong learners, ensure that they are building capacity and knowledge around AI, she said. And as with all edtech, they should think about the specific tools they are using and why. “Because when we have that intentionality, you know it’s not just the next new thing,” she said.




Advisory Committee to start on AI Plan


The Provost’s Advisory Committee on Artificial Intelligence will begin its work on an Academic Affairs Artificial Intelligence Plan this year, aiming to prepare and inform the DePaul community about AI by developing ethical guidelines and recommendations for all colleges and departments. 

The project was announced at academic convocation on Wednesday, Sept. 3, by Provost Salma Ghanem. John Shanahan, English professor and Associate Provost for Student Success and Accreditation, is leading this initiative.

“This is a chance for this group to help bring all of the stuff going on at DePaul, all the different versions of what artificial intelligence means and try to focus it so that strategically we can think ahead and get students prepared well,” Shanahan said. 

As part of the Designing DePaul Strategic Plan, a university community initiative launched in the 2023-2024 academic year, the Advisory Committee will include representatives of various DePaul departments. This includes Information Systems, Research Services, Center for Teaching and Learning, University Registrar, Library and the AI Institute. Additionally, members of the committee will be elected from the Staff Council, Faculty Council and SGA.

The first meeting will be in October, and Shanahan hopes to start assembling data for policy recommendations by winter. In spring, Shanahan expects a completed set of recommendations, updates to current policy and possibly new curriculum. 

“The idea of getting people together from faculty, staff and students is that when they talk together, we can figure out collectively, what are the best transparent processes?” Shanahan said. 

Shanahan says he anticipates monthly committee meetings to gather best practices and feedback and to learn from other universities’ approaches to AI. The group will produce reports on its findings.

James Moore, instructor and director of online learning for the Driehaus College of Business, works closely with the DePaul AI Institute.

“The only way that you can change the culture is if you allow everyone to have a voice,” Moore said. “We’re focused on students.” 

Because AI technology is a part of the future, Moore says it is the university’s job to make sure students have the tools to discuss it and use it ethically. 

Bamshad Mobasher, director of DePaul’s AI Institute, says that having students on the committee will make its recommendations more useful and impactful.

“Students are using these tools,” Mobasher said. “So it is important for this committee to understand who is using it and in what ways, and what would be the impact of any recommendations.”

Shanahan says the AI advisory committee plans to hold open listening sessions with a moderator to discuss AI and how this technology is being used at DePaul in order to better inform their recommendations. Additionally, the committee will hold “ask me anything” sessions where an expert or panel of experts will answer questions. 

“I hope DePaul spends a lot of reflective time on this really transformative technology,” Shanahan said. “We want to get the right AI approach for our students and that is what this committee is for.” 

Moore explains that because AI touches all areas of study, it is critical for all students to have some level of AI literacy — something that the Academic Affairs Artificial Intelligence Plan will hopefully provide the tools for.

“So students coming out, no matter what they’ve studied in college, they’ve got a practical experience with AI because that’s what our stakeholders, our employers are looking for and we need to provide that,” Moore said. 

Many DePaul departments already have AI training and tools, including DePaul’s Approach to Artificial Intelligence, teaching guides and resources from the Center for Teaching and Learning, among others.

“It’s not just that the university’s woken up this week and said, ‘We’re doing this,’” Moore said. “There’s been lots of sorts of things in the background that were perhaps less promoted.”

Already this year, the Driehaus College of Business announced its 2025-26 initiative, “AI@Driehaus,” with plans to build AI literacy into the curriculum in hopes of preparing students for their careers.

“We are embedding AI into the core of business education, challenging traditional models, and empowering our community to innovate,” Sulin Ba, dean of the Driehaus College of Business, said in an email announcing the plan.

The Academic Affairs Artificial Intelligence Plan aims to provide more robust, university-wide recommendations as generative AI continues to stir excitement and concern.

“The problem with generative (AI) is accuracy,” Moore said. “There’s hallucinations, there’s bias, there’s ethical issues.” 

Mobasher says there is a need for clear, universal guidance on ethical use in order to navigate forward.

“I hope that it provides some useful guidelines and resources for faculty, staff and students to navigate this really complex maze of all of these tools and different applications,” Mobasher said. 

 


Sam Altman Warns of AI Risks, Ethics, and Bubble in Carlson Interview



Altman’s Candid Reflections on AI Ethics

In a revealing interview with Tucker Carlson, OpenAI CEO Sam Altman opened up about the sleepless nights plaguing him amid the rapid evolution of artificial intelligence. Altman confessed that the moral quandaries surrounding his company’s chatbot, ChatGPT, keep him awake, particularly decisions on handling sensitive user interactions like suicide prevention. He emphasized the weight of these choices, noting that OpenAI strives to set ethical boundaries while respecting user privacy.

Altman delved into broader societal impacts, warning of potential “AI privilege” where access to advanced tools could exacerbate inequalities. He called for global input to shape AI’s future, highlighting the need for inclusive regulation to mitigate risks like fraud and even engineered pandemics, as reported in a recent WebProNews article on his predictions for workforce transformation by 2025.

Confronting Conspiracy Theories and Personal Attacks

The conversation took a dramatic turn when Carlson pressed Altman on a conspiracy theory tied to the 2024 death of former OpenAI researcher Suchir Balaji, found with a gunshot wound in his San Francisco apartment. Altman firmly denied any involvement, expressing frustration over baseless accusations that have swirled online. This exchange, detailed in a Moneycontrol report, underscores the intense scrutiny Altman faces as AI’s public figurehead.

Posts on X have amplified these tensions, with users alleging Altman has a history of misleading statements, including claims from former board members about his ousting in 2023 due to dishonesty over safety testing for models like GPT-4. Such sentiments echo broader criticisms, as seen in Wikipedia’s account of his temporary removal from OpenAI, which stemmed from concerns over AI safety and alleged abusive behavior.

Navigating Past Scandals and Industry Rivalries

Altman’s tenure has been marred by high-profile controversies, including a lawsuit from his sister Ann alleging sexual abuse from 1997 to 2006, as covered by the BBC. He has denied these claims, but they add to the narrative of personal and professional turmoil. In the interview, Altman addressed his dramatic 2023 ousting and reinstatement, attributing it to boardroom clashes over leadership and safety priorities.

He also touched on rivalries, particularly with Elon Musk, whom he accused of initially dismissing OpenAI’s prospects before launching a competing venture and filing lawsuits. This feud, highlighted in X posts and a Guardian profile, paints Altman as a resilient but polarizing leader who has outmaneuvered opponents like Musk and dissenting board members.

Vision for AI’s Future Amid Economic Warnings

Looking ahead, Altman expressed optimism about AI’s potential to create “transcendentally good” systems through new computing paradigms, as noted in a Yahoo Finance piece. Yet, he cautioned about an emerging AI bubble, likening it to the dot-com era in a CNBC report from August 2025, amid surging industry investments.

Altman advocated for open-source models to democratize AI, mentioning plans for powerful releases, per discussions at TED events. However, critics on X question his motives, pointing to OpenAI’s shift from nonprofit to for-profit status and price hikes for ChatGPT, which they argue prioritize profits over accessibility.

Balancing Innovation with Societal Safeguards

In addressing workforce changes, Altman predicted significant transformations by 2025, urging preparation for AI-driven disruptions while emphasizing ethical safeguards. He reflected on cultural shifts, preferring phone calls over endless meetings, a stance that sparked debate in the Times of India, and suggested a return to efficient communication in an AI-augmented world.

Ultimately, Altman’s interview reveals a leader grappling with immense power and responsibility. As OpenAI pushes boundaries, from contextual AI awareness to global ethical frameworks, the controversies surrounding him highlight the high stakes of steering humanity’s technological frontier. With regulatory eyes watching and public sentiment divided, as evident in real-time X discussions, Altman’s path forward demands transparency to rebuild trust in an era where AI’s promise and perils are inextricably linked.



Sam Altman on AI morality, ethics and finding God in ChatGPT



Look hard enough at an AI chatbot’s output and it starts to look like scripture. At least, that’s the unsettling undercurrent of Sam Altman’s recent interview with Tucker Carlson – a 57-minute exchange that had everything from deepfakes to divine design, from moral AI frameworks to existential dread, even touching upon the tragic death of an OpenAI whistleblower. To his credit, Sam Altman – the man steering the most influential AI system on the planet, OpenAI’s ChatGPT – wasn’t evasive in his responses. He was honest, vulnerable, even contradictory at times. Which made his answers all the more illuminating.


“Do you believe in God?” Tucker Carlson asked directly, without mincing words. “I think probably like most other people, I’m somewhat confused about this,” Sam Altman replied. “But I believe there is something bigger going on than… can be explained by physics.”

It’s the kind of answer you might expect from a quantum physicist or a sci-fi writer – not the CEO of a company that shapes how billions of people interact with knowledge. But that’s precisely what makes Altman’s quiet agnosticism so fascinating. He neither shows theistic certainty nor waves the flag of militant atheism. He simply admits he doesn’t know. And yet, he’s helping build the most powerful simulation engine for human cognition we’ve ever known.

Altman on ChatGPT and AI’s moral compass and religion

In another question, Tucker Carlson described ChatGPT’s output as having “the spark of life,” and suggested many users treat it as a kind of oracle.

“There’s something divine about this,” Carlson said. “There’s something bigger than the sum total of the human inputs… it’s a religion.”

Sam Altman didn’t flinch when he said, “No, there’s nothing to me at all that feels divine about it or spiritual in any way. But I am also, like, a tech nerd. And I kind of look at everything through that lens.”

It’s a revealing response. Because what happens when someone who sees the world as a system of probabilities and matrices starts programming “moral” decisions into the machines we consult more often than our friends, therapists, or priests?


Altman does not deny that ChatGPT reflects a moral structure – it has to, to some degree, purely in order to function. But he’s clear that this isn’t morality in the biblical sense.

“We’re training this to be like the collective of all of humanity,” he explains. “If we do our job right… some things we’ll feel really good about, some things that we’ll feel bad about. That’s all in there.”

This idea – that ChatGPT is the average of our moral selves, a statistical mean of our human knowledge pool – is both radical and terrifying. Because when you average out humanity’s ethical behaviour, do you necessarily get what’s true and just? Or something that’s more bland, crowd-sourced, and neither here nor there?

Altman admits this: “We do have to align it to behave one way or another… there are absolute bounds that we draw.” But who decides those bounds? OpenAI? Nation-states? Market forces? A default setting on a server in an obscure datacenter?

As Carlson rightly pressed, “Unless [the AI model] admits what it stands for… it guides us in a kind of stealthy way toward a conclusion we might not even know we’re reaching.” Altman’s answer to this was to front the “model spec” – a living document outlining intended behaviours and moral defaults. “We try to write this all out,” he said. “People do need to know.” It’s a start. But let’s not confuse documentation for philosophy.

Altman on privacy, biometrics, and AI’s war on reality

If AI becomes the mirror in which humanity stares long enough to worship itself, what happens when that mirror is fogged, gamed, or deepfaked?

Altman is clear-eyed about the risks: “These models are getting very good at bio… they could help us design biological weapons.” But his deeper fear is more subtle. “You have enough people talking to the same language model,” he observed, “and it actually does cause a change in societal scale behaviour.”

He gave the example of users adopting the model’s voice – its rhythm, its diction, even its overuse of em dashes. That’s not a glitch. That’s the first sign of culture being rewritten, adapting and changing itself as a new technology takes hold.


On the subject of AI deepfakes, Altman was pragmatic: “We are rapidly heading to a world where… you have to really have some way to verify that you’re not being scammed.” He mentioned cryptographic signatures for political messages and crisis code words for families. It all sounds like spycraft. But in a world where your child’s voice can be faked to drain your bank account, maybe it has to be.

What he resists, though, is mandatory biometric verification to use AI tools. “You should just be able to use ChatGPT from any computer,” he says.

That tension – between security and surveillance, authenticity and anonymity – will only grow sharper. In an AI-mediated world, proving you’re real might cost you your privacy.

What to make of Altman’s views on AI’s morality?

Watching Altman wrestle with the moral alignment and spiritual implications of (ChatGPT and) AI reminded me of Prometheus – not the Greek god, but the Ridley Scott movie. The one where humanity finally meets its maker only to find the maker just as confused as they were.

Sam Altman isn’t without flaws, no doubt. While grappling with Tucker Carlson’s questions on AI’s morality, religiosity and ethics, Altman came across as largely thoughtful, conflicted, and arguably burdened. But that doesn’t mean his creation isn’t dangerous.

The question is no longer whether AI will become godlike. The question is whether we’ve already started treating it like a god. And if so, what kind of faith we’re building around it. I don’t know if AI has a soul. But I know it has a style. And as of now, it’s ours. Let’s not give it more than that, shall we?


Jayesh Shinde, Executive Editor at Digit




