Ethics & Policy

Advisory Committee to start on AI Plan

The Provost’s Advisory Committee on Artificial Intelligence will begin its work on an Academic Affairs Artificial Intelligence Plan this year, aiming to prepare and inform the DePaul community about AI by developing ethical guidelines and recommendations for all colleges and departments. 

The project was announced at academic convocation on Wednesday, Sept. 3, by Provost Salma Ghanem. John Shanahan, English professor and Associate Provost for Student Success and Accreditation, is leading this initiative.

“This is a chance for this group to help bring all of the stuff going on at DePaul, all the different versions of what artificial intelligence means and try to focus it so that strategically we can think ahead and get students prepared well,” Shanahan said. 

As part of the Designing DePaul Strategic Plan, a university community initiative launched in the 2023-2024 academic year, the Advisory Committee will include representatives from various DePaul departments, including Information Systems, Research Services, the Center for Teaching and Learning, the University Registrar, the library and the AI Institute. Additionally, members of the committee will be elected from the Staff Council, Faculty Council and SGA.

The first meeting will be in October, and Shanahan hopes to start assembling data for policy recommendations by winter. In spring, Shanahan expects a completed set of recommendations, updates to current policy and possibly new curriculum. 

“The idea of getting people together from faculty, staff and students is that when they talk together, we can figure out collectively, what are the best transparent processes?” Shanahan said. 

Shanahan says he anticipates monthly committee meetings to gather best practices and feedback and to learn from other universities’ approaches to AI. The group will produce reports on its findings.

James Moore, instructor and director of online learning for the Driehaus College of Business, works closely with the DePaul AI Institute.

“The only way that you can change the culture is if you allow everyone to have a voice,” Moore said. “We’re focused on students.” 

Because AI technology is a part of the future, Moore says it is the university’s job to make sure students have the tools to discuss it and use it ethically. 

Bamshad Mobasher, director of DePaul’s AI Institute, says that having students on the committee allows recommendations to be more useful and impactful. 

“Students are using these tools,” Mobasher said. “So it is important for this committee to understand who is using it and in what ways, and what would be the impact of any recommendations.”

Shanahan says the AI advisory committee plans to hold open listening sessions with a moderator to discuss AI and how this technology is being used at DePaul in order to better inform their recommendations. Additionally, the committee will hold “ask me anything” sessions where an expert or panel of experts will answer questions. 

“I hope DePaul spends a lot of reflective time on this really transformative technology,” Shanahan said. “We want to get the right AI approach for our students and that is what this committee is for.” 

Moore explains that because AI touches all areas of study, it is critical for all students to have some level of AI literacy — something that the Academic Affairs Artificial Intelligence Plan will hopefully provide the tools for.

“So students coming out, no matter what they’ve studied in college, they’ve got a practical experience with AI because that’s what our stakeholders, our employers are looking for and we need to provide that,” Moore said. 

Many DePaul departments already have AI training and tools, including DePaul’s Approach to Artificial Intelligence, its Teaching Guides and resources in the Center for Teaching and Learning, among others.

“It’s not just that the university’s woken up this week and said, ‘We’re doing this,’” Moore said. “There’s been lots of sorts of things in the background that were perhaps less promoted.”

Already this year, the Driehaus College of Business announced its 2025-26 initiative, “AI@Driehaus,” which aims to build AI literacy into the curriculum in the hopes of preparing students for their careers.

“We are embedding AI into the core of business education, challenging traditional models, and empowering our community to innovate,” Sulin Ba, dean of the Driehaus College of Business, said in an email announcing the plan.

The Academic Affairs Artificial Intelligence Plan aims to provide more robust university-wide recommendations as generative AI continues to generate both excitement and concern.

“The problem with generative (AI) is accuracy,” Moore said. “There’s hallucinations, there’s bias, there’s ethical issues.” 

Mobasher says universal clarity on ethical usage is needed in order to navigate forward.

“I hope that it provides some useful guidelines and resources for faculty, staff and students to navigate this really complex maze of all of these tools and different applications,” Mobasher said. 

 

Ethics & Policy

Shaping Global AI Governance at the 80th UNGA

Global AI governance is currently at a critical juncture. Rapid advancements in technology are presenting exciting opportunities but also significant challenges. The rise of AI agents — AI systems that can reason, plan, and take direct action — makes strong international cooperation more crucial than ever. To create safer and more responsible AI that benefits people and society, we must work collectively on a global scale.

Partnership on AI (PAI) has been deeply engaged in these conversations, bridging the gap between AI development and responsible policy.

Our team has crossed the globe, connecting with partners and collaborators at key events this year, from the AI Action Summit in Paris to the World AI Conference in Shanghai and the Global AI Summit on Africa in Kigali. This builds on the discussion at PAI’s 2024 Policy Forum and the Policy Alignment on AI Transparency report published last October, both of which explored how AI governance efforts align with one another and highlighted the need for international cooperation and coordination on AI policy.

Our journey next takes us to the 80th session of the United Nations General Assembly (UNGA), taking place in New York this week.

In addition to marking the 80th anniversary of the UN, this year’s UNGA is a call for renewed commitment to multilateralism. It also serves as the official launch of the new UN Global Dialogue on AI Governance. The UN is a crucial piece of the global AI governance puzzle, as a universal and inclusive forum where every nation, regardless of size or influence, has a voice in shaping the future of this technology.

To celebrate this milestone anniversary, PAI is bringing together its community of Partners, policymakers, and other stakeholders for a series of events alongside the UNGA. This is a pivotal moment that demands increased global cooperation amid a challenging geopolitical environment. Our community has identified two particularly important and challenging areas for global AI governance this year:

  1. The opportunities and challenges of AI agents (with 2025 dubbed the “year of agents”) across different fields, including AI safety, human connection, and public policy
  2. The need to build a more robust global AI assurance ecosystem, with AI assurance defined as the process of assessing whether an AI system or model is trustworthy

To inform these important discussions and build on our support for the UN Global Digital Compact, PAI is bringing both topics to the attention of the community of UN stakeholders through a series of UNGA side events and publications on both issues. The issues align with the mandates of two new UN AI mechanisms: the UN Independent International Scientific Panel on AI and the Global Dialogue.

The Scientific Panel is tasked with issuing “evidence-based scientific assessments” that synthesize and analyze existing research on the opportunities, risks, and impacts of AI.

Meanwhile, the role of the Global Dialogue is to discuss international cooperation, share best practices and lessons learned, and to facilitate discussions on AI governance to advance the sustainable development goals (SDGs), including on the development of trustworthy AI systems; the protection of human rights in the field of AI; and transparency, accountability, and human oversight consistent with international law.

AI agents are a new research topic that the international community needs to better understand, considering opportunities and potential risks in areas such as human oversight, transparency, and human rights. We expect this topic to be taken up by the Scientific Panel and brought to the attention of the Global Dialogue.

PAI’s work on AI agents includes three key publications:

  1. A Real-time Failure Detection Framework that provides guidance on how to monitor and thereby prevent critical failures in the deployment of autonomous AI agents, which could lead to hazards or real-world incidents that can harm people, disrupt infrastructure, or violate human rights.
  2. An International Policy Brief that offers anticipatory guidance on how to manage the potential cross-border harms and human rights impacts of AI agents, leveraging foundational global governance tools, i.e., international law, non-binding global norms, and global accountability mechanisms.
  3. A Policy Research Agenda that outlines priority questions that policymakers and the scientific community should explore to ensure that we govern AI agents in an informed manner domestically, regionally, and globally.

At the same time, we believe a robust AI assurance ecosystem is crucial to enabling trust and unlocking opportunities for adoption in line with the SDGs and international law. Both the Scientific Panel and the Global Dialogue can help fill significant research and implementation gaps in this area.

Looking ahead, we will expand our focus on AI assurance, with plans to publish a white paper, progress report, and international policy brief at the end of 2025 and early 2026. These publications will touch on issues ranging from the challenges to effective AI assurance, such as insufficient incentives and access to documentation, to AI assurance needs in the Global South.

We hope these contributions will not only inform discussions at the UN but also in other important international AI governance forums, including the OECD’s Global Partnership on AI Expert Group Meeting in November, the G20 Summit in November, and the AI Impact Summit in India next year.

The global conversation on AI governance is still in the early stages, and PAI is committed to ensuring that it is an inclusive, informed, and effective one. To stay up to date on our work in this area, sign up for our newsletter.




Ethics & Policy

Sam Altman Warns of AI Risks, Ethics, and Bubble in Carlson Interview

Altman’s Candid Reflections on AI Ethics

In a revealing interview with Tucker Carlson, OpenAI CEO Sam Altman opened up about the sleepless nights plaguing him amid the rapid evolution of artificial intelligence. Altman confessed that the moral quandaries surrounding his company’s chatbot, ChatGPT, keep him awake, particularly decisions on handling sensitive user interactions like suicide prevention. He emphasized the weight of these choices, noting that OpenAI strives to set ethical boundaries while respecting user privacy.

Altman delved into broader societal impacts, warning of potential “AI privilege” where access to advanced tools could exacerbate inequalities. He called for global input to shape AI’s future, highlighting the need for inclusive regulation to mitigate risks like fraud and even engineered pandemics, as reported in a recent WebProNews article on his predictions for workforce transformation by 2025.

Confronting Conspiracy Theories and Personal Attacks

The conversation took a dramatic turn when Carlson pressed Altman on a conspiracy theory tied to the 2024 death of former OpenAI researcher Suchir Balaji, found with a gunshot wound in his San Francisco apartment. Altman firmly denied any involvement, expressing frustration over baseless accusations that have swirled online. This exchange, detailed in a Moneycontrol report, underscores the intense scrutiny Altman faces as AI’s public figurehead.

Posts on X have amplified these tensions, with users alleging that Altman has a history of misleading statements, including claims from former board members that his 2023 ousting stemmed from dishonesty over safety testing for models like GPT-4. Such sentiments echo broader criticisms, as seen in Wikipedia’s account of his temporary removal from OpenAI, which cited concerns over AI safety and alleged abusive behavior.

Navigating Past Scandals and Industry Rivalries

Altman’s tenure has been marred by high-profile controversies, including a lawsuit from his sister Ann alleging sexual abuse from 1997 to 2006, as covered by the BBC. He has denied these claims, but they add to the narrative of personal and professional turmoil. In the interview, Altman addressed his dramatic 2023 ousting and reinstatement, attributing it to boardroom clashes over leadership and safety priorities.

He also touched on rivalries, particularly with Elon Musk, whom he accused of initially dismissing OpenAI’s prospects before launching a competing venture and filing lawsuits. This feud, highlighted in X posts and a Guardian profile, paints Altman as a resilient but polarizing leader who has outmaneuvered opponents like Musk and dissenting board members.

Vision for AI’s Future Amid Economic Warnings

Looking ahead, Altman expressed optimism about AI’s potential to create “transcendentally good” systems through new computing paradigms, as noted in a Yahoo Finance piece. Yet, he cautioned about an emerging AI bubble, likening it to the dot-com era in a CNBC report from August 2025, amid surging industry investments.

Altman advocated for open-source models to democratize AI, mentioning plans for powerful releases, per discussions at TED events. However, critics on X question his motives, pointing to OpenAI’s shift from nonprofit to for-profit status and price hikes for ChatGPT, which they argue prioritize profits over accessibility.

Balancing Innovation with Societal Safeguards

In addressing workforce changes, Altman predicted significant transformations by 2025, urging preparation for AI-driven disruptions while emphasizing ethical safeguards. He also reflected on cultural shifts, voicing a preference for phone calls over endless meetings that sparked a Times of India debate, suggesting a return to efficient communication in an AI-augmented world.

Ultimately, Altman’s interview reveals a leader grappling with immense power and responsibility. As OpenAI pushes boundaries, from contextual AI awareness to global ethical frameworks, the controversies surrounding him highlight the high stakes of steering humanity’s technological frontier. With regulatory eyes watching and public sentiment divided, as evident in real-time X discussions, Altman’s path forward demands transparency to rebuild trust in an era where AI’s promise and perils are inextricably linked.




Ethics & Policy

Sam Altman on AI morality, ethics and finding God in ChatGPT

Look hard enough at an AI chatbot’s output and it starts to look like scripture. At least, that’s the unsettling undercurrent of Sam Altman’s recent interview with Tucker Carlson – a 57-minute exchange that had everything from deepfakes to divine design, from moral AI frameworks to existential dread, even touching upon the tragic death of an OpenAI whistleblower. To his credit, Sam Altman – the man steering the most influential AI system on the planet, OpenAI’s ChatGPT – wasn’t evasive in his responses. He was honest, vulnerable, even contradictory at times. Which made his answers all the more illuminating.


“Do you believe in God?” Tucker Carlson asked directly, without mincing words. “I think probably like most other people, I’m somewhat confused about this,” Sam Altman replied. “But I believe there is something bigger going on than… can be explained by physics.”

It’s the kind of answer you might expect from a quantum physicist or a sci-fi writer – not the CEO of a company that shapes how billions of people interact with knowledge. But that’s precisely what makes Altman’s quiet agnosticism so fascinating. He neither shows theistic certainty nor waves the flag of militant atheism. He simply admits he doesn’t know. And yet, he’s helping build the most powerful simulation engine for human cognition we’ve ever known.

Altman on ChatGPT, AI’s moral compass and religion

Later in the interview, Tucker Carlson described ChatGPT’s output as having “the spark of life” and suggested many users treat it as a kind of oracle.

“There’s something divine about this,” Carlson said. “There’s something bigger than the sum total of the human inputs… it’s a religion.”

Sam Altman didn’t flinch when he said, “No, there’s nothing to me at all that feels divine about it or spiritual in any way. But I am also, like, a tech nerd. And I kind of look at everything through that lens.”

It’s a revealing response. Because what happens when someone who sees the world as a system of probabilities and matrices starts programming “moral” decisions into the machines we consult more often than our friends, therapists, or priests?

Altman does not deny that ChatGPT reflects a moral structure – it has to, to some degree, purely in order to function. But he’s clear that this isn’t morality in the biblical sense.

“We’re training this to be like the collective of all of humanity,” he explains. “If we do our job right… some things we’ll feel really good about, some things that we’ll feel bad about. That’s all in there.”

This idea – that ChatGPT is the average of our moral selves, a statistical mean of our human knowledge pool – is both radical and terrifying. Because when you average out humanity’s ethical behaviour, do you necessarily get what’s true and just? Or something that’s more bland, crowd-sourced, and neither here nor there?

Altman admits this: “We do have to align it to behave one way or another… there are absolute bounds that we draw.” But who decides those bounds? OpenAI? Nation-states? Market forces? A default setting on a server in an obscure datacenter?

As Carlson rightly pressed, “Unless [the AI model] admits what it stands for… it guides us in a kind of stealthy way toward a conclusion we might not even know we’re reaching.” Altman’s answer was to point to the “model spec” – a living document outlining intended behaviours and moral defaults. “We try to write this all out,” he said. “People do need to know.” It’s a start. But let’s not confuse documentation for philosophy.

Altman on privacy, biometrics, and AI’s war on reality

If AI becomes the mirror in which humanity stares long enough to worship itself, what happens when that mirror is fogged, gamed, or deepfaked?

Altman is clear-eyed about the risks: “These models are getting very good at bio… they could help us design biological weapons.” But his deeper fear is more subtle. “You have enough people talking to the same language model,” he observed, “and it actually does cause a change in societal scale behaviour.”

He gave the example of users adopting the model’s voice – its rhythm, its diction, even its overuse of em dashes. That’s not a glitch. That’s the first sign of culture being rewritten, adapting and changing itself in the face of growing adoption of a new technology.

On the subject of AI deepfakes, Altman was pragmatic: “We are rapidly heading to a world where… you have to really have some way to verify that you’re not being scammed.” He mentioned cryptographic signatures for political messages. Crisis code words for families. It all sounds like spycraft amid growing AI tension. Because in a world where your child’s voice can be faked to drain your bank account, maybe it has to be.

What he resists, though, is mandatory biometric verification to use AI tools. “You should just be able to use ChatGPT from any computer,” he says.

That tension – between security and surveillance, authenticity and anonymity – will only grow sharper. In an AI-mediated world, proving you’re real might cost you your privacy.

What to make of Altman’s views on AI’s morality?

Watching Altman wrestle with the moral alignment and spiritual implications of (ChatGPT and) AI reminded me of Prometheus – not the Greek god, but the Ridley Scott movie. The one where humanity finally meets its maker only to find the maker just as confused as they were.

Sam Altman isn’t without flaws, no doubt. While grappling with Tucker Carlson’s questions on AI’s morality, religiosity and ethics, he came across as largely thoughtful, conflicted and arguably burdened. But that doesn’t mean his creation isn’t dangerous.

The question is no longer whether AI will become godlike. The question is whether we’ve already started treating it like a god. And if so, what kind of faith we’re building around it. I don’t know if AI has a soul. But I know it has a style. And as of now, it’s ours. Let’s not give it more than that, shall we?

Jayesh Shinde

Executive Editor at Digit. Technology journalist since Jan 2008, with stints at Indiatimes.com and PCWorld.in. Enthusiastic dad, reluctant traveler, weekend gamer, LOTR nerd, pseudo bon vivant.




