
Canadian Perspectives on AI Governance, Risks vs. Harms, and the Slippery Slope Ahead



Welcome to The AI Ethics Brief, a bi-weekly publication by the Montreal AI Ethics Institute. Stay informed on the evolving world of AI ethics with key research, insightful reporting, and thoughtful commentary. Learn more at montrealethics.ai/about.

Follow MAIEI on Bluesky and LinkedIn.

We invite you to join us in honouring the life and legacy of Abhishek Gupta, founder of the Montreal AI Ethics Institute (MAIEI), at a memorial gathering on Thursday, April 10, from 6:30 PM to 8:30 PM in Montreal, Quebec, Canada.

Please mark your calendars and register your interest here. This will be an in-person event. If you’re interested in a Zoom option, please indicate your preference when registering, and we will follow up if virtual attendance becomes available.

To learn more about Abhishek and share your memories or photos, please visit his digital memorial.

Register to attend or indicate interest in a Zoom option

In this edition:

  • Canada’s G7 Presidency: AI, Climate, and Accountability

  • Microsoft Pulls Back on AI Data Center Leases, Raising Questions About AI Demand

  • AI Policy Corner: The Turkish Artificial Intelligence Law Proposal

  • ISED Launches AI Risk Management Guide Based on Voluntary Code

  • Risks vs. Harms: Unraveling the AI Terminology Confusion

  • Politics And The Perils Of AI: Exacerbating Social Divides In Canada – Forbes

  • Inside Elon Musk’s ‘Digital Coup’ – Wired

  • A Reddit moderation tool is flagging ‘Luigi’ as potentially violent content – The Verge

As Canada leads the G7 in 2025, AI governance, energy, and civic freedoms are at a crossroads. Prime Minister Mark Carney’s pledges on housing, energy, and AI infrastructure now face their real test.

The AI Strategy for the Federal Public Service (2025-2027) promises “responsible AI adoption,” but history reminds us that oversight and accountability—not just ambition—define success. In 2018, during Canada’s last G7 presidency, AI ethics discussions at the G7 Multistakeholder Conference on AI foreshadowed today’s challenges.

The late Abhishek Gupta warned that AI literacy and governance would be crucial—a lesson even more urgent now as AI systems increasingly dictate immigration, surveillance, and digital policy. In an excerpt from her book, Am I Literate? Redefining Literacy in the Age of Artificial Intelligence, Kate Arthur shares a story from that G7 conference, where she hesitated to ask:

“What about the kids? The future workforce? How are we preparing them to thrive in a world dominated by AI? Shouldn’t their education be part of this conversation?”

Abhishek recognized the importance of her perspective, encouraged her to speak, and emphasized that AI education is critical to ensuring ethical and inclusive decision-making in an automated world.

“Abhishek continued to add layers of context that made the gravity of the issue clear in ways I had not even considered. He pointed out that individuals needed to recognise how AI systems shape decisions, reinforce societal and systemic inequalities, and amplify existing biases. It is only by equipping the future workforce with AI literacy skills and tools, giving them a deep understanding of the ethical challenges, that we can ensure AI systems are built to support a healthy and inclusive society. Heads began to nod in agreement. The conversation deepened, shifting from the theoretical to the practical. We explored AI’s broader societal impacts, including the ethical dilemmas tied to its design, development, and deployment—and the role of education.”

Read the full excerpt here.

Carney now inherits Trudeau’s balancing act: advancing AI without compromising climate goals. Trudeau’s remarks in Paris made clear the scale of AI’s energy demands, yet Canada’s role in sustainable AI remains uncertain.

The question isn’t just what Canada will do with AI during its G7 presidency, but who gets a say? With rising concerns over AI-powered surveillance and opaque decision-making, Canada must lead with transparency—or risk repeating the mistakes of unaccountable AI rollouts.

Canada’s G7 leadership offers a chance to push for transparent, accountable AI governance. However, the risks of bias, exclusion, and power imbalances in AI deployment—particularly in immigration, public services, and law enforcement—remain high.

For AI to serve the public good, Canada must commit to:

  • Transparent AI policies—ensuring all government AI systems are open to public scrutiny.

  • Stronger accountability mechanisms—defining clear responsibility when AI harms individuals or communities and providing accessible pathways for redress.

  • Public engagement—bringing diverse voices, including civil society, into AI governance decisions.

AI can be a force for good—but only if it is ethical, accountable, and inclusive.

A recent TD Cowen report reveals that Microsoft has cancelled hundreds of megawatts of U.S. data centre leases, roughly the capacity of two data centres. The company also terminated agreements with multiple private operators and halted some preliminary lease conversions.

While Microsoft maintains its $80 billion infrastructure investment plan, the pullback has fueled speculation about its AI computing strategy.

According to TD Cowen, possible factors include:

  • OpenAI potentially shifting workloads from Microsoft to Oracle as part of a new partnership

  • Microsoft reallocating investments from international to U.S. locations

  • The company possibly finding itself in an “oversupply position”

This comes as the industry grapples with AI’s long-term viability despite massive investment commitments.

Microsoft’s reported pullback on data centre leases raises a number of key ethical considerations:

  • Environmental Impact: Data centres consume vast amounts of energy. Does this pullback signal efficiency gains, or unchecked AI expansion straining sustainability?

  • Market Power & Governance: AI workloads are concentrated among a few dominant cloud providers—who controls AI development infrastructure, and are current governance structures sufficient to ensure fair access and accountability?

  • AI Hype vs. Reality: The TD Cowen report raises short-term concerns about Microsoft’s AI infrastructure capacity planning. Is Microsoft adjusting for real demand or reacting to market pressures?

For a sharper critique, check out Ed Zitron’s Power Cut.

Did we miss anything? Let us know in the comments below.

Leave a comment

The chilling Axios report on the U.S. State Department using AI to revoke the visas of foreign students who appear to support Hamas is a stark reminder of the slippery slope we’re on.

AI-driven decision-making in immigration and national security has long been fraught with risks: opacity, lack of oversight, and bias. But now, we are seeing the direct consequences of these risks intersecting with free speech and due process: automated systems policing speech with no clear accountability.

Who decides what counts as “pro-Hamas”? What signals will AI models be trained on? Social media posts? Books read? Associations?

And more importantly—who will be next?

The opacity of these AI systems means that those affected may have no way of understanding or challenging these decisions. There is no clear ownership of AI failures, making redress nearly impossible. Bias will go unaddressed, and these AI systems will continue to operate in the shadows, amplifying injustices without accountability.

As Taylor Lorenz aptly warns, “The attacks on free speech should terrify us all.”

Timnit Gebru echoes a similar concern, noting that due process seems to have disappeared entirely. Yale University recently suspended a scholar after an AI-powered news site accused them of a terrorist link—without transparent evidence or accountability.

“Watch what comes next for you, courtesy of so-called ‘AI-powered news’ sites,” Gebru remarks on LinkedIn, “targeting you and institutions who can’t wait to comply, unaware that they’re setting the stage for their own targeting. If you’re accused of being ‘a terrorist,’ then anything goes.”

With the United States now added to the CIVICUS Monitor Watchlist due to growing threats to human rights and civic freedoms under the Trump administration, it’s unclear where this ends.

Today, it’s international students and scholars. Tomorrow, who else?

Please share your thoughts with the MAIEI community:

Leave a comment

Help us keep The AI Ethics Brief free and accessible for everyone by becoming a paid subscriber on Substack for the price of a coffee, or by making a one-time or recurring donation at montrealethics.ai/donate.

Your support sustains our mission of Democratizing AI Ethics Literacy, honours Abhishek Gupta’s legacy, and ensures we can continue serving our community.

For corporate partnerships or larger donations, please contact us at support@montrealethics.ai.

In each edition, we highlight a question from the MAIEI community and share our insights. Have a question on AI ethics? Send it our way, and we may feature it in an upcoming edition!

Leave a comment

We want to hear your thoughts on how AI governance should balance human oversight with automation’s efficiency gains. As AI systems take on more decision-making roles—from hiring processes to content moderation—finding the right balance between human judgment and AI-driven speed is crucial.

Should humans always have the final say, or can AI be trusted to operate autonomously with ethical safeguards? Where should we draw the line between efficiency and accountability?

💡 Vote and share your thoughts!

  1. Human-first approach – Prioritize human decision-making in high-stakes areas.

  2. AI-assisted, human-approved – Use AI for efficiency, but require human final oversight.

  3. Automation with safeguards – Automate where possible, ensuring ethical protections.

  4. Full automation – Maximize AI for speed and scalability, minimizing human involvement.

Our latest informal poll (n=34) reveals key insights into public sentiment regarding AI-generated content. The results indicate a strong preference for transparency, with 56% of respondents emphasizing the importance of disclosing AI-generated content. This suggests that while AI is becoming more integrated into content creation, trust and transparency remain critical factors in its acceptance.

  • Transparency is a Priority:
    The most popular response (56%) was that AI-generated content should always be disclosed. This highlights concerns about authenticity and the potential for AI-generated misinformation.

  • Human Creativity Still Matters:
    26% of respondents indicated that “Human touch matters,” reflecting a belief that AI-generated content lacks the emotional depth, creativity, and nuance that human creators bring. This suggests that AI is seen as a tool rather than a replacement for human content creators.

  • Context Influences Perception:
    12% of respondents highlighted that “Context is key,” indicating that people may be more accepting of AI-generated content in certain scenarios (e.g., data analysis, summaries) but less so in others (e.g., journalism, creative writing).

  • Substance Over Source?:
    Only 6% of respondents said, “It’s all about substance,” implying that for most people, how content is created (AI vs. human) matters in its own right, not just the quality of the final output. This challenges the idea that audiences are indifferent to AI-generated content as long as it meets quality standards.

These results reflect broader AI governance and ethical concerns related to disclosure, authenticity, and human involvement in AI-generated content. The emphasis on transparency aligns with growing regulatory discussions on AI labeling policies and the need for clearer guidelines on AI-generated materials. Additionally, the preference for human involvement suggests that AI should remain a tool to assist, rather than replace, human creativity.
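
As a rough sanity check on those figures, the approximate headcounts can be reconstructed from the reported percentages. Below is a minimal Python sketch; the per-option counts are rounded estimates, since the newsletter published only percentages and the sample size (n=34), not raw counts.

    # Reconstruct approximate respondent counts from the reported
    # percentages (n=34). The raw counts were not published, so the
    # rounded figures below are estimates, not official results.
    n = 34
    reported = {
        "AI-generated content should always be disclosed": 56,
        "Human touch matters": 26,
        "Context is key": 12,
        "It's all about substance": 6,
    }
    for option, pct in reported.items():
        count = round(n * pct / 100)
        print(f"{option}: {pct}% -> about {count} of {n} respondents")
    # The rounded counts (19 + 9 + 4 + 2) sum to 34, consistent with
    # the reported percentages.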

Please share your thoughts with the MAIEI community:

Leave a comment

AI Policy Corner: The Turkish Artificial Intelligence Law Proposal

By Selen Dogan Kosterit. This article is part of our AI Policy Corner series, a collaboration between the Montreal AI Ethics Institute (MAIEI) and the Governance and Responsible AI Lab (GRAIL) at Purdue University. The series provides concise insights into critical AI policy developments from the local to international levels, helping our readers stay informed about the evolving landscape of AI governance. This inaugural piece spotlights Turkey’s AI law proposal, examining its strengths and the gaps in aligning with global AI governance frameworks.

To dive deeper, read the full article here.

ISED Launches AI Risk Management Guide Based on Voluntary Code

By Sun Gyoo Kang. ISED’s new Implementation Guide for Managers of Artificial Intelligence Systems offers practical governance strategies despite Canada’s stalled AI legislation. The Guide complements ISED’s Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems and provides actionable frameworks across five key principles: Safety (comprehensive risk assessment), Accountability (robust policies and procedures), Human Oversight & Monitoring (preventing unchecked autonomous operation), Transparency (clear AI identification), and Validity & Robustness (ensuring reliable performance across conditions). While the absence of binding regulations such as Bill C-27 leaves significant gaps, the Guide serves as a valuable educational resource with international alignment, detailed best practices, and a repository of standards that may function as a de facto benchmark for responsible AI management in Canada’s evolving regulatory landscape.

To dive deeper, read the full article here.

Risks vs. Harms: Unraveling the AI Terminology Confusion

Op-Ed by Charlie Pownall and Maki Kanayama. Distinguishing between risks and harms seems simple and obvious: risks are negative impacts that may occur, while harms are forms of damage or loss that have already occurred. However, research that AIAAIC has conducted into selected AI and algorithmic harm and risk taxonomies reveals that industry and academia regularly conflate the two terms. These conflations are not merely semantic issues but may have real-world implications, leading to confused and frustrated users and citizens, misguided legislation, and companies neglecting actual, present harms. They also raise important questions about why this is happening to the extent that it is, and what can be done to address the problem.

To dive deeper, read the full article here.

Politics And The Perils Of AI: Exacerbating Social Divides In Canada – Forbes

  • What happened: Finding itself at a critical point in its approach to AI, Canada risks exacerbating, rather than reducing, the equality gap in Canadian society if it is not “intentional” with its AI usage.

  • Why it matters: AI is often promoted as a way to level the playing field, yet in Canada and across North America, its benefits remain concentrated among those with greater resources. Michelle Baldwin, former senior advisor of transformation at Community Foundations of Canada, highlights that among Canada’s 170,000 nonprofits—organizations dedicated to serving marginalized communities—only 7% have adopted AI tools. This signals a disconnect between AI’s rapid advancement and its ability to support social good.

  • Between the lines: AI’s potential to drive social equity is overshadowed by its role in reinforcing existing power structures. The organizations and communities that most need AI-driven efficiencies lack access to the resources required to implement them, while corporations and well-funded institutions accelerate their adoption. If AI is to be truly transformative, policies must ensure it serves the public interest rather than deepening technological and economic divides. Ethical AI governance should focus not just on AI’s capabilities but on who benefits—and who gets left behind.

To dive deeper, read the full summary here.

Inside Elon Musk’s ‘Digital Coup’ – Wired

  • What happened: Elon Musk, the head of the Department of Government Efficiency (DOGE), believes the US government needs to be reset and “debugged,” pushing for an aggressive overhaul of federal operations, cutting funding and gaining access to private databases across the US government. Through firsthand accounts, the article explores how these actions have amounted to what some call a “digital coup.”

  • Why it matters: The article paints the picture of how Elon Musk has gained access to top government offices within a short span of time. It sheds light on the unchecked influence of a tech billionaire within government operations, raising concerns about the consolidation of power, the erosion of institutional safeguards, and the long-term consequences of handing over critical infrastructure to private entities.

  • Between the lines: Musk’s maneuvering reflects broader AI governance issues—who controls data, how decisions are made, and the ethical risks of automating bureaucratic functions. The unchecked expansion of AI-driven decision-making in government could bypass democratic oversight, embedding biases and vulnerabilities into public systems while reducing transparency and accountability.

To dive deeper, read the full summary here.

A Reddit moderation tool is flagging ‘Luigi’ as potentially violent content – The Verge

  • What happened: Reddit’s Automoderator system mistakenly flagged the word “Luigi” as potentially violent content in the popular subreddit r/popculture, owing to its perceived link to the Luigi Mangione case, despite the word’s many unrelated uses, including in a Nintendo context (e.g., Mario and Luigi).

  • Why it matters: While AI moderation tools help ease the load on human content moderators, these tools still lack sufficient contextual awareness, leading to false positives (see the sketch following this list).

  • Between the lines: As AI takes on more content moderation tasks, its lack of nuance highlights the ongoing need for human oversight.
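
To illustrate why this class of failure occurs, here is a minimal sketch of a purely lexical filter. This is not Reddit’s actual Automoderator configuration; it is a toy Python example, with a hypothetical blocklist, showing how keyword matching without contextual awareness flags benign uses of a term alongside the targeted ones.

    import re

    # Hypothetical blocklist: a single term added after a news event.
    FLAGGED_TERMS = {"luigi"}

    def flag_comment(text: str) -> bool:
        """Flag a comment if any blocklisted word appears, regardless of context."""
        words = re.findall(r"[a-z']+", text.lower())
        return any(word in FLAGGED_TERMS for word in words)

    # Both comments are flagged; only context could tell them apart.
    print(flag_comment("Luigi is my favorite character in Mario Kart"))  # True (false positive)
    print(flag_comment("Everyone should celebrate Luigi"))               # True

A lexical rule cannot distinguish the Nintendo character from the news figure; that disambiguation requires contextual signals such as surrounding words and subreddit topic, which is precisely where human oversight remains necessary.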

To dive deeper, read the full summary here.

👇 Learn more about why it matters in AI Ethics via our Living Dictionary.

Explore the Living Dictionary!

We’d love to hear from you, our readers, about any recent research papers, articles, or newsworthy developments that have captured your attention. Please share your suggestions to help shape future discussions!

Leave a comment





Formulating An Artificial General Intelligence Ethics Checklist For The Upcoming Rise Of Advanced AI



In today’s column, I address a topic that hasn’t yet gotten the attention it rightfully deserves. The matter entails the advancement of AI to become artificial general intelligence (AGI), along with the AGI Ethics mindsets and practices that are suitable both on the way to AGI and once we arrive there. You see, there are already plenty of AI ethics guidelines for conventional AI, but few that are attuned to the envisioned semblance of AGI.

I offer a strawman version of an AGI Ethics Checklist to get the ball rolling.

Let’s talk about it.

This analysis of an innovative AI breakthrough is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).

Heading Toward AGI And ASI

First, some fundamentals are required to set the stage for this discussion.

There is a great deal of research going on to further advance AI. The general goal is to either reach artificial general intelligence (AGI) or maybe even the outstretched possibility of achieving artificial superintelligence (ASI).

AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn. For more details on the nature of conventional AI versus AGI and ASI, see my analysis at the link here.

We have not yet attained AGI.

In fact, it is unknown whether we will ever reach AGI; it might be achieved decades or perhaps centuries from now. The AGI attainment dates floating around are wildly varying and wildly unsubstantiated by any credible evidence or ironclad logic. ASI is even more beyond the pale when it comes to where we are currently with conventional AI.

Doomers Versus Accelerators

AI insiders are generally divided into two major camps right now about the impacts of reaching AGI or ASI. One camp consists of the AI doomers. They are predicting that AGI or ASI will seek to wipe out humanity. Some refer to this as “P(doom),” which means the probability of doom, or that AI zonks us entirely, also known as the existential risk of AI (i.e., x-risk).

The other camp entails the upbeat AI accelerationists.

They tend to contend that advanced AI, namely AGI or ASI, is going to solve humanity’s problems. Cure cancer, yes indeed. Overcome world hunger, absolutely. We will see immense economic gains, liberating people from the drudgery of daily toils. AI will work hand-in-hand with humans. This benevolent AI is not going to usurp humanity. AI of this kind will be the last invention humans have ever made, but that’s good in the sense that AI will invent things we never could have envisioned.

No one can say for sure which camp is right, and which one is wrong. This is yet another polarizing aspect of our contemporary times.

For my in-depth analysis of the two camps, see the link here.

Trying To Keep Evil Away

We can certainly root for the upbeat side of advanced AI. Perhaps AGI will be our closest friend, while the pesky and futuristic ASI will be the evil destroyer. The overall sense is that we are likely to attain AGI first before we arrive at ASI.

ASI might take a long time to devise. But maybe the timeline will be a lot shorter than we envision if AGI supports our ASI ambitions. I’ve discussed that AGI might not be especially keen on us arriving at ASI; thus, there isn’t any guarantee that AGI will willingly help propel us toward ASI (see my analysis at the link here).

The bottom line is that we cannot reasonably bet our lives that the likely first arrival, namely AGI, is going to be a bundle of goodness. There is an equally plausible chance that AGI could be an evildoer. Or that AGI will be half good and half bad. Who knows? It could be 1% bad, 99% good, which is a nice dreamy happy face perspective. That being said, AGI could be 1% good and 99% bad.

Efforts are underway to try and prevent AGI from turning out to be evil.

Conventional AI already has demonstrated that it is capable of deceptive practices, and even ready to perform blackmail and extortion (see my discussion at the link here). Maybe we can find ways to stop conventional AI from those woes and then use those same approaches to keep AGI on the upright path to abundant decency and high virtue.

That’s where AI ethics and AI laws come into the big picture.

The hope is that we can get AI makers and AI developers to adopt AI ethics techniques and abide by emerging legal guidelines governing AI so that current-era AI will stay within suitable bounds. By setting conventional AI on a proper trajectory, AGI might come out in the same upside manner.

AI Ethics And AI Laws

There is an abundance of conventional AI ethics frameworks that AI builders can choose from.

For example, the United Nations has an extensive AI ethics methodology (see my coverage at the link here), the NIST has a robust AI risk management scheme (see my coverage at the link here), and so on. They are easy to find. There isn’t an excuse anymore that an AI maker has nothing available to provide AI ethics guidance. Plenty of AI ethics frameworks exist and are readily available.

Sadly, some AI makers don’t care about such practices and see them as impediments to making fast progress in AI. It is the classic belief that it is better to ask forgiveness than to seek permission. A concern with this mindset is that we could end up with an AGI that carries full-on x-risk, at which point matters will be far beyond our ability to prevent catastrophe.

AI makers should also be keeping tabs on the numerous new AI laws that are being established and that are rapidly emerging, see my discussion at the link here. AI laws are considered the hard or tough side of regulating AI since laws usually have sharp teeth, while AI ethics is construed as the softer side of AI governance due to typically being of a voluntary nature.

From AI To AGI Ethics Checklist

We can stratify the advent of AGI into three handy stages:

  • (1) Pre-AGI. This includes today’s conventional AI and the rest of the pathway up to attaining AGI.
  • (2) Attained-AGI. This would be the time at which AGI has been actually achieved.
  • (3) Post-AGI. This is after AGI has been attained and we are dealing with an AGI era upon us.

I propose here a helpful AGI Ethics Checklist that would be applicable across all three stages. I’ve constructed the checklist by considering the myriad versions devised for conventional AI and have boosted and adjusted them to accommodate the nature of the envisioned AGI.

To keep the AGI Ethics Checklist usable for practitioners, I opted to focus on the key factors that AGI warrants. The numbering of the checklist items is only for convenience of reference and does not denote any semblance of priority. They are all important. Generally speaking, they are all equally deserving of attention.

Here then is my overarching AGI Ethics Checklist:

  • (1) AGI Alignment and Safety Policies. Key question: How can we ensure that AGI acts in ways that are beneficial to humanity and avoid catastrophic risks (which, in the main, entail alignment with human values, and the safety of humankind)?
  • (2) AGI Regulations and Governance Policies. Key question: What is the impact of AGI-related regulations such as new laws, existing laws, etc., and the emergence of efforts to instill AI governance modalities into the path to and attainment of AGI?
  • (3) AGI Intellectual Property (IP) and Open Access Policies. Key question: In what ways will IP laws restrict or empower the advent of AGI, and likewise, how will open source versus closed source have an impact on AGI?
  • (4) AGI Economic Impacts and Labor Displacement Policies. Key question: How will AGI and the pathway to AGI have economic impacts on society, including for example labor displacement?
  • (5) AGI National Security and Geopolitical Competition Policies. Key question: How will AGI have impacts on national security such as bolstering the security and sovereignty of some nations and undermining other nations, and how will the geopolitical landscape be altered for those nations that are pursuing AGI or that attain AGI versus those that are not?
  • (6) AGI Ethical Use and Moral Status Policies. Key question: How will unethical uses of AGI impact the pathway and advent of AGI, how would positive ethical uses encoded into AGI be of benefit or detriment, and what impact would recognizing AGI as having legal personhood or moral status have?
  • (7) AGI Transparency and Explainability Policies. Key question: How will the degree of AGI transparency and interpretability or explainability impact the pathway and attainment of AGI?
  • (8) AGI Control, Containment, and “Off-Switch” Policies. Key question: A societal concern is whether AGI can be controlled, and/or contained, and whether an off-switch or deactivation mechanism will be possible or might be defeated and readily overtaken by AGI (so-called runaway AGI) – what impact do these considerations have on the pathway and attainment of AGI?
  • (9) AGI Societal Trust and Public Engagement Policies. Key question: During the pathway and the attainment of AGI, what impact will societal trust in AI and public engagement have, especially when considering potential misinformation and disinformation about AGI (along with secrecy associated with the development of AGI)?
  • (10) AGI Existential Risk Management Policies. Key question: A high-profile worry is that AGI will lead to human extinction or human enslavement – what impact will this have on the pathway and attainment of AGI?

In my upcoming column postings, I will delve deeply into each of the ten. This is the 30,000-foot level or top-level perspective.
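
As a purely illustrative aid (my own sketch, not part of any official framework), the checklist lends itself to a machine-readable form so that a governance team can track the status of each of the ten items across the three stages outlined earlier. The Python below uses hypothetical status labels; adapt them to your own process.

    from dataclasses import dataclass, field
    from enum import Enum

    class Stage(Enum):
        PRE_AGI = "pre-AGI"
        ATTAINED_AGI = "attained-AGI"
        POST_AGI = "post-AGI"

    @dataclass
    class ChecklistItem:
        number: int
        name: str
        # Status per stage: "not started", "in progress", or "adopted".
        status: dict = field(
            default_factory=lambda: {s: "not started" for s in Stage}
        )

    ITEMS = [
        ChecklistItem(1, "Alignment and Safety Policies"),
        ChecklistItem(2, "Regulations and Governance Policies"),
        ChecklistItem(3, "Intellectual Property and Open Access Policies"),
        # ...items 4 through 9 follow the same pattern...
        ChecklistItem(10, "Existential Risk Management Policies"),
    ]

    # Example: record that alignment work is underway today (pre-AGI).
    ITEMS[0].status[Stage.PRE_AGI] = "in progress"
    for item in ITEMS:
        print(f"({item.number}) {item.name}: {item.status[Stage.PRE_AGI]}")

Recall that the numbering carries no priority; encoding the list simply makes it harder for any of the ten items to silently fall off the agenda.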

Related Useful Research

For those further interested in the overall topic of AI Ethics checklists, a recent meta-analysis examined a large array of conventional AI checklists to see what they have in common, along with their differences. Furthermore, a notable aim of the study was to try and assess the practical nature of such checklists.

The research article is entitled “The Rise Of Checkbox AI Ethics: A Review” by Sara Kijewski, Elettra Ronchi, and Effy Vayena, AI and Ethics, May 2025, and proffered these salient points (excerpts):

  • “We identified a sizeable and highly heterogeneous body of different practical approaches to help guide ethical implementation.”
  • “These include not only tools, checklists, procedures, methods, and techniques but also a range of far more general approaches that require interpretation and adaptation such as for research and ethical training/education as well as for designing ex-post auditing and assessment processes.”
  • “Together, this body of approaches reflects the varying perspectives on what is needed to implement ethics in the different steps across the whole AI system lifecycle from development to deployment.”

Another insightful research study delves into the specifics of AGI-oriented AI ethics and societal implications, doing so in a published paper entitled “Navigating Artificial General Intelligence (AGI): Societal Implications, Ethical Considerations, and Governance Strategies” by Dileesh Chandra Bikkasani, AI and Ethics, May 2025, which made these key points (excerpts):

  • “Artificial General Intelligence (AGI) represents a pivotal advancement in AI with far-reaching implications across technological, ethical, and societal domains.”
  • “This paper addresses the following: (1) an in‐depth assessment of AGI’s transformative potential across different sectors and its multifaceted implications, including significant financial impacts like workforce disruption, income inequality, productivity gains, and potential systemic risks; (2) an examination of critical ethical considerations, including transparency and accountability, complex ethical dilemmas and societal impact; (3) a detailed analysis of privacy, legal and policy implications, particularly in intellectual property and liability, and (4) a proposed governance framework to ensure responsible AGI development and deployment.”
  • “Additionally, the paper explores and addresses AGI’s political implications, including national security and potential misuse.”

What’s Coming Next

Admittedly, getting AI makers to focus on AI ethics for conventional AI is already an uphill battle. Getting them to also attend to the similar but adjusted facets associated with AGI is certainly going to be as steep a climb, and probably even harder to promote.

One way or another, it is imperative and requires keen commitment.

We need to simultaneously focus on the near-term and deal with the AI ethics of conventional AI, while also giving due diligence to AGI ethics associated with the somewhat longer-term attainment of AGI. When I refer to the longer term, there is a great deal of debate about how far off in the future AGI attainment will happen. AI luminaries are brazenly predicting AGI within the next few years, while most surveys of a broad spectrum of AI experts land on the year 2040 as the more likely AGI attainment date.

Whether AGI is a few years away or perhaps fifteen years away, it is nonetheless a matter of vital urgency and the years ahead are going to slip by very quickly.

Eleanor Roosevelt is often credited with this famous remark about time: “Tomorrow is a mystery. Today is a gift. That is why it is called the present.” We need to be thinking about and acting upon AGI Ethics right now, presently, or else the future is going to be a mystery that is resolved in a manner we all will find entirely and dejectedly unwelcome.





How Nonprofits Can Harness AI Without Losing Their Mission



Artificial intelligence is reshaping industries at a staggering pace, with nonprofit leaders now facing the same challenges and opportunities as their corporate counterparts. According to a Harvard Business Review study of 100 companies deploying generative AI, four strategic archetypes are emerging—ranging from bold innovators to disciplined integrators. For nonprofits, the stakes are even higher: harnessing AI effectively can unlock access, equity, and efficiency in ways that directly impact communities.

How can mission-driven organizations adopt emerging technologies without compromising their purpose? And what lessons can for-profit leaders learn from nonprofits already navigating this balance of ethics, empowerment, and revenue accountability?

Welcome to While You Were Working, brought to you by Rogue Marketing. In this episode, host Chip Rosales sits down with futurist and technologist Nicki Purcell, Chief Technology Officer at Morgan’s. Their conversation spans the future of AI in nonprofits, the role of inclusivity in innovation, and why rigor and curiosity must guide leaders through rapid change.

The conversation delves into…

  • Empowerment over isolation: Purcell shares how Morgan’s embeds accessibility into every initiative, ensuring technology empowers both employees and guests across its inclusive parks, hotels, and community spaces.

  • Revenue with purpose: She explains how nonprofits can apply for-profit rigor—like quarterly discipline and expense analysis—while balancing the complexities of donor, grant, and state funding.

  • AI as a nonprofit advantage: Purcell argues that AI’s efficiency and cost-cutting potential makes it essential for nonprofits, while stressing the importance of ethics, especially around disability inclusion and data privacy.

Article written by MarketScale.



