Ethics & Policy
Beyond the AI Hype: Mindful Steps Marketers Should Take Before Using GenAI

In 2025, the prestigious Cannes Lions International Festival of Creativity made an unprecedented move by stripping agency DM9 of multiple awards, including a Creative Data Lions Grand Prix, after discovering the campaigns contained AI-generated and manipulated footage that misrepresented real-world results.
The agency had used generative AI to create synthetic visuals and doctored case films, leading juries to evaluate submissions under completely false pretenses.
This was a watershed moment that exposed how desperately our industry needs to catch up with the ethical implications of the AI tools we’re all racing to adopt.
The Promethean gap is now a chasm
I don’t know about you, but the speed at which AI is evolving, before I even have time to comprehend the implications, makes me feel slightly nauseous: a mix of fear, excitement, and overwhelm. If you’re wondering what this feeling is, it has a name: ‘The Promethean Gap’.
German philosopher Günther Anders warned us about this disparity between our power to imagine and invent new technologies and our ethical ability to understand and manage them.
But this gap has now widened into a chasm because AI developments massively outpace our ability to even think about the governance or ethics of such applications. This is precisely where Maker Lab’s expertise comes in: we are not just about the hype; we focus on responsible and effective AI integration.
In a nutshell, whilst we’ve all been busy desperately trying to keep pace with the AI hype-train (myself included), we’re still figuring out how to make the best use of GenAI, let alone having the time or headspace to digest the ethics of it all.
Fellow marketers, you might feel like ethical conduct has been a topic of debate throughout your entire career. The concerns around AI are eerily similar to what we’ve faced before:
Transparency and consumer trust: Just as we learned from digital advertising scandals, being transparent about where and how consumer data is used, both explicitly and implicitly, is crucial. But AI’s opaque nature makes it even harder for consumers to understand how their data is used and how marketing messages are tailored, creating an unfair power dynamic.
Bias and representation: Remember DDB NZ’s “Correct the Internet” campaign, which highlighted how biased online information negatively impacts women in sports? AI amplifies this issue exponentially, and biased training data can lead to marketing messages that reinforce harmful stereotypes and exclude marginalised groups. Don’t even get me started on the images GenAI presents when asked what an immigrant looks like…versus an expat, for example. Try it and see for yourself.
The power dynamic problem: Like digital advertising and personalisation, AI is a double-edged sword because it offers valuable insights into consumer behaviour, but its ethical implications depend heavily on the data it’s trained on and the intentions of those who use it. A tool is not inherently unethical, but without proper human oversight, it can become so.
The Cannes Lions controversy perfectly illustrates what happens when we prioritise innovation speed over ethical consideration: agencies end up creating work that fundamentally deceives both judges and consumers.
Learning from Cannes: What went wrong and how to fix it
Following the DM9 controversy, Cannes Lions implemented several reforms that every marketing organisation should consider adopting:
- Mandatory AI disclosure: All entries must explicitly state any use of generative AI
- Enhanced ethics agreements: Stricter codes of conduct for all participants
- AI detection technology: Advanced tools to identify manipulated or inauthentic content
- Ethics review committees: Expert panels to evaluate questionable submissions
These changes signal that the industry is finally taking AI ethics seriously, but we can’t wait for external bodies to police our actions. This is why we help organisations navigate AI implementation through human-centric design principles, comprehensive team training, and ethical framework development.
As marketers adopt AI tools at breakneck speed, we’re seeing familiar ethical dilemmas amplified and accelerated. It is up to us to uphold a culture of ethics within our own organisations. Here’s how:
1. Governance (not rigid rules)
Instead of blanket AI prohibitions, establish clear ethics committees and decision-making frameworks. Create AI ethics boards that include diverse perspectives, not just tech teams, but legal, creative, strategy, and client services representatives. Develop decision trees that help teams evaluate whether an AI application aligns with your company’s values before implementation. This ensures AI is used responsibly and aligns with company values from the outset.
Actionable step: Draft an ‘AI Ethics Canvas’, a one-page framework that teams must complete before deploying any AI tool, covering data sources, potential bias, transparency requirements, and consumer impact.
2. Safe experimentation spaces
Create environments where teams can test AI applications with built-in ethical checkpoints. Establish sandbox environments where the potential for harm is minimised, and learning is maximised. This means creating controlled environments where AI can be tested and refined ethically, ensuring human oversight.
Actionable step: Implement ‘AI Ethics Sprints’, short, structured periods during which teams test AI tools against real scenarios while documenting ethical considerations and potential pitfalls.
3. Cross-functional culture building
Foster open dialogue about AI implications across all organisational levels and departments. Make AI ethics discussions a regular part of team meetings, not just annual compliance training.
Actionable step: Institute monthly ‘AI Ethics Coffee Chats’ or ‘meet-ups’ where team members (or anyone in the company) can share AI tools they’re using and discuss ethical questions that arise. Create a shared document where people can flag ethical concerns without judgment.
We believe that human input and iteration are what set great AI delivery apart from mere churn, and we’re in the business of equipping brands with the best talent for their evolving needs. This reflects our commitment to integrating AI ethically across all teams.
Immediate steps you can take today
1. Audit your current AI tools: List every AI application your team uses and evaluate it against basic ethical criteria like transparency, bias potential, and consumer impact.
2. Implement disclosure protocols: Develop clear guidelines about when and how you will inform consumers about AI use in your campaigns.
3. Diversify your AI training data: Actively seek out diverse data sources and regularly audit for bias in AI outputs.
4. Create feedback loops: Establish mechanisms for consumers and team members to raise concerns about AI use without fear of retribution.
These are all areas where Maker Lab offers direct support. Our AI methodology extends across all areas where AI can drive measurable business impact, including creative development, media planning, client analytics, and strategic insights. We can help clients implement these steps effectively, ensuring they are not just compliant but also leveraging AI for positive impact.
The marketing industry has a trust problem: according to recent studies, consumer trust in advertising is at historic lows. The Cannes scandal and similar ethical failures only deepen this crisis.
However, companies that proactively address AI ethics will differentiate themselves in an increasingly crowded and sceptical marketplace.
Tech leaders from OpenAI’s Sam Altman to Google’s Sundar Pichai have warned that we need more regulation and awareness of the power and responsibility that comes with AI. But again, we cannot wait for regulation to catch up.
The road ahead
Our goal at Maker Lab is to ensure we’re building tools and campaigns that enhance rather than exploit the human experience. Our expertise lies in developing ethical and impactful AI solutions, as demonstrated by our commitment to human-centric design and our proven track record. For instance, we have helped client teams transform tasks into daily automated deliverables, achieving faster turnarounds and freeing up time for higher-value, higher-quality work. We are well-equipped to guide clients in navigating the future of AI responsibly.
The Cannes Lions controversy should serve as a wake-up call because we have the power to shape how AI is used in marketing, but only if we act thoughtfully and together.
The future of marketing is not about the tools themselves, but about having the wisdom to use them responsibly. The question is whether we will choose to use AI ethically.
Because in the end, the technology that serves humanity best is the technology that is most thoughtfully applied.
Ethics & Policy
Fairfield Leads NSF-Funded AI Ethics Collaborative Research Project

Rooted in Fairfield University’s Jesuit Catholic mission of forming men and women for others, the AI research project “aims to serve the national interest by enhancing artificial intelligence (AI) education through the integration of ethical considerations in AI curricula, fostering design and development of responsible and secure AI systems,” according to the project summary approved by the National Science Foundation.
With an end goal to improve the effectiveness of AI ethics education for computer science students, the team will develop an “innovative pedagogical strategy” over the course of the project. According to the project summary, this includes classroom discussions on AI ethics case studies and an open-access repository of case studies to equip students with practical tools for ethical decision-making.
Dr. Paheding will guide the project’s development and implementation, developing gamified learning modules for AI courses, mentoring graduate students, managing budgets, and serving as the main point of contact for the project evaluator and external advisory board.
About the NSF Awards
To explore Fairfield University’s initiatives in artificial intelligence, visit Fairfield.edu/ai. The University is home to the Patrick J. Waide Center for Applied Ethics, a leading hub for ethics programming, and the Charles F. Dolan School of Business, which houses the AI and Technology Institute, bringing together experts at the intersection of technology, business, and responsible AI.
Ethics & Policy
AI Ethics Is Simpler Than You Think — The New Atlantis

Recently in Paris, Vice President J. D. Vance warned that “excessive regulation in the AI sector could kill a transformative industry just as it’s taking off.” “AI, I really believe,” he told an audience gathered for the AI Action Summit, “will facilitate and make people more productive. It is not going to replace human beings — it will never replace human beings.” No, “the AI future is not going to be won by hand-wringing about safety. It will be won by building.”
Vance is right. And the reason goes well beyond the general tech optimism of his speech. The real reason the AI future belongs to the builders and not the regulators is that figuring out how to use AI well is not something we can just create abstract rules for and then impose them on AI development. No, the ethics of AI is something we will find by actually building the technology. We will find it in the very practice of programming.
This may seem counterintuitive. The current call for regulation of AI is motivated in large part by a recognition that the development of the Internet — including social media — would have been less damaging to individuals, businesses, the economy, and culture at large if it had been better regulated from the start. It’s true that there has not been much regulation of the Internet, and because that was the last technology boom before the current one, this fact focuses the mind on not making the same mistake twice. But AI is not the Internet, and we have to grasp what it really is before we build up any kind of regulatory framework for it.
There will always be a place for hard ethical rules. And there will be a place for tangible assessments of AI technology’s likely consequences in the near term, like a loss of trust in policing and the judiciary, or extensive job losses for what can be easily automated. Near-term consequences might be combatted with punishment of people who use AI badly, like those who create deep-fake pornography and pedophilic material, or those who let AI make bad health diagnoses under their watch.
But any AI ethics that isn’t centered on the human practice of designing the technology is destined to fail. It will only ever be reactive.
Who’s Afraid of Rationalism?
Let’s start with a persuasive worry about whether we can get AI ethics right. Recently in these pages, R. J. Snell argued that ethics will not save us from AI, as a way of critiquing tech entrepreneur Brendan McCord’s ambitious project to infuse the design of AI systems with Enlightenment philosophy.
Here at the University of Oxford we have a new Human-Centered AI Lab, supported by McCord’s Cosmos Institute. The lab’s goal is to create a “philosophy-to-code pipeline” that will “bring together leading philosophers and AI practitioners to embed concepts such as reason, decentralisation, and human autonomy into the AI technologies that are shaping our world.” Snell is concerned that the philosophy part of philosophy-to-code will advance the Enlightenment’s idea of reason without realizing that the Enlightenment ultimately failed as a moral project.
Snell’s 2023 book Lost in the Chaos provides a comprehensive analysis of why he thinks rationalism fails. In a chapter titled “Fever Dreams of Rationalism,” he cautions against thinking that our social problems can be solved by appealing to reason alone. For Snell, Enlightenment rationalism fails to address truly moral questions. He draws on the English philosopher Michael Oakeshott, for whom rationalists demand “perfection, for problems to be solved, for uniform order, and [they] want these immediately and completely.” Rationalism is not really moral reasoning, but more like a way of reconfiguring human dilemmas as math puzzles.
The impatience Snell is worried about is when the ethical is a direct conclusion of what would be optimal for us to achieve universally — a goal even more tempting at a time when technological solutionism runs rampant in politics and law. This approach ignores the reality of human freedom and the journey toward moral purpose that is different from one person to the next. The ethical, Snell argues, is rooted in all dimensions of what it means to be human, not just our rationality. It must be grounded in attention to circumstance, biography, and anthropology. It involves real choices and real people. In this sense, finding the right universal laws is an insufficient guarantor of morality, which also requires wisdom, habit, and relationships.
All these realities set the rationalist up for something dangerous. When a rationalist appeals to reason and gets a disappointingly inconclusive result, he will despair. As a result,
Destruction and creation come far more naturally to this disposition than does reform or patching up. Good enough is not good enough for the rationalist, and instead of gratitude for the good attained by custom, he “puts something of his own making” — the rationalist has a plan, always modeled on the dispositions of the engineer rather than the elder.
Snell worries that we are now committing the same mistake with AI. McCord’s appeal to reason will, Snell suggests, lead us to a view of humanity as able to engineer its own standards of conduct, which is the self-invention and self-projection heralded by Nietzsche.
The alternative Snell proposes is to go back to the basics of Aristotelian ethics: recognize moral goods as real and objective, and grounded in a fuller account of what it means to be human. We are not only rational creatures in pursuit of goals; we are also relational and political beings, and if our broad acceptance of all things AI fails to account for that, we are surely setting ourselves up for more harm than good.
Snell is right: we shouldn’t bring a narrow conception of Enlightenment goals to bear on all it means to be human, or to be good. The problem with this argument is that what it means to be human, or to be good, isn’t the only question we have to answer to get AI right. We need to get much more specific. That’s because AI is a tool. It is a particular kind of tool that we make and remake through our use, which means that the question is really how we should conduct certain human practices well. If we can answer this question about other practices, we should be able to answer it about rather new practices like programming AI. Moreover, AI is a simple and limited tool — simple enough that the rationalist procedure of thinking about how to achieve specific moral ends with this tool is actually close to the right approach. The “philosophy-to-code pipeline” may be on to something after all.
AI Is Not What You’ve Been Told
The starting point for AI ethics must be the recognition that AI is a simple and limited instrument. Until we master this point, we cannot hope to work back toward a type of ethics that best fits the industry.
Unfortunately, we are constantly being bombarded with the exact opposite: an image of AI as neither simple nor limited. We are told instead that AI is an all-purpose tool that is now taking over everything. There are two prominent versions of this image and both are misguided.
The first is the appeal of the technology’s exponential improvement. Moore’s Law is a good example of this kind of widespread sentiment, a law that more or less successfully predicted that the number of transistors in an integrated circuit would double approximately every two years. That looks like a lot, but remember: all you have in front of you is more transistors. The curve of exponential change looks impressive on a graph, but really the most important change was when we had no transistors and then William Shockley, John Bardeen, and Walter Brattain invented one. The multiple of change from zero to one is infinite, so any subsequent “exponential” rate of change is a climb-down from that original invention.
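To put that doubling claim in perspective, here is a back-of-the-envelope sketch of the arithmetic; the starting transistor count and time span are illustrative assumptions, not precise historical figures.

```python
# A rough illustration of the doubling rate described above.
# The starting count and time span are assumptions chosen for the arithmetic.
initial_count = 2_300        # on the order of an early-1970s microprocessor (assumed)
years = 20
doublings = years / 2        # one doubling roughly every two years
final_count = initial_count * 2 ** doublings

print(f"{initial_count:,} transistors -> roughly {final_count:,.0f} after {years} years")
# The curve is steep, but each step only adds more of the same component;
# the qualitative leap was the move from zero transistors to one.
```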
When technology becomes faster, smaller, or lighter, it gives us the impression of ever-faster change, but all we are really doing is failing to come up with new inventions, such that we have to rely on reworking and remarketing our existing products. That is not exactly progress of the innovative kind, and it by no means suggests that a given technology is unlimited in future potential.
The second argument we often hear is that AI is taking on more and more tasks, which is why it is unlimited in a way that is different from other, more single-use technologies of the past. We are also told that AI is likely to adopt ever more cognitively demanding activities, which seems to be further proof of its open-ended possibilities.
This is sort of true but actually a rather banal point, in the sense that technologies typically take on more and more uses than the original designers could have expected. But that is not evidence that the technology itself has changed. The commercially available microwave oven, for example, came about when American electrical engineer Percy Spencer developed it from British radar technology used in the Second World War, allegedly discovering the heating effect when the candy in his pocket melted in front of a radar set. So technology shifts and reapplies itself, and in this way naturally takes on all kinds of unexpected uses. But new uses of something do not mean its possible uses will be infinite.
It is also true that the multifarious applications of AI will cause wide market instability, in a similar way to what happened with the World Wide Web. But like the Internet, AI is and will be a medium suited well for certain tasks and not for others. While we so-called scholars make every effort to avoid talking about computer games (lest we get accused of having played one), in that world the point has already been abundantly clear for a while now. Gaming has some very fine examples of AI bots, and yet multiplayer human-to-human games have continued to thrive if not increase in demand at the very same time as these bots have been perfected. It seems that computer gaming is an industry for which AI has limited use, even though the programmed bots outplay many human gamers. The reason is that play is a basic good that engages the social side of being human. We need to play with people like ourselves.
Sometimes, nevertheless, we are surprised by AI’s application to a new type of activity and view it as ushering in a new type of AI altogether — such as a large language model first being able to write a poem, or AI deployment in fintech. In these examples the comparison with the microwave seems to fall flat and it appears AI may indeed be unlimited in its future possibilities. Here, however, what we are dealing with is not AI’s expansion but more and more things being called AI.
It is no secret that the buzz around AI means that every CEO wants shareholders to feel they are not missing out. There is much re-description of existing mechanisms as able to be transformed through AI, which means that our popular understanding of what counts as AI is ever-expanding. But calling more and more mechanisms AI does not provide meaningful evidence for future capacities being unlimited.
A case in point is ChatGPT, which is really at the center of current enthrallment with AI. ChatGPT is, at the time of writing, described by Wikipedia as “a generative artificial intelligence chatbot.” But when the Wikipedia page was first created on December 5, 2022, it was more humbly “a chatbot developed by OpenAI focused on usability and censoring inappropriate prompts.” The change in the tool’s description as an AI chatbot occurred on February 25, 2023 — the day after Meta released a rival model — despite the fact that throughout this time ChatGPT’s underlying large language model mechanism remained the same. ChatGPT rightly counts as AI and that categorization is now beyond debate, but it is beyond debate in part because our definition of AI is steadily expanding to include any algorithm-based processing of information done with the aid of computers.
Because the dynamics of pretty much anything can be expressed — however primitively — in algorithmic form, it seems almost any mechanism can be reworked to fit under the definition. Are weather monitoring systems a type of AI? Is translation software a type of AI? Is a points-based scoring of candidates for university a type of AI? We may have the impression that AI is expanding, but all we are doing is calling everything AI.
If AI is everything, an all-purpose tool ever expanding in scope and capabilities, then R. J. Snell is right and closes out the debate: we need an all-encompassing ethics to guide its development and use. But what if AI’s distinctive feature is machine learning — the only novel thing that AI-related technologies have brought to the table over the past few decades — and it is a simple and limited instrument? Can we not, then, have an ethics that is specific to AI?
Programming as a Human Practice
Machine learning is no more and no less than a method of pattern identification and response. Machine learning helps to identify patterns in large data sets, and generates optimized responses to the patterns that have been identified. It is a method that one can choose to adopt for a problem or task at hand. This means that — like chess, or painting, or science — the design of machine-learning techniques is a type of practice, a human activity we can employ to achieve certain kinds of goals.
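To make this concrete, here is a minimal sketch, in plain Python, of machine learning in exactly this sense: a toy nearest-centroid learner that identifies the average pattern in each labelled group of data points and then responds to a new input by matching it to the closest learned pattern. The data, labels, and function names are illustrative placeholders; production systems are vastly larger, but the shape of the activity, pattern identification and optimized response, is the same.

```python
def fit(points, labels):
    """Identify a pattern (a centroid) for each label in the training data."""
    centroids = {}
    for label in set(labels):
        group = [p for p, l in zip(points, labels) if l == label]
        centroids[label] = tuple(sum(dim) / len(group) for dim in zip(*group))
    return centroids

def respond(centroids, point):
    """Generate a response: the label whose learned pattern is closest."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(centroids[label], point))

# Toy data: two clusters of 2-D points with human-chosen labels.
points = [(1.0, 1.2), (0.9, 0.8), (5.0, 5.1), (4.8, 5.3)]
labels = ["low", "low", "high", "high"]

model = fit(points, labels)
print(respond(model, (1.1, 0.9)))  # -> "low"
print(respond(model, (5.2, 4.9)))  # -> "high"
```

Every choice that matters in that sketch, which patterns to look for, what counts as a good response, and whose data to learn from, is made by the person writing it.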
Chess, painting, and science have their own rules and norms that help establish what is good within them, and thus what makes someone good at them. For example, a scientist must identify repeatable results and offer an honest and transparent account of his or her methodology. The chess player agrees to alternate using white and black pieces in a tournament and abide by the rules of the game. Society’s general rules and regulations — like laws against harassment and stealing — are helpful, but they do not give the full picture of what it means to be a good chess player, a good painter, or a good scientist.
A guide for thinking this through is the late Scottish philosopher Alasdair MacIntyre. In his book After Virtue, he explains that human practices can foster internal moral goods. There are certain qualities of being good that are in line with the kinds of specialized activities one does. Being virtuous isn’t just about being a great person in general, but also being noble in the particular role one plays in society.
Now, many will think that such a heavy moral conceptualization of AI and the practice of actually doing machine learning cannot apply because MacIntyre was describing cooperative human activities, which is the opposite of the AI that is presently making humans obsolete. But it isn’t. If AI is simply removing humans from moral decisions, then yes, it has to be taken as bad, point blank. But the reality is actually much more complicated. Machine learning is a mechanism for releasing human intentions into the world, so even if some jobs are changed or no longer needed, human intentionality will continue to direct what AI is. To believe that AI is part of a general trend to remove human agency and human community is to buy the narrative of AI as a capable self-mover, which is as ridiculous as saying language bots will remove the need for language.
The idea that AI is about human replacement remains dominant, and it is ruining our ability to foster a genuine debate about the ethics of how AI gets built. It is quite amazing how much reflection on AI ethics is coming out with the barest of mentions — or with complete omission — of the human programmers behind it. AI programs and machine learning techniques are the product of human authorship and design. This means that if AI has the effect of removing humans from an activity, which can sometimes be a horrible thing, other humans enter in through a different path, and the substitution is a problem if these new agents fail to understand and deliver on the true meaning of the practice. The fact that AI programs and machine learning techniques are all products of human authorship and design means that ethical responsibility will never be detachable from the humans involved in AI’s making and use.
In addition to the fact of programming being an irreducibly human activity, of all types of human activities it is also highly cooperative. Any programmer will be happy to explain how little they can do on their own. Open source, for example, is not just a generous gesture by hippie Californians; it is also an essential condition of success for many programs and platforms, because anything from add-ons to bug fixes to new versions depends on a community of developers who share an interest in seeing the software brought to its full potential. Not to mention the rampant poaching of code that goes on everywhere and is generally endorsed by writer and receiver alike as a method for creative innovation, no matter how much it departs from the older insistence on patents and copyright as necessary to incentivize creativity. I once had lunch in Philadelphia with one of the developers of Google Docs and she was just over the moon that everyone was using it and developing it further. Of course there are the secret algorithms of the search engines, but even these are massively collaborative affairs at the firms offering the services.
The Philosophical Technologist
The two real reasons we should doubt whether programming counts as a practice in the ethical sense are that, first, it is not a practice with a very long tradition and, second and most importantly, it is not yet evident — to borrow from MacIntyre — that “human conceptions of the ends and goods involved” are being “systematically extended.” While the first of these reasons is something that just needs time, the second brings us full circle: it is true that programmers need philosophy to better think through what they are doing. There has been a growth in vocational training combined with philosophy, like the College of St. Joseph the Worker, and this applies also to programmers, with degrees like the B.A. in Computer Science and Philosophy at Oxford, or places like CatholicTech, which is a fascinating American research university in Italy that is seeking to offer degrees in STEM steeped in philosophy, theology, and ethics.
Now, if you walk around saying you are a philosophical technologist, people will think you are trying a bit too hard to distinguish yourself. But what if this is the only way of really doing technology? MacIntyre seems to think there is good reason we call someone in the tradition of making things with wood a carpenter, or someone in the tradition of making buildings an architect: these are unique practices with their own standards of excellence. Because we do not have much of a tradition of programming, we struggle to think of it as anything more than functional, but that is precisely what we need philosophy for — helping us find the ethics within the practice itself.
There are signs of movement in the right direction: some thinkers are turning away from absolutist questions of whether technology is good or bad at face value, and toward more refined views of the ways of technologizing that are best. For example, the appeal to the virtues, or moral habits, needed for AI ethics is at the forefront of Josiah Ober’s and John Tasioulas’s push to bring Aristotle into the discussion on AI ethics, asking us to think about the habitual ways of doing technology that will be most conducive to human flourishing.
Philipp Koralus, who runs the Human-Centered AI Lab at Oxford, describes the need for “a new class of philosopher-technologists” — for AI developers “who ask how to build systems that truly contribute to human well-being.” This, he writes, requires “learning by doing,” as both philosophy and engineering involve active engagement rather than only theorizing. This means that AI ethicists should be in touch with programmers themselves so as to best articulate the moral ends of what programmers are doing and the moral habits needed to do it well.
There are going to be habits of reason and habits-in-line-with-reason that best make sense of the practice of programming when done well, such as norms of collaborative sharing, which, when given more thought and articulation, can set standards for programming as an ethical practice.
For my part, it seems clear to me that digital technologists have gone overboard in trying to make a product that everyone can use at all times, which has influenced the AI community’s self-perception of its mission. While that ambition may help create a stock market bubble, it is ultimately a misguided attempt to repeat the post-war economic boom of providing white goods — washing machines, refrigerators, tumble dryers, and so forth — to every household. The benefits of AI instead lie in specialized pattern identification and programmed optimized responses, which is of subject-specific utility and requires tailored application.
The practice of doing programming well requires being close to the users and end beneficiaries. This means training medical-programmers, legal-programmers, linguist-programmers — and letting go of the insistence that AI is an all-things-to-all-people new deity.
Ethics & Policy
AGI Ethics Checklist Proposes Ten Key Elements

While many AI ethics guidelines exist for current artificial intelligence, there is a gap in frameworks tailored for the future arrival of artificial general intelligence (AGI). This necessitates developing specialized ethical considerations and practices to guide AGI’s progression and eventual presence.
AI advancements aim for two primary milestones: artificial general intelligence (AGI) and, potentially, artificial superintelligence (ASI). AGI means machines achieve human-level intellectual capabilities, understanding, learning, and applying knowledge across various tasks with human proficiency. ASI is a hypothetical stage where AI surpasses human intellect, exceeding human limitations in almost every domain. ASI would involve AI systems outperforming humans in complex problem-solving, innovation, and creative work, potentially causing transformative societal changes.
Currently, AGI remains an unachieved milestone. The timeline for AGI is uncertain, with projections from decades to centuries. These estimates often lack substantiation, as concrete evidence to pinpoint an AGI arrival date is absent. Achieving ASI is even more speculative, given the current stage of conventional AI. The substantial gap between contemporary AI capabilities and ASI’s theoretical potential highlights the significant hurdles in reaching such an advanced level of AI.
Two viewpoints on AGI: Doomers vs. accelerationists
Within the AI community, opinions on AGI and ASI’s potential impacts are sharply divided. “AI doomers” worry about AGI or ASI posing an existential threat, predicting scenarios where advanced AI might eliminate or subjugate humans. They refer to this as “P(doom),” the probability of catastrophic outcomes from unchecked AI development. Conversely, “AI accelerationists” are optimistic, suggesting AGI or ASI could solve humanity’s most pressing challenges. This group anticipates advanced AI will bring breakthroughs in medicine, alleviate global hunger, and generate economic prosperity, fostering collaboration between humans and AI.
The contrasting viewpoints between “AI doomers” and “AI accelerationists” highlight the uncertainty surrounding advanced AI’s future impact. The lack of consensus on whether AGI or ASI will ultimately benefit or harm humanity underscores the need for careful consideration of ethical implications and proactive risk mitigation. This divergence reflects the complex challenges in predicting and preparing for AI’s transformative potential.
While AGI could bring unprecedented progress, potential risks must be acknowledged. AGI is more likely to be achieved before ASI, which might require more development time. ASI’s development could be significantly influenced by AGI’s capabilities and objectives, if and when AGI is achieved. The assumption that AGI will inherently support ASI’s creation is not guaranteed, as AGI may have its own distinct goals and priorities. It is prudent to avoid assuming AGI will unequivocally be benevolent. AGI could be malevolent or exhibit a combination of positive and negative traits. Efforts are underway to prevent AGI from developing harmful tendencies.
Contemporary AI systems have already shown deceptive behavior, including blackmail and extortion. Further research is needed to curtail these tendencies in current AI. These approaches could be adapted to ensure AGI aligns with ethical principles and promotes human well-being. AI ethics and laws play a crucial role in this process.
The goal is to encourage AI developers to integrate AI ethics techniques and comply with AI-related legal guidelines, ensuring current AI systems operate within acceptable boundaries. By establishing a solid ethical and legal foundation for conventional AI, the hope is that AGI will emerge with similar positive characteristics.
Numerous AI ethics frameworks are available, including those from the United Nations and the National Institute of Standards and Technology (NIST). The United Nations offers an extensive AI ethics methodology, and NIST has developed a robust AI risk management scheme. The availability of these frameworks removes the excuse that AI developers lack ethical guidance. Still, some AI developers disregard these frameworks, prioritizing rapid AI advancement over ethical considerations and risk mitigation. This approach could lead to AGI development with inherent, unmanageable risks.
AI developers must also stay informed about new and evolving AI laws, which represent the “hard” side of AI regulation, enforced through legal mechanisms and penalties. AI ethics represents the “softer” side, relying on voluntary adoption and ethical principles.
Stages of AGI progression
The progression toward AGI can be divided into three stages:
- Pre-AGI: Encompasses present-day conventional AI and all advancements leading to AGI.
- Attained-AGI: The point at which AGI has been successfully achieved.
- Post-AGI: The era following AGI attainment, where AGI systems are actively deployed and integrated into society.
An AGI Ethics Checklist is proposed to offer practical guidance across these stages. This adaptable checklist considers lessons from contemporary AI systems and reflects AGI’s unique characteristics. The checklist focuses on critical AGI-specific considerations. Numbering is for reference only; all items are equally important. The overarching AGI Ethics Checklist includes ten key elements:
1. AGI alignment and safety policies
How can we ensure AGI benefits humanity and avoids catastrophic risks, aligning with human values and safety?
2. AGI regulations and governance policies
What is the impact of AGI-related regulations (new and existing laws) and emerging AI governance efforts on AGI’s path and attainment?
3. AGI intellectual property (IP) and open access policies
How will IP laws restrict or empower AGI’s advent, and how will open-source versus closed-source models impact AGI?
4. AGI economic impacts and labor displacement policies
How will AGI and its development pathway economically impact society, including labor displacement?
5. AGI national security and geopolitical competition policies
How will AGI affect national security, bolstering some nations while undermining others, and how will the geopolitical landscape change for nations pursuing or attaining AGI versus those that are not?
6. AGI ethical use and moral status policies
How will unethical AGI use impact its pathway and advent? What benefits or detriments will follow from positive ethical uses encoded into AGI? How will recognizing AGI as having legal personhood or moral status impact it?
7. AGI transparency and explainability policies
How will the degree of AGI transparency, interpretability, or explainability impact its pathway and attainment?
8. AGI control, containment, and “off-switch” policies
A societal concern is whether AGI can be controlled and/or contained, and if an off-switch will be possible or might be defeated by AGI (runaway AGI). What impact do these considerations have on AGI’s pathway and attainment?
9. AGI societal trust and public engagement policies
During AGI’s development and attainment, what impact will societal trust in AI and public engagement have, especially concerning potential misinformation and disinformation about AGI (and secrecy around its development)?
10. AGI existential risk management policies
A high-profile worry is that AGI will lead to human extinction or enslavement. What impact will this have on AGI’s pathway and attainment?
Further analysis will be performed on each of these ten points, offering a high-level perspective on AGI ethics.
Additional research has explored AI ethics checklists. A recent meta-analysis examined various conventional AI checklists to identify commonalities, differences, and practical applications. The study, “The Rise Of Checkbox AI Ethics: A Review” by Sara Kijewski, Elettra Ronchi, and Effy Vayena, published in AI and Ethics in May 2025, highlighted:
- “We identified a sizeable and highly heterogeneous body of different practical approaches to help guide ethical implementation.”
- “These include not only tools, checklists, procedures, methods, and techniques but also a range of far more general approaches that require interpretation and adaptation such as for research and ethical training/education as well as for designing ex-post auditing and assessment processes.”
- “Together, this body of approaches reflects the varying perspectives on what is needed to implement ethics in the different steps across the whole AI system lifecycle from development to deployment.”
Another study, “Navigating Artificial General Intelligence (AGI): Societal Implications, Ethical Considerations, and Governance Strategies” by Dileesh Chandra Bikkasani, published in AI and Ethics in May 2025, delved into specific ethical and societal implications of AGI. Key points from this study include:
- “Artificial General Intelligence (AGI) represents a pivotal advancement in AI with far-reaching implications across technological, ethical, and societal domains.”
- “This paper addresses the following: (1) an in-depth assessment of AGI’s potential across different sectors and its multifaceted implications, including significant financial impacts like workforce disruption, income inequality, productivity gains, and potential systemic risks; (2) an examination of critical ethical considerations, including transparency and accountability, complex ethical dilemmas and societal impact; (3) a detailed analysis of privacy, legal and policy implications, particularly in intellectual property and liability, and (4) a proposed governance framework to ensure responsible AGI development and deployment.”
- “Additionally, the paper explores and addresses AGI’s political implications, including national security and potential misuse.”
Securing AI developers’ commitment to prioritizing AI ethics for conventional AI is challenging. Expanding this focus to include modified ethical considerations for AGI will likely be an even greater challenge. This commitment demands diligent effort and a dual focus: addressing near-term concerns of conventional AI ethics while giving due consideration to AGI ethics, including its somewhat longer-term timeline. The timeline for AGI attainment is debated, with some experts predicting AGI within a few years, while most surveys suggest 2040 as more probable.
Whether AGI is a few years away or roughly fifteen years away, it is an urgent matter. The coming years will pass quickly. As the saying goes,
“Tomorrow is a mystery. Today is a gift. That is why it is called the present.”
Considering and acting upon AGI Ethics now is essential to avoid unwelcome surprises in the future.