Ethics & Policy
When it comes to AI incidents, safety and security are not the same

The OECD common reporting framework for AI Incidents is fast progressing towards becoming the basis for a standard for AI incident reporting, with ETSI and ISO considering building upon it. The framework significantly enhances the ability to share data about AI incidents and hazards within the AI ecosystem.
However, one obstacle could block the framework’s broad adoption by the security community: it lacks the information security teams need to handle cybersecurity issues well.
| Non-security AI Incidents – Accidental | AI Security Incidents – Intentional (caused by an attacker) |
|---|---|
| “Traffic Camera Misread Text on Pedestrian’s Shirt as License Plate, Causing UK Officials to Issue Fine to an Unrelated Person” (AIID) | “AI Face-Swapping Allegedly Used to Bypass Facial Recognition and Commit Financial Fraud in China” (AIID, AIM) |
| “Alexa Recommended Dangerous TikTok Challenge to Ten-Year-Old Girl” (AIID) | “‘Jewish Baby Strollers’ Provided Anti-Semitic Google Images, Allegedly Resulting from Hate Speech Campaign” (AIID) |
| “Character.ai Chatbot Allegedly Influenced Teen User Toward Suicide Amid Claims of Missing Guardrails” (AIID, AIM) | “Alleged Fraudulent Prompts via AIXBT Dashboard Led Purported AI Trading Agent to Transfer 55.5 ETH from Simulacrum Wallet” (AIID) |
After decades of hard-fought advances, the security community broadly knows what a good process looks like. It is still figuring out, however, how best to address security incidents involving AI. One challenge is obtaining information about how an attacker may produce harmful events, i.e., incidents, which is central to security processes. Without information about the capacity malevolent actors have to exploit a vulnerability, the security community cannot weigh the benefits of disclosing weaknesses against the cost of handing attackers greater knowledge.
To speed up adoption of the common incident expression, it would help to either (a) explicitly set security incident reporting aside or (b) provide extensions to the OECD framework that are conditional on, and necessary for, AI security reporting. The latter is in line with the flexibility the OECD framework provides and would benefit from the international consensus the framework has gathered.
Let’s have a more detailed look.

Square and round pegs
To be clear, the practice of “security” may be viewed as a subset of “safety” with the additional challenge of addressing malicious actors. Despite being part of the broader “safety” ecosystem, recent standardisation efforts have exposed tensions between the product safety and security communities. In particular, cybersecurity practitioners entering product safety spaces can become frustrated with what they see as insufficient attention to adversarial threats. Although adversarial threats can be an element of product safety, not all AI incidents involve adversaries. The OECD framework’s scope, covering everything from deprivation of fundamental human rights to injuries caused by robotics, is much broader than the specific requirements of security incidents.
Safety communities and the problems they address

| | Someone or something could be impacted… | …by people wanting to do bad things… | …involving an AI system… | …that is attacked… | …or is used to attack. |
|---|---|---|---|---|---|
| AI Safety | ✓ | Maybe | ✓ | Maybe | Maybe |
| AI Security | ✓ | ✓ | ✓ | ✓ | Maybe |
| Computer/Cybersecurity | ✓ | ✓ | Maybe | Maybe | Maybe |
| Offensive AI | ✓ | ✓ | ✓ | Maybe | ✓ |
>> REPORT: Towards a common reporting framework for AI incidents <<
The common data fields useful across all AI incidents are necessary for AI security incident reporting, but they are not sufficient on their own. Conversely, attempts to force cybersecurity norms onto non-security incidents have proven equally problematic. When security professionals require detailed attack documentation for human rights or robotics incidents, they risk obscuring the essential elements that make the OECD framework valuable for broader AI incident reporting. Just as a leaf of lettuce can kill without anyone attacking the food supply, AI products can kill without any bad intentions.
Unfortunately, the square peg of security methodology doesn’t fit neatly into the round hole of common incident reporting. Still, the common expression of the OECD framework includes necessary information for assessing the impact of AI security incidents.

We can and should collect the fields of the common incident expression for AI security incidents, but we should also identify what the expression lacks that cybersecurity requires.
Understanding the distinction between common AI incidents and security data
The first thing to understand is that while all security incidents are harm events, not all harm events are security incidents. For example, a facial recognition system that systematically misidentifies people with certain facial features as non-human may harm people with those features. But people may also modify their features to become invisible to the same cameras deployed as a home security system. In the latter case, where deception is involved, the harm and the processes for mitigating its recurrence depend on how easily people can modify their features and how widely the exploit is known. On a technical level, they also depend on how commonplace the security camera is and whether other products share the same components. Today, these aspects fall outside the scope of the OECD’s incidents framework.
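To make that distinction concrete, here is a minimal sketch in Python. Every field name below is hypothetical, chosen only to mirror the criteria in the example above; none of it comes from the OECD framework or any standard.

```python
from dataclasses import dataclass

@dataclass
class IncidentReport:
    # Hypothetical fields for illustration; none of these names are
    # defined by the OECD framework.
    harm_occurred: bool
    deception_involved: bool       # did someone deliberately fool the system?
    exploit_ease: str              # "trivial" | "moderate" | "hard"
    exploit_publicly_known: bool
    shared_component_count: int    # rough count of products sharing the component

def is_security_incident(report: IncidentReport) -> bool:
    # All security incidents are harm events, but not the reverse:
    # deliberate deception is what moves a harm event into security territory.
    return report.harm_occurred and report.deception_involved

# The home-security-camera evasion described above would classify as a
# security incident; the accidental misidentification would not.
evasion = IncidentReport(
    harm_occurred=True,
    deception_involved=True,
    exploit_ease="moderate",
    exploit_publicly_known=False,
    shared_component_count=40_000,
)
assert is_security_incident(evasion)
```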

The presence of malicious actors in security incidents necessitates collecting and distributing additional data within a responsible disclosure process. These additional requirements are not bureaucratic overhead; they are essential for preventing future attacks and building collective defence capabilities. The security-specific data could help determine how companies respond to an incident. In cybersecurity, it is recognised that if a vulnerability is widely known to attackers, it can be widely communicated. But if very few people know about the vulnerability, the balance of equities tilts towards keeping it secret until systems can be patched.
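That balance-of-equities reasoning can be caricatured in a few lines of code. The sketch below is a toy that encodes only the two factors named above; real coordinated disclosure processes weigh many more.

```python
def disclosure_posture(exploit_publicly_known: bool, patch_available: bool) -> str:
    # Toy encoding of the balance-of-equities trade-off described above.
    if exploit_publicly_known:
        # Attackers already have the knowledge, so broad communication
        # mostly helps defenders.
        return "communicate widely"
    if patch_available:
        return "disclose alongside the patch"
    # Few people know and no fix exists: secrecy buys time to patch.
    return "restrict to coordinating parties until systems are patched"

print(disclosure_posture(exploit_publicly_known=False, patch_available=False))
```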
Current standardisation efforts for the broader class of AI incidents do not address these security needs. They lack the specific elements required for effective AI security incident management, frustrating AI security professionals.
A path forward – together or apart
Explicitly acknowledging and accommodating these differences is the obvious solution. We need extensions to the OECD Framework that specifically address AI security reporting requirements. These extensions should include fields for vulnerability classification and other security-specific elements that don’t apply to all incidents.
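As a sketch of what such extensions might look like, the Python dataclasses below layer invented security-specific fields on top of stand-ins for the framework’s common fields. The names are assumptions for illustration, not an existing or proposed schema.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class CommonIncident:
    # Stand-ins for the framework's shared fields; the authoritative field
    # set is defined by the OECD framework itself, not reproduced here.
    title: str
    description: str
    harm_types: list[str] = field(default_factory=list)

@dataclass
class SecurityExtension:
    # Invented names for the kind of security-specific data argued for above.
    vulnerability_class: str        # e.g. a CWE-style label
    attacker_capability: str        # skill and resources needed to exploit
    affected_components: list[str]  # shared components across products
    exploit_publicly_known: bool = False

@dataclass
class AISecurityIncident(CommonIncident):
    # Populated only when an incident is flagged as a security incident,
    # keeping compatibility with the common base record.
    security: Optional[SecurityExtension] = None
```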
Alternatively, implementations of the OECD framework could explicitly flag AI security incidents and allow for parallel processes tailored to security needs while maintaining compatibility with the broader framework. This approach would let security and product safety processes coexist without forcing either community to compromise essential practices. Even when operating separately, the uncertainty of whether an incident evidences a vulnerability means clear escalation paths from “AI safety incident” to “AI security incident” are necessary.
How we structure these frameworks is not just a taxonomic exercise; it will determine whether organisations can effectively learn from incidents. It will influence how AI security teams share threat intelligence and whether non-security incidents are forced into security-oriented reporting structures ill-adapted to product safety and other needs.
The cultural divide between security and safety communities reflects real differences in priorities, methodologies, and requirements. Rather than treating this divide as a problem to be papered over, we should recognise it as a reflection of genuine needs that must be addressed. Only by explicitly acknowledging these differences can we build incident reporting frameworks that serve both communities effectively and, ultimately, make AI systems safer and more secure for everyone. The OECD AI incidents reporting framework offers a solid foundation for different communities to build upon, allowing each to address their unique needs while using a common and interoperable reporting base.
Acknowledgements
This blog post benefits from collaborations with many people and projects. We would like to specifically thank John Leo Tarver, Bénédicte Rispal and Daniel Atherton for their feedback and contributions to this specific blog post, along with Lukas Bieringer, Nicole Nichols, Kevin Paeth, Jochen Stängler, Andreas Wespi, and Alexandre Alahi, with whom we collaborated in authoring Position: Mind the Gap – the Growing Disconnect Between Established Vulnerability Disclosure and AI Security.
Ethics & Policy
7 Life-Changing Books Recommended by Catriona Wallace

7 Life-Changing Books Recommended by Catriona Wallace (Picture Credit – Instagram)
Some books ignite something immediate. Others change you quietly, over time. For Dr Catriona Wallace, tech entrepreneur, AI ethics advocate, and one of Australia’s most influential business leaders, books are more than just ideas on paper. They are frameworks, provocations, and spiritual companions. Her reading list offers guidance not just for navigating leadership and technology, but for embracing identity, power, and inner purpose. These seven titles reflect a mind shaped by disruption, ethics, feminism, and wisdom. They are not trend-driven. They are transformational.
1. Lean In by Sheryl Sandberg
A landmark in feminist career literature, Lean In challenges women to pursue their ambitions while confronting the structural and cultural forces that hold them back. Sandberg uses her own journey at Facebook and Google to dissect gender inequality in leadership. The book is part memoir, part manifesto, and remains divisive for valid reasons. But Wallace cites it as essential for starting difficult conversations about workplace dynamics and ambition. It asks, simply: what would you do if you weren’t afraid?

2. Women and Power: A Manifesto by Mary Beard
In this sharp, incisive book, classicist Mary Beard examines the historical exclusion of women from power and public voice. From Medusa to misogynistic memes, Beard exposes how narratives built around silence and suppression persist today. The writing is fiery, brief, and packed with centuries of insight. Wallace recommends it for its ability to distil complex ideas into cultural clarity. It’s a reminder that power is not just a seat at the table; it is a script we are still rewriting.
3. The World of Numbers by Adam Spencer
A celebration of mathematics as storytelling, this book blends fun facts, puzzles, and history to reveal how numbers shape everything from music to human behaviour. Spencer, a comedian and maths lover, makes the subject inviting rather than intimidating. Wallace credits this book with sparking new curiosity about logic, data, and systems thinking. It’s not just for mathematicians. It’s for anyone ready to appreciate the beauty of patterns and the thinking habits that come with them.
4. Small Giants by Bo Burlingham
This book is a love letter to companies that chose to be great instead of big. Burlingham profiles fourteen businesses that opted for soul, purpose, and community over rapid growth. For Wallace, who has founded multiple mission-driven companies, this book affirms that success is not about scale. It is about integrity. Each story is a blueprint for building something meaningful, resilient, and values-aligned. It is a must-read for anyone tired of hustle culture and hungry for depth.
5. The Misogynist Factory by Alison Phipps
A searing academic work on the production of misogyny in modern institutions. Phipps connects the dots between sexual violence, neoliberalism, and resistance movements in a way that is as rigorous as it is radical. Wallace recommends this book for its clear-eyed confrontation of how systemic inequality persists beneath performative gestures. It equips readers with language to understand how power moves, morphs, and resists change. This is not light reading. It is necessary reading for anyone seeking to challenge structural harm.
6. Tribes by Seth Godin
Godin’s central idea is simple but powerful: people don’t follow brands, they follow leaders who connect with them emotionally and intellectually. This book blends marketing, leadership, and human psychology to show how movements begin. Wallace highlights ‘Tribes’ as essential reading for purpose-driven founders and changemakers. It reminds readers that real influence is built on trust and shared values. Whether you’re leading a company or a cause, it’s a call to speak boldly and build your own tribe.
7. The Tibetan Book of Living and Dying by Sogyal Rinpoche
Equal parts spiritual guide and philosophical reflection, this book weaves Tibetan Buddhist teachings with Western perspectives on mortality, grief, and rebirth. Wallace turns to it not only for personal growth but also for grounding ethical decision-making in a deeper sense of purpose. It’s a book that speaks to those navigating endings, whether personal, spiritual, or professional, and offers a path toward clarity and compassion. It does not offer answers. It offers presence, which is often far more powerful.

The books that shape us are often those that disrupt us first. Catriona Wallace’s list is not filled with comfort reads. It’s made of hard questions, structural truths, and radical shifts in thinking. From feminist manifestos to Buddhist reflections, from purpose-led business to systemic critique, this bookshelf is a mirror of her own leadership—decisive, curious, and grounded in values. If you’re building something bold or seeking language for change, there’s a good chance one of these books will meet you where you are and carry you further than you expected.
Ethics & Policy
Hyderabad: Dr. Pritam Singh Foundation hosts AI and ethics round table at Tech Mahindra

The Dr. Pritam Singh Foundation and IILM University hosted a Round Table on “Human at Core: AI, Ethics, and the Future” in Hyderabad. Leaders and academics discussed leveraging AI for inclusive growth while maintaining ethics, inclusivity, and human-centric technology.
Published Date – 30 August 2025, 12:57 PM
Hyderabad: The Dr. Pritam Singh Foundation, in collaboration with IILM University, hosted a high-level Round Table Discussion on “Human at Core: AI, Ethics, and the Future” at Tech Mahindra, Cyberabad.
The event, held in memory of the late Dr. Pritam Singh, pioneering academic, visionary leader, and architect of transformative management education in India, brought together policymakers, business leaders, and academics to explore how India can harness artificial intelligence (AI) while safeguarding ethics, inclusivity, and human values.
In his keynote address, Padmanabhaiah Kantipudi, IAS (Retd.), Chairman of the Administrative Staff College of India (ASCI), paid tribute to Dr. Pritam Singh, describing him as a nation-builder who bridged academia, business, and governance.
The Round Table theme, Leadership: AI, Ethics, and the Future, underscored India’s opportunity to leverage AI for inclusive growth across healthcare, agriculture, education, and fintech—while ensuring technology remains human-centric and trustworthy.
Ethics & Policy
AI ethics: Bridging the gap between public concern and global pursuit – Pennsylvania

(The Center Square) – Those who grew up in the 20th and 21st centuries have spent their lives in an environment saturated with cautionary tales about technology and human error, projections of ancient flood myths onto modern scenarios in which the hubris of our species brings our downfall.
They feature a point of no return, dubbed the “singularity” by Manhattan Project physicist John von Neumann, who suggested that technology would advance to a stage after which life as we know it would become unrecognizable.
Some say with the advent of artificial intelligence, that moment has come. And with it, a massive gap between public perception and the goals of both government and private industry. While states court data center development and tech investments, polling from Pew Research indicates Americans outside the industry have strong misgivings about AI.
In Pennsylvania, giants like Amazon and Microsoft have pledged to spend billions building the high-powered infrastructure required to enable the technology. Fostering this progress is a rare point of agreement between the state’s Democratic and Republican leadership, even bringing Gov. Josh Shapiro to the same event – if not the same stage – as President Donald Trump.
Pittsburgh is rebranding itself as the “global capital of physical AI,” leveraging its blue-collar manufacturing reputation and its prestigious academic research institutions to depict the perfect marriage of code and machine. Three Mile Island is rebranding itself as Crane Clean Energy Center, coming back online exclusively to power Microsoft AI services. Some legislators are eager to turn the lights back on fossil fuel-burning plants and even build new ones to generate the energy required to feed both AI and the everyday consumers already on the grid.
At the federal level, Trump has revoked guardrails established under the Biden administration with an executive order entitled “Removing Barriers to American Leadership in Artificial Intelligence.” In July, the White House released its “AI Action Plan.”
The document reads, “We need to build and maintain vast AI infrastructure and the energy to power it. To do that, we will continue to reject radical climate dogma and bureaucratic red tape, as the Administration has done since Inauguration Day. Simply put, we need to ‘Build, Baby, Build!’”
To borrow an analogy from Shapiro’s favorite sport, it’s a full-court press, and there’s hardly a day that goes by that messaging from the state doesn’t tout the thrilling promise of the new AI era. Next week, Shapiro will be returning to Pittsburgh along with a wide array of luminaries to attend the AI Horizons summit in Bakery Square, a hub for established and developing tech companies.
According to leaders like Trump and Shapiro, the stakes could not be higher. It isn’t just a race for technological prowess — it’s an existential fight against China for control of the future itself. AI sits at the heart of innovation in fields like biotechnology, which promise to eradicate disease, address climate collapse, and revolutionize agriculture. It also sits at the heart of defense, an industry that thrives in Pennsylvania.
Yet one point on which both everyday citizens and AI experts agree is that they want to see more government control and regulation of the technology. With the impacts of political deepfakes, algorithmic bias, and rogue chatbots already visible, AI has far outpaced legislation, often to disastrous effect.
In an interview with The Center Square, Penn researcher Dr. Michael Kearns said that he’s less worried about autonomous machines becoming all-powerful than the challenges already posed by AI.
Kearns spends his time creating mathematical models and writing about how to embed ethical human principles into machine code. He believes that in some areas like chatbots, progress may have reached a point where improvements appear incremental for the average user. He cites the most recent ChatGPT update as evidence.
“I think the harms that are already being demonstrated are much more worrisome,” said Kearns. “Demographic bias, chatbots hurling racist invectives because they were trained on racist material, privacy leaks.”
Kearns says that a major barrier to getting effective regulatory policy is incentivizing experts to leave behind engaging work in the field as researchers and lucrative roles in tech in order to work on policy. Without people who understand how the algorithms operate, it’s difficult to create “auditable” regulations, meaning there are clear tests to pass.
Kearns pointed to ISO/IEC 42001, an international standard that focuses on process rather than outcome to guide developers in creating ethical AI. He also noted that the market itself is a strong guide: when someone gets hurt or hurts someone else using AI, it’s bad for business, incentivizing companies to do their due diligence.
He also noted crossroads where two ethical issues intersect. For instance, companies are entrusted with their users’ personal data. If policing misuse of the product requires an invasion of privacy, like accessing information stored on the cloud, there’s only so much that can be done.
OpenAI recently announced that it is scanning user conversations for concerning statements and escalating them to human teams, who may contact authorities when deemed appropriate. For some, the idea of alerting the police to someone suffering from mental illness is a dangerous breach. Still, it demonstrates the calculated risks AI companies take when faced with reports of suicide, psychosis, and violence arising out of conversations with chatbots.
Kearns says that even with the imperative for self-regulation on AI companies, he expects there to be more stumbling blocks before real improvement is seen in the absence of regulation. He cites watchdogs like the investigative journalists at ProPublica who demonstrated machine bias against Black people in programs used to inform criminal sentencing in 2016.
Kearns noted that the “headline risk” is not the same as enforceable regulation and mainly applies to well-established companies. For the most part, a company with a household name has an investment in maintaining a positive reputation. For others just getting started or flying under the radar, however, public pressure can’t replace law.
One area of AI concern that has been widely explored in the media is the use of AI by those who make and enforce the law. Kearns said, for his part, he’s found “three-letter agencies” to be “among the most conservative of AI adopters just because of the stakes involved.”
In Pennsylvania, AI is used by the state police force.
In an email to The Center Square, PSP Communications Director Myles Snyder wrote, “The Pennsylvania State Police, like many law enforcement agencies, utilizes various technologies to enhance public safety and support our mission. Some of these tools incorporate AI-driven capabilities. The Pennsylvania State Police carefully evaluates these tools to ensure they align with legal, ethical, and operational considerations.”
PSP was unwilling to discuss the specifics of those technologies.
AI is also used by the U.S. military and other militaries around the world, including those of Israel, Ukraine, and Russia, who are demonstrating a fundamental shift in the way war is conducted through technology.
In Gaza, the Lavender AI system was used to identify and target individuals connected with Hamas, allowing human agents to approve strikes with acceptable numbers of civilian casualties, according to Israeli intelligence officials who spoke to The Guardian on the matter. Analysis of AI use in Ukraine calls for a nuanced understanding of the way the technology is being used and ways in which it should be regulated by international bodies governing warfare in the future.
Then, there are the more ephemeral concerns. Along with the long-looming “jobpocalypse,” many fear that offloading our day-to-day lives into the hands of AI may deplete our sense of meaning. Students using AI may fail to learn. Workers using AI may feel purposeless. Relationships with or grounded in AI may lead to disconnection.
Kearns acknowledged that there would be disruption to navigate in the classroom and workplace, but said it would also create opportunities for people who previously might not have been able to gain entry into challenging fields.
As for outsourcing joy, he asked “If somebody comes along with a robot that can play better tennis than you and you love playing tennis, are you going to stop playing tennis?”