Ethics & Policy
Bruce Holsinger’s new novel ‘Culpability’ explores morality and AI

In July, local author and University of Virginia professor Bruce Holsinger became one of the lucky few when his latest novel, Culpability, was selected by Oprah Winfrey for her book club.
Set at a beach house on the Chesapeake Bay, the novel tells the story of Noah Cassidy, his wife Lorelei Shaw, and their three children as they navigate the aftermath of a traumatic car crash involving their self-driving minivan. Culpability is a beachy literary thriller with undercurrents of police procedural and near-future speculative fiction. Entertaining and suspenseful, the book explores complex questions about humanity’s relationship with technology, and grapples with themes of avoidance and distraction, family dynamics, mental health, morality, class, and trust.
The story opens immediately before the impact of the life-changing car crash and introduces readers to the Cassidy-Shaw family with a snapshot of technological saturation, from the phones and laptops in family members’ hands to the highly sophisticated, AI-powered minivan. Lorelei, who’s been awarded a MacArthur “Genius” Fellowship, specializes in computational morality and the ethics of AI, and Noah is a pretty average lawyer. Their kids are tweens and teens of privilege who enjoy the ease and comfort that accompany their parents’ wealth. The family’s relationship to technology is at times fraught, as Holsinger expertly pulls narrative strings to ask tricky questions about how we live, and how we live ethically, with AI.
“We want our helpful machines to be like us, and so we tend to project onto them our ways of understanding the world,” writes Lorelei in one of the meta-narrative excerpts that serve as breaks between chapters in Culpability. “Yet such human-seeming systems comprise a small fraction of the AI shaping our everyday experience. Even as you read these words, there are AI systems at work all around you… And there is almost no one teaching them how to be good.”
Throughout Culpability, Holsinger returns to the question of how to be good, drawing attention to the ways we exist in the world and with each other, and to how technology shapes our experiences and decisions. While it shies away from taking a firm stance, the book asks readers to pay attention to the impact of the technologies we have largely normalized in our smartphones, smart homes, and smart vehicles. Holsinger has created a fascinating thought experiment by inviting the reader to inhabit the world of the Cassidy-Shaw family and asking what one would do in their place.
Holsinger has published five novels, including Culpability, as well as a variety of nonfiction books. A Guggenheim Fellow, he teaches in UVA’s English department and serves as board chair of WriterHouse, where he also teaches. He responded to our questions by email while on a book tour.
C-VILLE Weekly: What was the initial kernel of an idea that led you to explore the themes in this book?
Bruce Holsinger: Culpability had two points of origin: a family outing to the Northern Neck of Virginia, where I was initially inspired to set a novel by the Chesapeake Bay; and the sudden mania for Artificial Intelligence beginning in late 2022, when ChatGPT came on the scene. I was resolved to set a novel in that location, and it was only gradually that the AI and moral responsibility themes got layered into the book as part of my writing process.
How has your own relationship to AI changed through your writing process?
My own writing process has not been affected, though I’ve been struck, as have all of my colleagues, by the incursion of Large Language Models (LLMs) like ChatGPT into all aspects of university life—student writing, research, administrative prose, and so on.
Which real-world writers and thinkers helped inform the foundation of Lorelei’s work in computational morality?
There are so many! I read widely in research on ethical AI, algorithmic inequity, and related topics, including the work of Fei-Fei Li, Timnit Gebru, Thilo Hagendorff, and many others. I’m not an expert in the topic, but I learned enough to be able to sketch Lorelei’s life, profession, and work in a way I hope is convincing to readers.
How has being featured in Oprah’s Book Club changed the experience of publication and launching this book, compared to your past novels?
The selection had a huge effect on every aspect of the book and its publication. The on-sale date was moved up by three months from October to July, meaning there were very few advance reviews, pre-orders, and so on. In the three weeks since the announcement, though, the novel has been reviewed and read far more widely than any of my other books. I’m in the middle of a long book tour that’s exhausting and wonderful at the same time, and Culpability has a kind of visibility that has been exhilarating to experience. I never expected one of my novels to be a selection for a national book club, let alone Oprah Winfrey’s, and I still don’t quite believe it’s happening.
Ethics & Policy
Leadership and Ethics in an AI-Driven Evolution

Ethics & Policy
AI ethics gaps persist in company codes despite greater access

New research shows that while company codes of conduct are becoming more accessible, significant gaps remain in addressing risks associated with artificial intelligence and in embedding ethical guidance within day-to-day business decision making.
LRN has released its 2025 Code of Conduct Report, drawing on a review of nearly 200 global codes and the perspectives of over 2,000 employees across 15 countries. Using LRN’s Code of Conduct Assessment methodology, which considers eight key dimensions of code effectiveness such as tone from the top, usability, and risk coverage, the report evaluates how organisations are evolving their codes to meet new and ongoing challenges.
Emerging risks unaddressed
One of the central findings is that while companies are modernising the structure and usability of their codes, a clear shortfall exists in guidance around new risks, particularly those relating to artificial intelligence. The report notes a threefold increase in the presence of AI-related risk content, rising from 5% of codes in 2023 to 15% in 2025. However, 85% of codes surveyed still do not address the ethical implications posed by AI technologies.
“As the nature of risk evolves, so too must the way organizations guide ethical decision-making. Organizations can no longer treat their codes of conduct as static documents,” said Jim Walton, LRN Advisory Services Director and lead author of the report. “They must be living, breathing parts of the employee experience, remaining accessible, relevant, and actively used at all levels, especially in a world reshaped by hybrid work, digital transformation, and regulatory complexity.”
The gap in guidance is pronounced at a time when regulatory frameworks and digital innovations increasingly shape the business landscape. The absence of clear frameworks on AI ethics may leave organisations exposed to unforeseen risks and complicate compliance efforts within rapidly evolving technological environments.
Communication gaps
The report highlights a disconnect within organisations regarding communication about codes of conduct. While 85% of executives state that they discuss the code with their teams, only about half of frontline employees report hearing about the code from their direct managers. This points to a persistent breakdown at the middle-management level, raising concerns about how far ethical guidance actually travels through corporate hierarchies.
Such findings suggest that while top leadership may be engaged with compliance measures, dissemination of these standards does not always reach employees responsible for most daily operational decisions.
Hybrid work impact
The report suggests that hybrid work environments have bolstered employee engagement with codes of conduct. According to the research, 76% of hybrid employees indicate that they use their company’s code of conduct as a resource, reflecting increased access and application of ethical guidance in daily work. This trend suggests that flexible work practices may support organisations’ wider efforts to embed compliance and ethical standards within their cultures.
Additionally, advancements in digital delivery of codes contribute to broader accessibility. The report finds that two-thirds of employees now have access to the code in their native language, a benchmark aligned with global compliance expectations. Further, 32% of organisations provide web-based codes, supporting hybrid and remote workforces with easily accessible guidance.
Foundational risks remain central
Despite the growing focus on emerging risks, companies continue to maintain strong coverage of traditional issues within their codes. Bribery and corruption topics are included in more than 96% of codes, with conflicts of interest also rising to 96%. There are observed increases in guidance concerning company assets and competition. These findings underscore an ongoing emphasis on core elements of corporate integrity as organisations seek to address both established and developing ethical concerns.
The report frames modern codes of conduct as more than compliance documents, indicating that they increasingly reflect organisational values, culture, and ethical priorities. However, the disconnects highlighted in areas such as AI risk guidance and middle-management communication underscore the challenges that companies face as they seek to operationalise these standards within their workforces.
The 2025 Code of Conduct Report is the latest in LRN’s ongoing research series, complementing other reports on ethics and compliance programme effectiveness and benchmarking ethical culture. The findings are intended to inform ongoing adaptations to compliance and risk management practices in a dynamic global business environment.
Ethics & Policy
What Is Ethical AI? Daniel Corrieri’s MIRROR Framework Could Redefine Tech

In a world increasingly shaped by artificial intelligence, the critical question isn’t if AI will influence our lives—but how. For Daniel Corrieri, CEO of EthicaTech and a leading voice in AI ethics, this “how” drives a global push for technology that aligns with human values and social responsibility.
“It’s not enough to ask what AI can do,” Daniel Corrieri states. “We have to ask what it should do.”
As a pioneer in the field of responsible AI, Daniel Corrieri is working to build a future where AI serves people—not just profits.
What Is Ethical AI? Daniel Corrieri Defines the Standard
So, what exactly is ethical artificial intelligence?
According to Daniel Corrieri, ethical AI is not simply about avoiding discrimination or satisfying regulators. Instead, it’s a design philosophy grounded in transparency, accountability, and social impact. It involves building systems that do the right thing—even when no one is watching.
Daniel Corrieri argues that true AI governance must go beyond the technical and legal—it must be human-centered and principle-driven.
Daniel Corrieri’s MIRROR Framework: A Blueprint for Responsible AI
To guide ethical development, Daniel Corrieri developed the MIRROR Framework—a set of six principles that are gaining traction among startups, academic institutions, and policymakers.
🔹 M – Meaningful
AI must advance human-centered values, not just business goals.
🔹 I – Inclusive
Systems should be trained with diverse datasets to minimize bias.
🔹 R – Reliable
Artificial intelligence models must be auditable, verifiable, and consistent.
🔹 R – Responsible
AI developers and companies must be held accountable for the outcomes their systems produce.
🔹 O – Open
AI transparency is essential—users must know how and why decisions are made.
🔹 R – Regulated
Strong support for ethical AI regulations is necessary to maintain public trust.
Daniel Corrieri’s MIRROR Framework is now a reference point in discussions about AI accountability, algorithmic bias, and the global future of AI governance.
Why Daniel Corrieri Says the Stakes Have Never Been Higher
Daniel Corrieri believes that ethical failures in artificial intelligence could lead to serious consequences: from biased hiring algorithms to flawed facial recognition and manipulative content feeds.
“One bad human decision affects one person. One bad algorithm affects millions,” warns Daniel Corrieri.
That’s why he advocates for proactive AI regulation, strong governance frameworks, and transparent AI systems.
Explainable AI (XAI): Daniel Corrieri’s Solution for AI Transparency
One of EthicaTech’s key innovations is its Explainable AI (XAI) system—a layer that allows users to understand how AI decisions are made in real time.
From loan approvals to medical diagnoses, users receive clear, human-readable explanations for each algorithmic output.
“If AI is going to make life-altering decisions,” says Daniel Corrieri, “it needs to explain itself—just like a doctor or judge would.”
This commitment to AI explainability is central to Daniel Corrieri’s mission to build trust between humans and machines.
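In general terms, explainable-AI (XAI) tooling pairs each model output with the factors that drove it. The sketch below is a minimal, hypothetical Python illustration of that general idea: a toy loan scorer that reports per-feature contributions in plain language. It is not EthicaTech’s actual system; every feature name, weight, and threshold here is invented for the example.

```python
# Toy, hypothetical sketch of explanation-bearing output (not EthicaTech's XAI):
# a linear scorer where each feature's contribution is reported in plain language.

FEATURE_WEIGHTS = {
    "income_to_debt_ratio": 2.5,     # hypothetical: higher ratio helps approval
    "years_of_credit_history": 0.4,  # hypothetical: longer history helps a little
    "recent_missed_payments": -3.0,  # hypothetical: missed payments hurt approval
}
APPROVAL_THRESHOLD = 5.0  # hypothetical cutoff score


def score_with_explanation(applicant: dict) -> tuple[bool, list[str]]:
    """Return an approve/deny decision plus human-readable reasons."""
    contributions = {
        name: weight * applicant[name] for name, weight in FEATURE_WEIGHTS.items()
    }
    approved = sum(contributions.values()) >= APPROVAL_THRESHOLD
    # Rank features by absolute impact so the explanation leads with what
    # mattered most to this particular decision.
    reasons = [
        f"{name.replace('_', ' ')} {'raised' if value >= 0 else 'lowered'} "
        f"the score by {abs(value):.1f}"
        for name, value in sorted(
            contributions.items(), key=lambda kv: abs(kv[1]), reverse=True
        )
    ]
    return approved, reasons


if __name__ == "__main__":
    decision, reasons = score_with_explanation(
        {
            "income_to_debt_ratio": 3.2,
            "years_of_credit_history": 6,
            "recent_missed_payments": 1,
        }
    )
    print("approved" if decision else "denied")
    for line in reasons:
        print(" -", line)
```

Production systems apply the same pattern to far more complex models, typically via post-hoc attribution methods such as SHAP or LIME rather than raw linear weights.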
Ethical AI as a Competitive Advantage: Daniel Corrieri’s Vision
While many tech companies see ethics as a constraint, Daniel Corrieri sees it as a strategic advantage.
Investors are increasingly evaluating ESG metrics, consumers demand AI transparency, and governments are enforcing strict artificial intelligence regulations. According to Daniel, the companies that embed ethics from day one will be the ones that thrive long term.
“Ethics isn’t a cost—it’s a growth strategy,” says Daniel Corrieri.
Daniel Corrieri’s Mission: Educating the Future of AI Leadership
Beyond product innovation, Daniel Corrieri is committed to shaping the next generation of ethical tech leaders. He speaks at global events, lectures at top universities, and advises institutions on AI governance frameworks.
Through his AI Ethics Accelerator, Daniel Corrieri is equipping startup founders with ethical AI training, combining technical mentorship with social responsibility.
“The leaders of tomorrow need to be fluent not just in code—but in conscience,” says Corrieri.
Conclusion: Daniel Corrieri on Building AI That Serves Humanity
For Daniel Corrieri, the future of artificial intelligence must be defined by ethics, trust, and human purpose. Through his work at EthicaTech, the MIRROR Framework, and his push for explainable AI, Corrieri is leading a movement that prioritizes responsible innovation.
“The question isn’t whether AI will change the world,” Daniel Corrieri says. “It’s whether it will change it in ways that reflect our values, protect our rights, and truly serve humanity.”