Ethics & Policy
Bruce Holsinger’s new novel ‘Culpability’ explores morality and AI

In July, local author and University of Virginia professor Bruce Holsinger became one of the lucky few when his latest novel, Culpability, was selected by Oprah Winfrey for her book club.
Taking place at a beach house on the Chesapeake Bay, the novel tells the story of Noah Cassidy, his wife Lorelei Shaw, and their three children as they navigate the aftereffects of a traumatic car crash involving their self-driving minivan. Culpability is a beachy literary thriller with undercurrents of police procedural and near-future speculative fiction. Entertaining and suspense-filled, the book explores complex questions about humanity’s relationship with technology. It also grapples with themes of avoidance and distraction, family dynamics, mental health, morality, class, and trust.
The story opens immediately before the impact of the life-changing car crash and introduces readers to the Cassidy-Shaw family with a snapshot that reveals technological saturation, from the phones and laptops in use by family members to the highly sophisticated minivan powered by AI. Lorelei, who’s been awarded a MacArthur “Genius” Fellowship, specializes in computational morality and the ethics of AI, and Noah is a pretty average lawyer. Their kids are tweens and teens of privilege who enjoy the ease and comfort that accompany their parents’ wealth. The family’s relationship to technology is at times fraught, as Holsinger expertly pulls narrative strings to ask tricky questions about how we live, and how we live ethically, with AI.
“We want our helpful machines to be like us, and so we tend to project onto them our ways of understanding the world,” writes Lorelei in one of the meta-narrative excerpts that serve as breaks between chapters in Culpability. “Yet such human-seeming systems comprise a small fraction of the AI shaping our everyday experience. Even as you read these words, there are AI systems at work all around you… And there is almost no one teaching them how to be good.”
Throughout Culpability, Holsinger returns to the question of how to be good, drawing attention to the ways we exist in the world and with each other, and how technology shapes our experiences and decisions. While it shies away from taking a firm stance, the book asks readers to pay attention to the impact of the technologies we have largely normalized in our smartphones, smart homes, and smart vehicles. Holsinger has created a fascinating thought experiment by inviting the reader to inhabit the world of the Cassidy-Shaw family, and asking what one would do in their place.
Holsinger has published five novels, including Culpability, and a variety of nonfiction books. A Guggenheim Fellow, he teaches in UVA’s English department and serves as board chair for WriterHouse, where he also teaches. He responded to our questions by email while on a book tour.
C-VILLE Weekly: What was the initial kernel of an idea that led you to explore the themes in this book?
Bruce Holsinger: Culpability had two points of origin: a family outing to the Northern Neck of Virginia, where I was initially inspired to set a novel by the Chesapeake Bay; and the sudden mania for Artificial Intelligence beginning in late 2022, when ChatGPT came on the scene. I was resolved to set a novel in that location, and it was only gradually that the AI and moral responsibility themes got layered into the book as part of my writing process.
How has your own relationship to AI changed through your writing process?
My own writing process has not been affected, though I’ve been struck, as have all of my colleagues, by the incursion of Large Language Models (LLMs) like ChatGPT into all aspects of university life—student writing, research, administrative prose, and so on.
Which real-world writers and thinkers helped inform the foundation of Lorelei’s work in computational morality?
There are so many! I read widely in research on ethical AI, algorithmic inequity, and related topics, including the work of Fei-Fei Li, Timnit Gebru, Thilo Hagendorff, and many others. I’m not an expert in the topic, but I learned enough to be able to sketch Lorelei’s life, profession, and work in a way I hope is convincing to readers.
How has being featured in Oprah’s Book Club changed the experience of publication and launching this book, compared to your past novels?
The selection had a huge effect on every aspect of the book and its publication. The on-sale date was moved up by three months from October to July, meaning there were very few advance reviews, pre-orders, and so on. In the three weeks since the announcement, though, the novel has been reviewed and read far more widely than any of my other books. I’m in the middle of a long book tour that’s exhausting and wonderful at the same time, and Culpability has a kind of visibility that has been exhilarating to experience. I never expected one of my novels to be a selection for a national book club, let alone Oprah Winfrey’s, and I still don’t quite believe it’s happening.
Ethics & Policy
Universities Bypass Ethics Reviews for AI Synthetic Medical Data

In the rapidly evolving field of medical research, artificial intelligence is reshaping how scientists handle sensitive data, potentially bypassing traditional ethical safeguards. A recent report highlights how several prominent universities are opting out of standard ethics reviews for studies using AI-generated medical data, arguing that such synthetic information poses no risk to real patients. This shift could accelerate innovation but raises questions about oversight in an era where AI tools are becoming indispensable.
Representatives from four major medical research centers, including institutions in the U.S. and Europe, have informed Nature that they’ve waived typical institutional review board (IRB) processes for projects involving these fabricated datasets. The rationale is straightforward: synthetic data, created by algorithms that mimic real patient records without including any identifiable or traceable information, doesn’t involve human subjects in the conventional sense. This allows researchers to train AI models on vast amounts of simulated health records, from imaging scans to genetic profiles, without the delays and paperwork associated with ethics approvals.
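The mechanics of that claim are easiest to see in miniature. Here is a minimal sketch in Python of per-column synthesis, with invented columns and parameters; the report does not describe the generators these institutions actually use, and production systems rely on far richer models (GANs, copulas, diffusion models) that also capture correlations between columns.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a real patient table: age, systolic blood
# pressure, and a diagnosis code. All values are invented for illustration.
real_age = rng.normal(55, 12, 1000).clip(18, 95)
real_bp = rng.normal(130, 15, 1000)
real_dx = rng.choice(["I10", "E11", "J45"], 1000, p=[0.5, 0.3, 0.2])

def synthesize(n):
    """Sample new rows from distributions fitted to the real columns.

    No row is copied from the source table, which is the core of the
    'no identifiable or traceable information' argument."""
    age = rng.normal(real_age.mean(), real_age.std(), n).clip(18, 95)
    bp = rng.normal(real_bp.mean(), real_bp.std(), n)
    codes, counts = np.unique(real_dx, return_counts=True)
    dx = rng.choice(codes, n, p=counts / counts.sum())
    return list(zip(age.round(), bp.round(1), dx))

print(synthesize(3))  # three synthetic records, none traceable to a patient
```

Because every output is drawn from fitted distributions rather than copied from records, no synthetic row corresponds to an actual person, which is the basis for treating such studies as not involving human subjects.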
The Ethical Gray Zone in AI-Driven Research
Critics, however, warn that this approach might erode the foundational principles of medical ethics, established in the wake of historical abuses like the Tuskegee syphilis study. By sidestepping IRBs, which typically scrutinize potential harms, data privacy, and informed consent, institutions could inadvertently open the door to biases embedded in the AI systems generating the data. For instance, if the algorithms are trained on skewed real-world datasets, the synthetic outputs might perpetuate disparities in healthcare outcomes for underrepresented groups.
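The inheritance problem the critics describe can be made concrete with a toy audit. In the sketch below (group labels and proportions invented for illustration), a generator that faithfully mimics a skewed source reproduces the skew in its output rather than correcting it:

```python
from collections import Counter
import random

random.seed(1)

# Hypothetical skewed source cohort: 90% of records come from group A.
source_groups = ["A"] * 900 + ["B"] * 100

def synthesize_groups(n):
    """Sample group labels at the frequencies of the source cohort."""
    return random.choices(source_groups, k=n)

def audit(source, synthetic):
    """Print group proportions side by side for source and synthetic data."""
    s, t = Counter(source), Counter(synthetic)
    for group in sorted(s):
        print(f"group {group}: source {s[group] / len(source):.0%}, "
              f"synthetic {t[group] / len(synthetic):.0%}")

audit(source_groups, synthesize_groups(10_000))
# group A: source 90%, synthetic ~90%
# group B: source 10%, synthetic ~10%
```

An IRB would normally ask who is missing from the data; with the review waived, a check like this has to happen, if at all, inside the research team itself.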
Proponents counter that the benefits outweigh these concerns, particularly in fields like drug discovery and personalized medicine, where data scarcity has long been a bottleneck. One researcher quoted in the Nature article emphasized that synthetic data enables rapid prototyping of AI diagnostics, potentially speeding up breakthroughs in areas such as cancer detection or rare disease modeling. The universities cited in the report are already integrating these methods into their workflows, viewing them as a pragmatic response to regulatory hurdles that can stall projects for months.
Implications for Regulatory Frameworks
This trend is not isolated; it’s part of a broader push to adapt ethics guidelines to AI’s capabilities. In the U.S., the Food and Drug Administration has begun exploring how to regulate AI-generated data in clinical trials, while European bodies under the General Data Protection Regulation (GDPR) are debating whether synthetic datasets truly escape privacy rules. Industry insiders note that companies like Google and IBM are investing heavily in synthetic data generation, seeing it as a way to comply with strict data protection laws without compromising on innovation.
Yet, the lack of uniform standards could lead to inconsistencies. Some experts argue for a hybrid model where synthetic data undergoes a lighter review process, focusing on algorithmic transparency rather than patient rights. As one bioethicist told Nature, “We’re trading one set of risks for another—real patient data breaches for the unknown perils of AI hallucinations in medical simulations.”
Balancing Innovation and Accountability
Looking ahead, this development could transform how medical research is conducted globally. With AI tools becoming more sophisticated, the line between real and synthetic data blurs, promising faster iterations in machine learning models for epidemiology or vaccine development. However, without robust guidelines, there’s a risk of public backlash if errors in synthetic data lead to flawed research outcomes.
Institutions are responding by forming internal committees to self-regulate, but calls for international standards are growing. As the Nature report underscores, the key challenge is ensuring that this shortcut doesn’t undermine trust in science. For industry leaders, the message is clear: embrace AI’s potential, but proceed with caution to maintain the integrity of ethical oversight in an increasingly digital research environment.
Ethics & Policy
Canadian AI Ethics Report Withdrawn Over Fabricated Citations

In a striking irony that underscores the perils of artificial intelligence in academic and policy circles, a comprehensive Canadian government report advocating for ethical AI deployment in education has been exposed for citing over 15 fabricated sources. The document, produced after an 18-month effort by Quebec’s Higher Education Council, aimed to guide educators on responsibly integrating AI tools into classrooms. Instead, it has become a cautionary tale about the very technology it sought to regulate.
Experts, including AI researchers and fact-checkers, uncovered the discrepancies when scrutinizing the report’s bibliography. Many of the cited works, purportedly from reputable journals and authors, simply do not exist—hallmarks of AI-generated hallucinations, where language models invent plausible but nonexistent references. This revelation, detailed in a recent piece by Ars Technica, highlights how even well-intentioned initiatives can falter when relying on unverified AI assistance.
The Hallucination Epidemic in Policy Making
The report’s authors, who remain unnamed in public disclosures, likely turned to AI models such as ChatGPT to expedite research and drafting. According to the Ars Technica analysis, over a dozen citations pointed to phantom studies on topics such as AI’s impact on student equity and data privacy. This isn’t an isolated incident; a study covered by ScienceDaily warns that AI’s “black box” nature exacerbates ethical lapses, leaving decisions untraceable and potentially harmful.
Industry insiders point out that such fabrications erode trust in governmental advisories, especially in education where AI is increasingly used for grading, content creation, and personalized learning. The Quebec council has since pulled the report for revisions, but the damage raises questions about accountability in AI-augmented workflows.
Broader Implications for AI Ethics in Academia
Delving deeper, this scandal aligns with findings from an AAUP report on artificial intelligence in higher education, which emphasizes the need for faculty oversight to mitigate risks like algorithmic bias and privacy breaches. Without stringent verification protocols, AI tools can propagate misinformation at scale, as evidenced by the Canadian case.
Moreover, a qualitative study published in Scientific Reports explores ethical issues in AI for foreign language learning, noting that unchecked use could undermine academic integrity. For policymakers and educators, the takeaway is clear: ethical guidelines must include robust human review to prevent AI from fabricating the evidence base itself.
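One concrete form such human review can take is mechanical pre-screening: every bibliography entry is checked against a public scholarly index, and anything without a plausible match is routed to a person before publication. Below is a minimal sketch against the Crossref REST API; the endpoint and parameters are real, but the example citation string is invented, and nothing suggests this was the Quebec council’s workflow:

```python
import requests

def crossref_candidates(cited_title, rows=3):
    """Look up a cited title against the public Crossref index.

    An empty or weak match flags the citation for human review; it is
    not proof of fabrication, since titles may be paraphrased or the
    work indexed elsewhere (books, preprints, grey literature)."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": cited_title, "rows": rows},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    return [(item.get("DOI"), (item.get("title") or ["(untitled)"])[0])
            for item in items]

# Hypothetical citation of the kind found in the withdrawn report:
for doi, title in crossref_candidates("AI and student equity in classrooms"):
    print(doi, "-", title)
```

A script like this cannot certify a bibliography, but citations that match nothing in a major index are precisely the ones a human reviewer needs to see first.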
Calls for Reform and Industry Responses
In response, tech firms are under pressure to enhance transparency in their models. A recent Ars Technica story on a Duke University study reveals that professionals who rely on AI often face reputational stigma, fearing judgment for perceived laziness or inaccuracy. This cultural shift is prompting calls for mandatory disclosure of AI involvement in official documents.
Educational bodies worldwide are now reevaluating their approaches. For instance, a report from the Education Commission of the States discusses state-level responses to AI, advocating balanced innovation with ethical safeguards. As AI permeates education, incidents like the Quebec report serve as a wake-up call, urging a hybrid model where human expertise tempers technological efficiency.
Toward a More Vigilant Future
Ultimately, this episode illustrates the double-edged sword of AI: its power to streamline complex tasks is matched by its potential for undetected errors. Industry leaders argue that investing in AI literacy training for researchers and policymakers could prevent future mishaps. With reports like one from Brussels Signal noting a surge in ethical breaches, the path forward demands not just better tools, but a fundamental rethinking of how we integrate them into critical domains like education policy.
Ethics & Policy
CTO Sandhya Arun on ethics, AI-first delivery and reimagining the enterprise

At a media roundtable in Bengaluru yesterday, Sandhya Arun, Chief Technology Officer at Wipro, went beyond corporate talking points to offer a candid view of how the company is reimagining innovation. From the launch of the Wipro Innovation Network to her unflinching stance on AI ethics and delivery transformation, she outlined a vision that is both practical and ambitious.
Moving from labs to networks
Wipro believes that innovation can no longer happen in closed rooms. The Wipro Innovation Network (WIN) is designed to accelerate strategic, client-centric co-innovation. The network will leverage frontier technologies ranging from Artificial Intelligence (AI) to Quantum Computing to solve some of the most challenging problems for clients across industries.
“As a company, we believe that collaboration fuels innovation,” said Srini Pallia, CEO and Managing Director, Wipro. “The Wipro Innovation Network is a catalyst for AI-powered co-innovation. By bringing together our global clients, partners, academia, and tech communities, we aim to accelerate innovation that solves real-world challenges, unlocks bold new possibilities, and drives competitive edge for our clients.”
The 60,000 sq. ft. Innovation Lab in Kodathi, Bengaluru, is now a flagship hub where clients work with Wipro experts to explore frontier technologies. Other centres in Mountain View, London, Sydney and Dubai extend this network globally.
WIN focuses on five frontier themes: agentic AI, embodied AI and robotics, quantum computing, blockchain and digital ledger technologies, and quantum- and AI-safe cyber resilience.
Applied innovation in action
During the roundtable, Wipro showcased real-world solutions that illustrate WIN’s approach. BuildAI is an AI-powered SDLC orchestration tool that accelerates development and boosts collaboration. InspectAI uses drones, robotic dogs and crawlers to transform plant inspections into safer, predictive workflows. Wipro also showcased its quantum solution for drug discovery, designed to tackle molecular optimisation challenges and potentially cut years from pharmaceutical R&D timelines.
These solutions, along with Smart Factories, Wealth AI and the Cloud Car, demonstrate how Wipro is applying AI-first thinking to reshape industries.
Ethics and governance matter
When I asked her directly about the ethical framework around AI, Arun was clear that responsibility sits at the centre of Wipro’s adoption strategy. “We have a strong Responsible AI leader, Ivana, and a council that meets every week with representatives from across the company,” she said. “AI cannot own IP. We have had deep legal discussions on patents and what we must put into contracts.”
On productivity claims, Arun was emphatic. “Do not call out a number. It has no mathematical foundation and it varies by client context. Beyond productivity, the real question is how to reimagine the enterprise with an AI-first mindset.”
The Responsible AI programme is led by Ivana Bartoletti, Wipro’s Global Chief Privacy & AI Governance Officer. A well-known voice on AI ethics and author of An Artificial Revolution: On Power, Politics and AI, she co-founded the network Women Leading in AI and advises global institutions on governance and rights. At Wipro she anchors the Responsible AI Council, ensuring innovation is backed by legal defensibility and ethical guardrails.
Shifting delivery from tactical to strategic
I also asked her whether developers would move into more strategic roles as AI takes over repetitive work. Arun responded with conviction. “Much of the tactical work is now AI assisted or AI taken over,” she said. “Humans must become supervisors of AI, making judgments and reimagining processes. Age does not matter. Without strong foundations in business, software and data engineering you cannot use AI effectively.”
She described how hackathons are now won by consultants from mergers and acquisitions or presales, not just engineers. “People who understand business and customer experience can leverage AI better,” she noted. Wipro is embedding this mindset in its NextGen associates fresh from campus and has mandated AI training for leadership, including board members.
From pilots to outcomes
For Arun, impact is the only measure that matters. An idea must become a client solution and then scale across industries. This is backed by Wipro’s Horizon programme, which funds innovations with both near-term ROI and long-term potential.
On the commercial side, she said, “Clients do not care how many agents or people are inside the box. They want outcomes, quality and sustained impact. Pricing is increasingly moving towards outcome-based models.”
Examples of this shift include everyday agent solutions for leave and travel management, contract analysis, and M&A due diligence, which reduce weeks of effort to hours with humans still in the loop.
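That phrase, humans still in the loop, maps onto a simple control pattern: the agent proposes, routine actions execute automatically, and anything above a risk threshold waits for sign-off. The Python sketch below illustrates the pattern; the action names, risk scores, and threshold are invented, not a description of Wipro’s implementation:

```python
from dataclasses import dataclass

@dataclass
class AgentAction:
    description: str
    risk: float  # 0.0 routine .. 1.0 high stakes, scored upstream

def run_with_human_gate(action, approve, threshold=0.3):
    """Auto-execute routine actions; route risky ones to a reviewer.

    `approve` is any callable returning True or False: a console prompt
    here, a review queue in a real deployment."""
    if action.risk < threshold:
        return f"auto-executed: {action.description}"
    if approve(action):
        return f"approved and executed: {action.description}"
    return f"rejected, returned to requester: {action.description}"

def cli_approve(action):
    return input(f"Approve '{action.description}'? [y/N] ").strip().lower() == "y"

print(run_with_human_gate(AgentAction("file a two-day leave request", 0.05), cli_approve))
print(run_with_human_gate(AgentAction("flag indemnity clause in a draft contract", 0.8), cli_approve))
```

Where the threshold sits is the design choice: higher means more work handled end to end by the agent, lower means more outcomes carry a human signature.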
The bigger picture
The Innovation Network is powered by around 200 direct innovation staff and 100 distinguished technologists, supported by thousands more through ventures, partners and crowdsourcing. “Criticism of R&D spend is easy,” Arun said in closing. “What matters is impact, ideas that transform client businesses and shape industries.”
Wipro’s AI game
Wipro’s AI game is clear. By combining a distributed network of labs, start-ups, partners, and academic collaborations with a strong focus on ethics and delivery transformation, the company is moving beyond pilots to industry-scale solutions. From agentic AI to quantum and blockchain, Sandhya Arun’s vision places Wipro in the middle of some of the most consequential shifts in enterprise technology. If the roundtable showed anything, it is that Wipro wants to lead not only in deploying AI, but in reimagining how enterprises work, deliver, and compete in the years ahead.