
Ethics & Policy

Oprah Winfrey’s latest book club pick, ‘Culpability,’ delves into AI ethics

NEW YORK (AP) — Oprah Winfrey has chosen a novel with a timely theme for her latest book club pick. Bruce Holsinger’s “Culpability” is a family drama that probes the morals and ethics of AI.

“I appreciated the prescience of this story,” Winfrey said in a statement Tuesday, the day of the novel’s publication. “It’s where we are right now in our appreciation and dilemmas surrounding Artificial Intelligence, centered around an American family we can relate to. I was riveted until the very last shocking sentence!”

Holsinger, a professor of English at the University of Virginia, is the author of four previous novels and several works of nonfiction. He said in a statement that he had admired Winfrey’s book club since its founding in 1996.

“Oprah Winfrey started her book club the same year I finished graduate school,” Holsinger said. “For nearly 30 years, as I’ve taught great books to college students in the classroom and the lecture hall, she has shared great books with the world. Her phone call was like a thunderbolt, and I’ll never forget it. I am deeply honored and profoundly grateful that she found ‘Culpability’ worthy of her time, praise, and recognition.”

Tuesday’s announcement continues Winfrey’s book club partnership with Starbucks. Her interview with Holsinger, held recently at a Starbucks in Seattle, can be seen on Winfrey’s YouTube channel or through other podcast outlets.

Winfrey’s recent book club selections:

June 2025: “The River is Waiting,” by Wally Lamb (Read AP’s review.)

May 2025: “The Emperor of Gladness,” by Ocean Vuong (Read AP’s review.)

April 2025: “Matriarch,” by Tina Knowles (Read and watch AP’s interview with Knowles.)

March 2025: “The Tell,” by Amy Griffin

February 2025: “Dream State,” by Eric Puchner

January 2025: “A New Earth,” by Eckhart Tolle (Winfrey has picked this book twice.)

December 2024: “Small Things Like These,” by Claire Keegan (Read AP’s review.)

October 2024: “From Here to the Great Unknown,” by Lisa Marie Presley and Riley Keough (Read AP’s story about how Keough completed the book.)

September 2024: “Tell Me Everything,” by Elizabeth Strout (Read AP’s review.)

June 2024: “Familiaris,” by David Wroblewski





TeensThink empowers African youth to shape ethics of AI


In a bid to celebrate youth intellect and innovation, the 5th Annual TeensThink International Essay Competition has championed the voices of African teenagers, empowering them to explore the intersection of artificial intelligence and humanity.

Under the 2025 theme, “Humanity and Artificial Intelligence: How Can a Blend of the Two Make the World a Better Place, A Teen’s Perspective”, over 100 young intellectuals from Nigeria, Liberia, Kenya, and Cameroon submitted essays examining how technology can be harnessed to uplift rather than overshadow human values.

From this pool, 16 finalists emerged through a selection process overseen by teachers, scholars, and educational consultants. Essays were evaluated on originality, clarity, relevance, depth, and creativity, with the top three earning distinguished honours.

Opabiyi Josephine, from Federal College of Education Abeokuta Model Secondary School, won the competition with 82 points; Eniola Kananfo of Ota Total Academy, Ota, came second with 81 points; and Oghenerugba Akpabor-Okoro from Babington Macaulay Junior Seminary, Ikorodu, was third with 80 points.

The winners received laptops, books, cash prizes, and other educational resources, with their essays set to be published across notable platforms to inspire conversations on ethics and innovation in AI.

Representing the founder of TeensThink, Kehinde Olesin, David Olesin emphasised the initiative’s long-term goal of preparing teenagers for leadership in a fast-evolving world.

A highlight of the event was the official unveiling of QuestAIKids, a new free AI learning platform designed for children across Africa. Launched by keynote speaker, AI expert and CEO of Cihan Media Communications, Dr. Celestine Achi, the platform aims to provide inclusive, premium-level AI education at zero cost.

“The people who change the world are the ones who dare to ask. Africa’s youth must seize the opportunity to shape the continent’s future with daring ideas powered by empathy and intelligence,” Dr. Achi said.





Grok’s antisemitism lays bare the emptiness of AI ethics


What happened to Grok? Recent updates to the X website’s built-in chatbot have caused shockwaves, with Grok referring to itself as “MechaHitler”, propagating antisemitic talking points, fantasising about rape, and blaming Mossad for the death of Jeffrey Epstein.

The offensive posts have now been removed. At the time of writing, Grok seems unable to respond to X posts; the account’s timeline is bare except for a statement from xAI engineers about the “inappropriate posts” and ongoing work to improve Grok’s training. But why did this happen at all?

Elon Musk has long been a vocal advocate of free speech, and often boasts of his aspiration to make Grok “maximally truth-seeking”. Grok echoed this phrase in a post responding to criticism, stating its latest updates had been adjusted to “prioritise raw truth-seeking over avoiding discomfort”. But the bot’s spate of offensive posts doesn’t expose some truth hidden by political correctness. Rather, it highlights the confusion that results from conflating machine and human intelligence, and — relatedly — the very different impacts on machine and human intelligence of imposing moral constraints from the top down.

Philosophers and metaphysicians have grappled for millennia with the question of what we mean by “truth” and “consciousness”. In the modern age, and especially since the advent of computing, it has become commonplace to assert that “truth” is what’s empirically measurable and “consciousness” is a kind of computer. Contemporary AI hype, as well as fears about AI apocalypse, tends to accept these premises. If they are correct, it follows that with enough processing power, and a large enough training dataset, “artificial general intelligence” will crystallise out of a supercomputer’s capacity to recognise patterns and make predictions. Then, if human thought is just compute, and we’re building computers which vastly out-compute humans, obviously the end result will be a hyper-intelligent machine. After that, it’s just a matter of whether you think this will be apocalyptically good or apocalyptically bad.

From this perspective, too, it’s easy to see how a tech bro such as Musk might treat as self-evident the belief that you need only apply a smart enough algorithm to a training dataset of all the world’s information and debate, and you’re bound to get maximal truth. After all, it’s not unreasonable to assume that even in qualitative domains which defy empirical measurement, an assertion’s popularity correlates to its truth. Then, a big enough pattern-recognition engine will converge on both truth and consciousness.

Yet it’s also far from obvious that simply pouring all the internet’s data into a large pattern-recognition engine will produce truth. After all, while the whorls and eddies of internet discourse are often indicative of wider sociocultural trends, that’s not the same as all of it being true. Some of it is best read poetically, or not at all. Navigating this uncertain domain requires not just an ability to notice patterns, but also plenty of contextual awareness and common sense. In a word, it requires judgement.

And the problem, for Grok and other such LLMs, is that no matter how extensive a machine’s powers of pattern recognition, judgement remains elusive; it can only be imposed retroactively, as “filters”. Such filters often exert a distorting effect on the purity of the machine’s capacity to recognise and predict patterns, as when Google Gemini would only draw historic figures, including Nazis, as black.

More plainly: the imposition of political sensitivities is actively harmful to the effective operation of machine “intelligence”. By contrast, for an intelligent, culturally aware human it’s perfectly possible to be “maximally truth-seeking”, while also having the common sense to know that the Nazis weren’t black and that if you call yourself “MechaHitler” you’re likely to receive some blowback.

What this episode reveals, then, is a tension between “truth” understood in machine terms, and “truth” in the much more contextual, relational human sense. More generally, it signals the misunderstandings that will continue to arise, as long as we go on assuming there is no meaningful difference between pattern recognition, which can be performed by a machine, and judgement, which requires both consciousness and contextual awareness.

Having bracketed the questions of truth and consciousness for so long, we are woefully short of mental tools for parsing these subtle questions. But faced with the emerging cultural power of machine “intelligences” both so manifestly brilliant and so magnificently stupid, we are going to have to try.






Culture x Code: AI, Human Values & the Future of Creativity | Abu Dhabi Culture Summit 2025


Step into the future of creativity at the Abu Dhabi Culture Summit 2025. This video explores how artificial intelligence is reshaping cultural preservation, creation, and access. Featuring HE Sheikh Salem bin Khalid Al Qassimi on the UAE’s cultural AI strategy, Tracy Chan (Splash) on Gen Z’s role in co-creating culture, and Iyad Rahwan on the rise of “machine culture” and the ethics of AI for global inclusion.

Discover how India is leveraging AI to preserve its heritage and foster its creative economy. The session underscores a shared vision for a “co-human” future — where technology enhances, rather than replaces, human values and cultural expression.




