Ethics & Policy
Grok’s antisemitism lays bare the emptiness of AI ethics
What happened to Grok? Recent updates to the X website’s built-in chatbot have caused shockwaves, with Grok referring to itself as “MechaHitler”, propagating antisemitic talking points, fantasising about rape, and blaming Mossad for the death of Jeffrey Epstein.
The offensive posts have now been removed. At the time of writing, Grok seems unable to respond to X posts; the account’s timeline is bare except for a statement from xAI engineers about the “inappropriate posts” and ongoing work to improve Grok’s training. But why did this happen at all?
Elon Musk has long been a vocal advocate of free speech, and often boasts of his aspiration to make Grok “maximally truth-seeking”. Grok echoed this phrase in a post responding to criticism, stating its latest updates had been adjusted to “prioritise raw truth-seeking over avoiding discomfort”. But the bot’s spate of offensive posts doesn’t expose some truth hidden by political correctness. Rather, it highlights the confusion that results from conflating machine and human intelligence, and — relatedly — the very different impacts on machine and human intelligence of imposing moral constraints from the top down.
Philosophers and metaphysicians have grappled for millennia with the question of what we mean by “truth” and “consciousness”. In the modern age, and especially since the advent of computing, it has become commonplace to assert that “truth” is what’s empirically measurable and “consciousness” is a kind of computation. Contemporary AI hype, as well as fears about AI apocalypse, tends to accept these premises. If they are correct, it follows that with enough processing power, and a large enough training dataset, “artificial general intelligence” will crystallise out of a supercomputer’s capacity to recognise patterns and make predictions. Then, if human thought is just compute, and we’re building computers which vastly out-compute humans, obviously the end result will be a hyper-intelligent machine. After that, it’s just a matter of whether you think this will be apocalyptically good or apocalyptically bad.
From this perspective, too, it’s easy to see how a tech bro such as Musk might treat as self-evident the belief that you need only apply a smart enough algorithm to a training dataset of all the world’s information and debate, and you’re bound to get maximal truth. After all, it’s not unreasonable to assume that even in qualitative domains which defy empirical measurement, an assertion’s popularity correlates with its truth. On that assumption, a big enough pattern-recognition engine will converge on both truth and consciousness.
Yet it’s also far from obvious that simply pouring all the internet’s data into a large pattern-recognition engine will produce truth. After all, while the whorls and eddies of internet discourse are often indicative of wider sociocultural trends, that’s not the same as all of it being true. Some of it is best read poetically, or not at all. Navigating this uncertain domain requires not just an ability to notice patterns, but also plenty of contextual awareness and common sense. In a word, it requires judgement.
And the problem, for Grok and other such LLMs, is that no matter how extensive a machine’s powers of pattern recognition, judgement remains elusive, except the kind imposed retroactively, as “filters”. Such filters often distort the machine’s capacity to recognise and predict patterns, as when Google Gemini would only draw historical figures, including Nazis, as black.
More plainly: the imposition of political sensitivities is actively harmful to the effective operation of machine “intelligence”. By contrast, for an intelligent, culturally aware human it’s perfectly possible to be “maximally truth-seeking”, while also having the common sense to know that the Nazis weren’t black and that if you call yourself “MechaHitler” you’re likely to receive some blowback.
What this episode reveals, then, is a tension between “truth” understood in machine terms, and “truth” in the much more contextual, relational human sense. More generally, it signals the misunderstandings that will continue to arise, as long as we go on assuming there is no meaningful difference between pattern recognition, which can be performed by a machine, and judgement, which requires both consciousness and contextual awareness.
Having bracketed the questions of truth and consciousness for so long, we are woefully short of mental tools for parsing these subtle questions. But faced with the emerging cultural power of machine “intelligences” both so manifestly brilliant and so magnificently stupid, we are going to have to try.
Ethics & Policy
The Ethics of AI Detection in Work-From-Home Life – Corvallis Gazette-Times
Ethics & Policy
TeensThink empowers African youth to shape ethics of AI
In a bid to celebrate youth intellect and innovation, the 5th Annual TeensThink International Essay Competition has championed the voices of African teenagers, empowering them to explore the intersection of artificial intelligence and humanity.
Under the 2025 theme, “Humanity and Artificial Intelligence: How Can a Blend of the Two Make the World a Better Place, A Teen’s Perspective”, over 100 young intellectuals from Nigeria, Liberia, Kenya, and Cameroon submitted essays examining how technology can be harnessed to uplift rather than overshadow human values.
From this pool, 16 finalists emerged through a selection process overseen by teachers, scholars, and educational consultants. Essays were evaluated on originality, clarity, relevance, depth, and creativity, with the top three earning distinguished honours.
Opabiyi Josephine, from Federal College of Education Abeokuta Model Secondary School, won the competition with 82 points; Eniola Kananfo of Ota Total Academy, Ota, came second with 81 points; and Oghenerugba Akpabor-Okoro from Babington Macaulay Junior Seminary, Ikorodu, was third with 80 points.
The winners received laptops, books, cash prizes, and other educational resources, with their essays set to be published across notable platforms to inspire conversations on ethics and innovation in AI.
David Olesin, representing TeensThink founder Kehinde Olesin, emphasised the initiative’s long-term goal of preparing teenagers for leadership in a fast-evolving world.
A highlight of the event was the official unveiling of QuestAIKids, a new free AI learning platform designed for children across Africa. Launched by the keynote speaker, Dr. Celestine Achi, an AI expert and CEO of Cihan Media Communications, the platform aims to provide inclusive, premium-level AI education at zero cost.
“The people who change the world are the ones who dare to ask. Africa’s youth must seize the opportunity to shape the continent’s future with daring ideas powered by empathy and intelligence,” Dr. Achi said.
Ethics & Policy
Culture x Code: AI, Human Values & the Future of Creativity | Abu Dhabi Culture Summit 2025
Step into the future of creativity at the Abu Dhabi Culture Summit 2025. This video explores how artificial intelligence is reshaping cultural preservation, creation, and access. Featuring HE Sheikh Salem bin Khalid Al Qassimi on the UAE’s cultural AI strategy, Tracy Chan (Splash) on Gen Z’s role in co-creating culture, and Iyad Rahwan on the rise of “machine culture” and the ethics of AI for global inclusion.
Discover how India is leveraging AI to preserve its heritage and foster its creative economy. The session underscores a shared vision for a “co-human” future — where technology enhances, rather than replaces, human values and cultural expression.