Ethics & Policy
How AI Threatens Human Freedom

Brian Patrick Green is the director of technology ethics at the Markkula Center for Applied Ethics. Views are his own.
“Life is about choices.”
Artificial intelligence (AI) offers us the promise of better decision making, whether implemented as our conversation partner, writer, advisor, data processor, agent, or even automated car.
But what do we mean by “better”?
“Faster” might be one interpretation, and certainly, various LLMs can produce coherent text much faster than a human.
“Without supervision” might be another interpretation, as we expect self-driving cars to drive, ideally, without any human intervention.
“With superior skill” could be yet another interpretation, as we might expect from AI data processing, or advising on various arcane subjects about which we might not have a human expert to consult.
In all of these cases, we take things that we could do ourselves and delegate them to an automated system. This is not historically unusual; in the past, for example, we delegated responsibilities to other humans. Is AI any different?
When we delegate a job to another person, a human being still knows how to do that job. They can explain what they do and how they do it. But AI is not a human, and AI transparency and explainability cannot be taken for granted. Indeed, the whole point of some AI systems is to do jobs that humans could not otherwise do because the tasks are too huge, such as processing enormous amounts of data.
If AI does the job of a human and that human loses the skill for that task, this is called deskilling [1, 2]. If AI acquires a skill that no human has previously had, this enhances human power; it is not deskilling, because no human loses out. But a different effect immediately appears instead: dependency. People need the AI for that job, or else the job simply cannot be done.
In most cases this tradeoff is probably worth it: gaining whatever capacity the AI provides at the cost of dependency on the system. But imagine a world where humans are fully deskilled at tasks we could once perform and fully dependent on AI for tasks we never could perform. Such a world operates in an almost magical way for almost everyone, except the few architects who tell the AIs what to do, and even they command the AI only in their own field. Everything else operates without human comprehension, delivering superior results via processes that remain quite mysterious.
If we want to go somewhere, an autonomous vehicle can take us there. If we want to write something, AI will write it for us. If we want to learn about something, AI will teach us.
It all seems very empowering of human freedom and choice. But it is not. It is, in fact, a delegation of these powers that reduces our decision making to choosing ends, never the means of achieving them. We have wants and desires, but no way to fulfill them without AI assistance. We would become utterly dependent on AI for everything except our initial wants.
As a tangent, it then becomes extremely important to want the right things or else we will become horribly efficient at causing bad things. This itself warrants a hard look.
But more directly, this product without process takes away our ability to ever achieve ends on our own. It makes us irrevocably dependent upon machines, effectively enslaved to them. In Hegel’s master-slave dialectic, some interpreters (like Alexandre Kojève [3]) note that (contrary to Hegel’s own metaphysical interpretation [4]) the dependency of the master upon the slave is not only psychological but physical. The slave is enslaved to the master, but the master is also effectively enslaved to the slave, because the master cannot achieve anything without the slave doing the actual work.
With AI, we are turning ourselves into enslaved masters. We choose goals, but the means escape us. We have wants but no ability to fulfill them on our own, without aid. We have tossed our freedom, independence, and agency out the window, for the sake of convenience.
And as our means are warped, we should expect our ends to be warped as well. Through surveillance and recommendations, nudges and addiction, AI can twist our desires, leaving us with nothing but engineered, instrumentalized ends and means: a free human person reduced to an economic or political tool, a unit of consumption awaiting satiation. This is no way to live.
If human dignity has anything to do with our freedom, then this future world where both ends and means are shackled, where we express wants placed in us by AI, and are unfree to achieve them except through AI, is a future which threatens human dignity. Respecting human dignity requires respecting human freedom. “Voluntarily” choosing this future (and this is not truly voluntary in the sense of informed consent) is no excuse: If we see someone trying to sell themselves into slavery, perhaps not realizing what they are doing, we owe it to them–and to ourselves–to stop them.
Let’s not create a world where the only human choice is to be enslaved to AI. After all, what is human life without the freedom to choose? Technological dependency is one thing–we do need fire, electricity, and so on–but intelligence dependency is another thing entirely. We should not allow AI to become our parent, and we its infants, unable to make our own choices, forever trapped in an immature state, while the “automated adults” of AI take care of all the grown-up work. Responsibility dictates that we force ourselves to grow up and live as adults in the world, even if we can avoid it and stay at home, cared for by AI “nannies.” Such a babied life might seem pleasant, but it is certainly not dignified.
If life is about choices and–in the name of kindness and optimization–we take all of our choices away, then we have taken away life itself. We might be physically alive, but it would not be a dignified and humane way of life, but something less. We can still choose whether to create this future or not, in the choices that we make about AI today. Let us choose wisely.
References
[1] Brian Patrick Green, “Artificial Intelligence, Decision-Making, and Moral Deskilling,” Markkula Center for Applied Ethics website, March 15, 2019.
[2] Shannon Vallor, “Moral Deskilling and Upskilling in a New Machine Age: Reflections on the Ambiguous Future of Character,” Philosophy & Technology 28 (2015): 107–124.
[3] Alexandre Kojève, Introduction to the Reading of Hegel, assembled by Raymond Queneau, translated by James H. Nichols, Jr., edited by Allan Bloom. New York: Basic Books (1969).
[4] G.W.F. Hegel, Phenomenology of Spirit, translated by A. V. Miller, with analysis by J. N. Findlay. Oxford: Clarendon Press (1977), pp. 112–118.
Ethics & Policy
Letters: Two-party system | International affairs | AI ethics | Bringing music education to kids

Two-party system
For the time being, and for the foreseeable future, we live in a two-party system. That means that Democrats are the only political party that can check the power of Trump, MAGA and Republicans who choose to bow to a fascist regime. It also means that Democrats have to win in the 2026 midterm elections and the 2028 general election.
This is a tall order given all the woes that currently beset the party: no clear leader, lousy messaging, an inability to connect with young people and, perhaps most important to recognize with the recent observance of Labor Day, the loss of working-class voters, including low-income and low-propensity voters.
Yet this could also be an opportunity. To paraphrase NASA’s Gene Kranz during the Apollo 13 crisis in 1970, “This could be our (Democrats’) finest hour.” Labor Day can serve as a reminder that working people have the power to drastically alter the political environment. We have seen this time and again in our country’s history: think of the conditions that led to the New Deal, the civil rights movement and the war on poverty.
As Bishop William J. Barber of the Poor People’s Campaign has noted, the combination of working people, moral leaders and strong allies coming together can “reconstruct democracy.”
— Ward Kanowsky
International affairs
National security is of utmost importance; foreign aid is how we secure it.
National security and foreign aid are often seen as unrelated. National security conjures images of large, marching militaries or closed, concrete borders. Foreign aid is seen as a nonprofit undertaking, carried out by large organizations like UNICEF or by smaller local enterprises.
These images are not entirely wrong, but they don’t paint the whole picture. As an intern at the Borgen Project, I learned a vital lesson: foreign aid secures national security.
There is strong evidence that focusing on non-combat, diplomatic strategies can alleviate poverty in developing countries while securing America’s borders.
Many of the most dangerous countries in the world are also among the poorest. Families who cannot afford expensive education send their children to religious schools, which, while providing an avenue for education, can also be breeding grounds for extremist ideology.
In the late 1980s, Charlie Wilson pleaded with Congress to build schools in Afghanistan after its war with the Soviets. The consequences of his failed plea can be seen in the rise of extremism in Afghanistan in the years that followed.
The best path forward is summarized by former Secretary of Defense Chuck Hagel:
“America’s role in the world should reflect the hope and promise of our country, and possibilities for all mankind, tempered with a wisdom that has been the hallmark of our national character. That means pursuing a principled and engaged realism that employs diplomatic, economic, and security tools as well as our values to advance our security and our prosperity.”
— Atheeth Ravikrishnan
Teen’s nonprofit brings music education to kids
As a high school student, I’m proud to share the work of YouthTones, a nonprofit I started with a team of teen volunteers to bring music education to kids in the Bay Area. Our mission is simple: connect young musicians with children to provide free or affordable music lessons.
Through YouthTones, our team helps students develop not only musical skills, but also confidence, creativity, and a sense of community. What makes this program special is that it’s entirely run by teens — our volunteers aren’t just teaching music, they’re mentoring and inspiring the next generation of young musicians.
Watching the students grow, overcome challenges, and find joy in music has been incredibly rewarding. Many families in our area don’t have easy access to music lessons, and YouthTones helps fill that gap.
I hope our story inspires others to recognize the power of youth leadership and the impact a group of motivated teens can have in their community. Music has the power to bring people together, and our team at YouthTones is dedicated to making that power accessible to every child who wants to learn.
— Henna Lam
AI ethics
When I began studying artificial intelligence as a college student, I learned how AI could be a tool for social good, helping us understand climate change, improve public health and reduce waste through smart automation. I still see that potential. But the way we are building AI today is taking us further from that vision.
Like many students entering tech, I first saw AI as innovation. I was taught to celebrate breakthroughs in machine learning, natural language processing and automation. But it did not take long before I started questioning what was missing from those conversations.
The environmental costs of large-scale AI models are enormous. A 2019 study reported by MIT Technology Review found that training a single large language model could emit more than 626,000 pounds of carbon dioxide, roughly the lifetime emissions of five cars. These models run in data centers that consume massive amounts of electricity and water, often in areas already strained by climate change.
These facts are not minor; they are simply ignored. We also overlook the labor behind AI: thousands of underpaid workers in countries like Kenya, the Philippines and Venezuela label toxic content so that others can have so-called safe systems. Their trauma goes unseen.
In school, we barely talked about climate or workers. That needs to change.
AI can support climate action, but not if it causes harm or worsens inequality. We cannot build sustainable solutions on extractive foundations.
I still believe in AI. But belief is not enough. If we do not build ethically now, we may not get a second chance.
— Aadya Madgula
Ethics & Policy
OpenAI Merges Teams to Boost ChatGPT Ethics and Cut Biases

In a move that underscores the evolving priorities within artificial intelligence development, OpenAI has announced a significant reorganization of its Model Behavior team, the group responsible for crafting the conversational styles and ethical guardrails of models like ChatGPT. According to an internal memo obtained by TechCrunch, this compact unit of about 14 researchers is being folded into the larger Post Training team, which focuses on refining AI models after their initial training phases. The shift, effective immediately, sees the team’s leader, Lilian Weng, transitioning to a new role within the company, while the group now reports to Max Schwarzer, head of Post Training.
This restructuring comes amid growing scrutiny over how AI systems interact with users, particularly in balancing helpfulness with honesty. The Model Behavior team has been instrumental in addressing issues like sycophancy—where models excessively affirm user opinions—and mitigating political biases in responses. Insiders suggest the integration aims to streamline these efforts, embedding personality shaping directly into the core refinement process rather than treating it as a separate silo.
Strategic Alignment in AI Development
OpenAI’s decision reflects broader industry trends toward more cohesive AI development pipelines, where behavioral tuning is not an afterthought but a foundational element. Recent user feedback on GPT-5, as highlighted in posts on X (formerly Twitter), has pointed to overly formal or detached interactions, prompting tweaks to make ChatGPT feel “warmer and friendlier” without veering into unwarranted flattery. For instance, OpenAI’s own announcements on the platform in August 2025 detailed the introduction of new chat personalities like Cynic, Robot, Listener, and Nerd, available as opt-in options in settings.
These changes build on earlier experiments, such as A/B testing different personality styles noted by users on X as far back as April 2025. Publications like WebProNews report that the reorganization is partly driven by GPT-5 feedback, emphasizing reductions in sycophantic tendencies and enhancements in engagement through advanced reasoning and safety features.
Implications for Ethical AI and User Experience
The merger could accelerate OpenAI’s ability to iterate on model behaviors, potentially leading to more context-aware interactions that better align with ethical standards. As detailed in a BitcoinWorld analysis, this realignment is crucial for influencing user experience and ethical frameworks, especially in sectors like cryptocurrency and blockchain where AI’s role is expanding. The team’s past work on models since GPT-4 has reduced harmful outputs by significant margins, with one X post claiming a 78% drop in certain biases, though such figures remain unverified by OpenAI.
Critics, however, worry that consolidating teams might dilute specialized focus on nuanced issues like bias management. Industry observers on X have debated the “sycophancy trap,” where tuning for truthfulness risks alienating casual users who prefer comforting responses, creating a game-theory dilemma for developers.
Leadership Shifts and Future Directions
Lilian Weng’s departure from the team’s leadership marks a notable transition; her expertise in AI safety has been pivotal, and her new project remains undisclosed. An OpenAI spokesperson confirmed to StartupNews.fyi that the move is designed to foster closer collaboration, positioning the company to lead in the evolution of human-AI dialogue.
Looking ahead, this reorganization signals OpenAI’s bet on integrated teams to handle the complexities of next-generation AI. With GPT-5 already incorporating subtle warmth adjustments based on internal tests, as per OpenAI’s X updates, the focus is on genuine, professional engagement that avoids pitfalls like ungrounded praise. For industry insiders, this could mean faster deployment of features that make AI feel more human-like, while upholding values of honesty and utility.
Broader Industry Ripple Effects
The changes at OpenAI are likely to influence competitors, as the quest for balanced AI personalities intensifies. Reports from NewsBytes and Bitget News emphasize how this restructuring enhances post-training interactions, potentially setting new benchmarks for AI ethics. User sentiment on X, including discussions of model selectors and capacity limits, suggests ongoing refinements will be key to retaining loyalty.
Ultimately, as OpenAI navigates these internal shifts, the emphasis on personality could redefine how we perceive and interact with AI, blending technical prowess with empathetic design in ways that resonate across applications from everyday queries to complex problem-solving.