Tools & Platforms
The classical key to the AI revolution

Since the time of Socrates, philosophy has operated as a small and badly funded intellectual insurgency against the hype and nonsense that inevitably afflicts our public discourse. Today, there is no greater prompt for hype and nonsense than AI technology. One important variant of AI hype centres on democracy. On a pessimistic view, prominent in circles such as the World Economic Forum, AI threatens the annihilation of democracy. AI algorithms supercharge the spread of misinformation and disinformation online. They intensify political polarisation through micro-targeting and social-media echo chambers. All this corrodes both the epistemic ecosystem and the ethos of citizen solidarity upon which democracy relies.
This widely credited narrative, in which AI subverts our fellow citizens’ rational faculties, beguiling them into voting for authoritarian populist leaders, seems overblown. It ignores the powerful empirical evidence that people are, if anything, unduly sceptical towards supposed information that conflicts with their pre-existing beliefs. It also sidelines plausible explanations of why people vote as they do – that they have different values and priorities arising from their distinctive personalities, life trajectories, socio-economic status, and so on.
Indeed, salient facts in seeking to explain the populist backlash include the following: in the US between 1980 and 2014 national income grew by 61 per cent, but the income of the poorest half grew by only one per cent; the income of the top 10 per cent grew by 121 per cent, and the income of the top one per cent tripled. Similarly, there is no correlation in the US between majority political opinion and policy outcomes once one controls for the preferences of the richest 10 per cent.
In the final analysis, the worry that AI is anti-democratic in this way may itself be an anti-democratic sentiment. After all, vital to a healthy democratic culture is the widespread attitude that one’s fellow citizens possess the threshold capacities that render them worthy of democratic political participation.
There is another variant of AI hype that runs in the opposite direction, to the effect that AI systems can radically enhance democracy by standing in for deliberation and decision-making by the citizenry as a whole. On this view, the AI system would amass the relevant preferences of all citizens and then identify the option that enjoys the greatest overall level of support.
This automated vision of democracy is deeply flawed, on three grounds. First, democracy is not about aggregating the preferences and values of citizens, but about eliciting their considered judgments about the common good. Second, even if an AI system could identify these judgments, nothing ensures that they had been formed through a process of deliberation, including informed debate among citizens situated as free and equal. Finally, democracy is not just a seminar-room discussion writ large, but a process of collective decision. This brings out the individual and collective agency, and corresponding accountability, that is an essential part of democratic decision-making.
Still, not everything to be said about the relationship between AI and democracy falls into the category of hype. I will now try to substantiate this hypothesis by advancing five propositions:
First proposition: classical democracy is a highly participatory form of government, yet one that is intermediate between crude majoritarianism and liberal democracy.
Democracy answers the question: who should rule? Its answer: all of the citizens as free and equal members of the polity. Democracy is collective self-government, in which free and equal citizens participate in political deliberation and decision-making on matters affecting the common good. The rationale for democracy is twofold: (1) efficacy, in that democracy does better than other systems of government in delivering vital common goods such as peace, prosperity equitably distributed, and the protection of rights; and (2) empowerment, in that democracy affirms the dignity of each citizen by enabling them to participate as free and equal members in collective self-government.
Although democracy is a majoritarian decision-making procedure, it is much more demanding than crude majoritarianism, because it has exacting pre-conditions. Citizens must enjoy certain rights, such as the right to speak freely on political matters; there must be restrictions on economic and other inequalities that prevent some citizens, through poverty or ill-health, from being able to participate in politics; and there must be access to education and information so that deliberation can be well-informed, among other conditions.
As Josiah Ober has argued in his book, Demopolis, the conditions on democracy do not go so far as to necessitate that a democracy is liberal in character: democracy is historically and conceptually distinct from liberalism. This is the lesson Ober draws from taking ancient Athenian democracy as his historical paradigm. A society can be democratic while pursuing illiberal policies such as imposing capital punishment, banning abortion, establishing a state religion, and outlawing offensive speech. Even if liberal democracy is superior to democracy, democracy itself is not inherently liberal.
One of the virtues of this classical conception of democracy is that it stands as a counterweight to the two main anti-democratic forces in contemporary Western societies: authoritarian populism, on the one hand, and technocracy, on the other.
Against the populists, the classical democrat insists that simple majoritarianism is not enough, and that all citizens must be included in democratic decision-making, not just a subset – ‘the real people’ – in the exclusionary language of populism. Against the technocrats, they insist that political options cannot be pre-filtered by an unaccountable elite – such as central bankers or supreme court judges. Experts are essential, of course, to inform democratic decision-making, but as the saying goes, experts must be on tap, not on top.
Second proposition: The discourse around AI poses an ideological threat to the humanistic ethos needed to sustain democracy.
There are two inter-related threats under this heading: to our self-understanding and to our values.
The first threat is an impoverished understanding of our human capabilities, one that erases profound differences between humans and AI systems. All of the big tech corporations claim they are pursuing the goal of Artificial General Intelligence (AGI). This is the idea of an AI system that can simulate human cognitive capacities across the board, from making a medical diagnosis to writing a poem. But the very pursuit of that goal will create powerful incentives for them to pour their vast resources into propaganda campaigns that blur the distinction between human and AI capabilities, with anthropomorphising talk of AI systems being ‘assistants’, ‘friends’, and, ultimately, fully adequate or even superior replacements for humans.
In reality, however, human nature differs from machines in fundamental ways: large language models, like ChatGPT, which operate by identifying statistical patterns in a vast corpus of data, lack anything like the human capacity for genuine understanding of the world, for communication, and for rational autonomous choice. The very notion of intelligence – even potentially ‘superintelligence’ – exemplified by AI is narrowly instrumentalist, not one that can grasp worthwhile values and the legitimate means of pursuing them.
Why does it matter that we retain a grasp of our distinctive nature as human beings? Because of the Aristotelian idea that what a good life for a human being consists in is precisely the exercise of our distinctive human capabilities. And it is the promotion of the good of all that is the ultimate objective of political community. Moreover, democratic politics itself is a key domain in which we exercise these capabilities. If we lose a vivid sense of our distinctive human capacities – if we lose our species confidence, as it were – this will lead to an impoverished ethics and a hollowing-out of our conception of the dignity of democratic citizens. And here, of course, the association between trans-humanist ideology and Silicon Valley circles already serves as a warning of AI’s anti-humanistic agenda.
Another ideological threat posed by AI is to our values themselves. Proposals for using AI systems are overwhelmingly advanced on the basis that they will produce valuable outcomes – such as cancer diagnoses and hiring decisions – that are at least as good as those produced by humans, but generated far more efficiently. But this relentless focus on valuable outcomes ignores important values that are centred on the process through which outcomes are achieved. It ignores the wisdom in Constantine Cavafy’s poem Ithaka that the journey matters inherently, not just reaching the final destination.
For example: we naturally want correct legal decisions from judges. But we should also want decisions made for reasons that justify them. Lex Machina is an AI system that can predict the outcome of US patent litigation as accurately as a top patent lawyer, so why not use it as a judge? An immediate problem is that it reaches its predictions not on the basis of the law, but on extraneous factors such as the amount of money involved in the case and the names of the judge and the lawyers. This is like the AI system that succeeded in distinguishing huskies from wolves, but used the presence of snow in the picture as the determining factor: right decision, wrong reason.
We also want a decision for which the judge can be held personally accountable, given the potentially catastrophic effects on life, liberty, and wealth. An AI system, lacking autonomy, cannot be held responsible in the same way. Finally, we value the fact that a judge can empathise with us in the plight we confront. So even if the AI system can, say, pass a merciful sentence, it won’t reflect the empathetic response that a human judge can have to another’s situation. The sentence may be identical, but it will convey a different message.
Third proposition: AI technology poses a power-based threat to democracy, insofar as it facilitates a massive shift of power away from states and towards corporations.
New technologies have tilted the balance of power from governments to corporations with a resulting reduction in the democratic control over the exercise of such power.
The digital revolution has enabled corporations increasingly to usurp governance functions associated with the democratic state. This power shift, which Marietje Schaake in her recent book calls the ‘tech coup’, is not only disturbing in itself as an anti-democratic development, it is further aggravated by the fact that the corporations often discharge these functions in ways that are non-transparent and generally not responsive to other relevant values that bear on the exercise of governance functions, such as the rule of law and human rights.
The usurpation of governance functions by tech companies has two broad dimensions. First, corporations are doing things that fall within the scope of governmental power, such as control of vast quantities of digital data, maintaining digital infrastructure, ensuring national security, playing a role in law enforcement, supervising elections, and policing borders. Second, corporations are doing these things through the de facto exercise of governing power, by setting standards to which others are subjected. The failure of the state to take up the governance functions, or to regulate them effectively, often means that corporations are setting standards that should be set by democratically accountable governments, thereby arrogating to themselves vast amounts of ‘de facto governing power’ in contravention of democratic principles. Think here of content moderation online by companies such as Meta or the creation of risk-assessment digital tools for use in bail decisions.
In short, I have described two anti-democratic shifts wrought by the advent of the AI revolution. An anti-humanist ideological shift that undermines our democratic ethos; and a power shift, from governments to corporations. And the two are inter-related: the more we embrace the anti-democratic ideology associated with AI, the more likely we are to tolerate the incursion of AI technology and corporations into more and more areas of human life; and the more tech corporations augment their power, the more that they are able to embed the anti-democratic ideology into our social ethos.
Fourth proposition: The classical conception of democracy, with its strong emphasis on citizen participation, can help us resist both of these threats, and pro-democratic AI tools can be an important part of the process of revitalising democracy.
One element of this revitalisation involves departing from a heavy reliance on representative models of democracy, which are today the object of considerable popular dissatisfaction. As we know from ancient Athens, democracy does not have to be conceived in primarily representative terms: there can be direct participation by the citizenry as a whole, and there can be other forms of representation, such as sortition or vote delegation, that do not necessarily involve the existence of a class of professional politicians liable to capture by powerful corporate interests.
The idea of a radically participatory democracy has always faced serious objections about the feasibility of scaling it up to the level of modern states with their vast and pluralistic citizenries. Would citizens have sufficient opportunity, or even the desire, seriously to engage with political deliberation given their other commitments in a pluralistic society? Would they be sufficiently informed? And how might they engage in genuinely respectful and productive deliberation given the huge number of citizens involved?
Part of the answer here is that, properly directed, AI tools can play an important role in shaping a more radically participatory democratic process by performing such functions as: tailoring information to the distinctive learning styles of citizens; identifying those most directly affected by a given political question and facilitating moderated deliberation among them; where the impact of a decision is more diffuse, and ‘face to face’ deliberation and decision-making less feasible, AI could help to select an agenda-setting and advisory council by identifying random samples of larger populations; and identifying and circulating alternative proposals and measuring the depth (intensity) and breadth (numbers) of audience responses to each, driving towards a decisive vote on a measure likely to gain wide support.
This is not mere speculation. Taiwan’s government, for example, has used the Pol.is platform to demonstrate the potential value of digital tools for enabling mass participatory democracy.
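One of the functions mentioned above – selecting an agenda-setting and advisory council by identifying random samples of larger populations – is, at its core, stratified sortition. The following is a minimal, hedged sketch of that mechanism; the population data, strata, and function names are invented for illustration, not drawn from any real platform.

```python
import random
from collections import defaultdict

def stratified_sortition(citizens, stratum_of, council_size, seed=None):
    """Select a council by lot, giving each stratum a number of seats
    proportional to its share of the population (rounded)."""
    rng = random.Random(seed)
    groups = defaultdict(list)
    for citizen in citizens:
        groups[stratum_of(citizen)].append(citizen)
    total = len(citizens)
    council = []
    for stratum in sorted(groups):
        members = groups[stratum]
        seats = round(council_size * len(members) / total)
        # Draw each stratum's seats at random from within that stratum.
        council.extend(rng.sample(members, min(seats, len(members))))
    return council

# Hypothetical population: 9,000 citizens split evenly across three regions.
population = [{"id": i, "region": ["north", "centre", "south"][i % 3]}
              for i in range(9000)]
council = stratified_sortition(population, lambda c: c["region"],
                               council_size=30, seed=1)
```

Real sortition schemes are considerably more sophisticated – stratifying across several attributes at once and correcting for rounding drift – but the core mechanism is weighted random selection of this kind.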
A second element in the revitalisation of democracy is to ensure that the process of democratisation is not confined to formal law-making but ranges more broadly. In particular, it should embrace decision-making by tech corporations. If tech corporations are increasingly performing governance roles, shouldn’t they do so through processes that involve greater democratic accountability?
Meta, for example, has established an Oversight Board composed of around two dozen members, including human rights experts. But why should a tiny and homogeneous expert body specify corporate human rights responsibilities that affect users of Meta’s online platforms worldwide? Why not a broader range of stakeholders, enabled by AI deliberative tools, engaged in democratic deliberation and decision-making?
It is doubtful that a more democratic body would have acquiesced in the restriction of the Oversight Board’s remit to scrutinising compliance with the right of free speech, as opposed to the many other human rights affected by Meta’s activity, such as the rights of its employees engaged in content moderation to just and favourable conditions of work. Many of these employees, after all, are in the Global South, working for very low pay, and subjected to psychologically disturbing images of violence and degradation on a daily basis as part of their work duties.
Moreover, given the considerable latitude for discretion involved in specifying corporate human-rights obligations, and also the fact that people in different parts of the world may have legitimate reasons for specifying them in different ways in line with their cultural and other circumstances, this seems more like a legislative task, in which those affected by the norms laid down should have an active role in shaping them, rather than a strictly adjudicative task of applying pre-existing norms.
Fifth proposition: the classical conception of democracy can form a principled basis for the global regulations we need in the age of AI.
Someone might ask: how can democracy help with the global regulation that AI urgently needs, given that there is no unified democratic citizenry at the global level?
One answer is provided by Anu Bradford in her book Digital Empires. Bradford predicts that the US will increasingly move away from its market-based approach to digital regulation and towards the EU’s rights-based approach. One factor driving this shift is the need to counterbalance the growing number of states adopting China’s state-driven regulatory template, by forming a coalition of liberal democratic states.
This analysis ignores the fact that there can be frameworks for co-operating on global regulation that do not require comprehensive buy-in to the US market-driven, the EU rights-based, or the Chinese statist regulatory templates. In particular, contrary to Bradford, regulation does not need to be grounded in liberal democracy writ large. It is here, I think, that the classical conception of democracy, which does not equate democracy with liberal democracy, comes into its own. Its practical benefits could play out at multiple levels.
It could be the focus of a consensus grounded in democratic values between the US and the EU, without having to adjudicate their often radical differences over matters such as free speech rights or the extent to which markets should be regulated. Within the EU itself, it could encompass those states that claim to be democratic but reject liberalism, such as Hungary. It could likewise extend to India, a leading country on the AI scene and the world’s largest democracy, albeit one with illiberal attitudes on matters such as privacy rights and freedom of religion.
Consider also China, the other AI superpower alongside the US. Here, one might reasonably bet, along with the leading Chinese philosopher, Jiwei Ci, that the increasingly democratic character of Chinese society will generate escalating pressure for democratic political participation. This will eventually spark a crisis for the Chinese Communist Party, confronting it with the choice of either embarking on political democratisation or else engaging in costly forms of repression with dubious prospects of long-term success.
If this potentially explosive democratic expectation exists, we hardly need think of it as also encompassing a demand for liberalism. Similar observations apply to Islamic societies.
To conclude: the founder of DeepMind, Demis Hassabis, like me a member of the Greek diaspora, summed up his mantra as: ‘Solve AI then use it to solve everything else.’ Ancient Athens inspires an alternative mantra: ‘Solve democracy then use it to solve everything else, including AI.’
‘Please join the Tesla silicon team if you want to…’: Elon Musk offers job as he announces ‘epic’ AI chip

Elon Musk has announced a major step forward for Tesla‘s chip development, confirming a ‘great design review’ for the company’s AI5 chip. The CEO made the announcement on X, signaling Tesla’s intensified push into custom semiconductors amid fierce global competition, and also offered jobs to engineers willing to join Tesla’s silicon team.

According to Musk, the AI5 chip is set to be ‘epic,’ and the upcoming AI6 has a ‘shot at being the best AI chip by far.’ “Just had a great design review today with the Tesla AI5 chip design team! This is going to be an epic chip. And AI6 to follow has a shot at being the best AI chip by far,” Musk said in a post on X.

Musk revealed that Tesla’s silicon strategy has been streamlined: the company is moving from developing two separate chip architectures to focusing all of its talent on just one. “Switching from doing 2 chip architectures to 1 means all our silicon talent is focused on making 1 incredible chip. No-brainer in retrospect,” he wrote.
Job at Tesla chipmaking team
In a call for new talent, Musk invited engineers to join the Tesla silicon team, emphasising the critical nature of their work. He noted that they would be working on chips that “save lives” where “milliseconds matter.”

Earlier this year, Tesla signed a major chip supply agreement with Samsung Electronics, reportedly valued at $16.5 billion. The deal is set to run through the end of 2033. Musk confirmed the partnership, stating that Samsung has agreed to allow “full customisation of Tesla-designed chips.” He also revealed that Samsung’s newest fabrication plant in Texas will be dedicated to producing Tesla’s next-generation AI6 chipset.

This contract is a significant win for Samsung, which has reportedly been facing financial struggles and stiff competition in the chip manufacturing market.
Why AI’s greatest challenge isn’t chips, but people

There are currently more than 2,150 artificial intelligence companies operating in Israel, about 200 of which are branches of international firms. More than half of them focus on enterprise software, healthcare, fintech, and e-commerce. Compared to the broader high-tech sector, these companies tend to be more mature, raise more capital, and operate at later stages of the corporate lifecycle. Yet in the midst of the global AI revolution, Israel faces a strategic obstacle: a shortage of skilled and experienced talent capable of pushing the sector forward.
According to Dr. Ziv Katzir, director of the TELEM AI program at the Israel Innovation Authority, the challenge is global in scope. “This is not just an Israeli phenomenon,” Katzir says. “If someone wants to build unique intellectual property and create an AI company that grows into something big and significant, they need very special people. People with master’s degrees, preferably doctorates, plus years of experience. This journey takes 10–12 years, followed by another three years of work. There are no shortcuts. Demand for these people is growing worldwide, but there is no faster path to producing them.”
Decline in overall demand, shift to experienced workers
A new report from the Innovation Authority, prepared with the Samuel Neaman Institute for National Policy Research, shows a decline in the number of open AI positions, from 3,400 in 2023 to about 2,434 today. But the numbers mask an important trend: companies are no longer hiring juniors. Instead, they are targeting experienced experts.
Today, 65% of demand is for workers with at least three years of experience, compared to 44% in 2023. Meanwhile, demand for entry-level workers dropped from 53% to 31%. “Deep knowledge and experience are the key to success in the new world we are entering,” says Katzir. The greatest demand is for master’s degree holders with 5–6 years of experience.
The report examined companies developing core AI technologies such as image, audio, and text processing. The most sought-after roles are data scientists (40% of demand), followed by data engineers (29%) and ML-Ops specialists. Notably, 20% of roles fall into undefined and emerging categories, a reflection of how quickly the field is evolving.
“The job titles haven’t settled yet,” Katzir explains. “What we see in LinkedIn ads today looks different from a year ago, and will likely change again in the next two years.” In 2023, only 6% of advertised roles were undefined.
Academic pipeline falls short
Each year, fewer than 1,000 students in Israel graduate with advanced research degrees in fields like computer science, electrical engineering, and mathematics. Only 30%–40% enter the AI industry, roughly 300–400 new workers annually, far below demand. Within two years, the need for AI specialists is expected to reach 3,628 positions, leaving a widening gap between supply and demand.
“The current demand equals two to three full years of new graduates, and within two years, it will rise to four or five,” Katzir warns. “You can’t fold time. You can’t make 12 years into three. Long-term solutions are essential, but interim steps are needed as well.”
One clear trend is the renewed importance of formal education. A few years ago, the relevance of a bachelor’s degree for entering high-tech was questioned. Today, AI companies increasingly require advanced degrees and practical experience. “The industry has matured from a point where anyone calling themselves an AI expert was accepted as such,” Katzir says. “Now, deep knowledge and experience are recognized as the real competitive advantage.”
Efforts to bridge the gap
The Innovation Authority is pursuing several measures:
- Expanding the talent base – Scholarships for advanced degrees and a unique IDF program that combines military service with master’s research.
- Converting scientists from adjacent fields – Recruiting physics, chemistry, and math graduates and training them as AI researchers. “An AI researcher is first and foremost a scientist, not just a developer,” Katzir notes.
- Bringing in experts from abroad – A pilot program launched in early 2025 to attract several hundred immigrants, returning Israelis, and foreign specialists.
However, importing talent faces limitations. Only 41% of companies say they are open to it, and 27% report barriers such as security clearance restrictions, cultural fit, regulatory hurdles, and time zone challenges.
The shortage of AI talent is not a passing issue but a challenge for years to come. The pace of technological progress, combined with the education and training bottleneck, raises questions about Israel’s ability to sustain leadership in the field. Still, Katzir remains cautiously optimistic.
“We are in a marathon, not a sprint,” he concludes. “There won’t be three times as many researchers here in two days, but Israel’s starting point is strong. If we continue to invest strategically, we can maintain Israel’s role as a global leader in AI.”