AI Insights
Rethinking the AI Race | The Regulatory Review

Openness in AI models is not the same as freedom.
In 2016, Noam Chomsky, the father of modern linguistics, published Who Rules the World?, a book about the United States’ dominance in global affairs. Today, policymakers such as U.S. President Donald J. Trump argue that whoever wins the artificial intelligence (AI) race will rule the world, driven by a relentless, borderless competition for technological supremacy. One strategy gaining traction is open-source AI. But is it advisable? The short answer, I believe, is no.
Closed-source and open-source represent the two main paradigms in software, and AI software is no exception. While closed-source refers to proprietary software whose use is restricted, open-source software makes its underlying source code publicly available, typically allowing unrestricted use, including the ability to modify the code and develop new applications.
AI is impacting virtually every industry, and AI startups have proliferated nonstop in recent years. OpenAI secured a multi-billion-dollar investment from Microsoft, while Anthropic has attracted significant investments from Amazon and Google. These companies are currently leading the AI race with closed-source models, a strategy aimed at maintaining proprietary control and addressing safety concerns.
But open-source models have consistently driven innovation and competition in software. Linux, one of the most successful open-source operating systems ever, is pivotal in the computer industry. Google’s Android, which is used in approximately 70 percent of smartphones worldwide, runs on Linux, as do Amazon Web Services, Microsoft Azure, and all of the world’s top 500 supercomputers. The success story of open-source software naturally fuels enthusiasm for open-source AI. And companies such as Meta have seized on that enthusiasm, developing open-source AI initiatives that promise to democratize and grow AI through a joint effort.
Mark Zuckerberg, in promoting an open-source model for AI, recalled the story of the open-source Linux operating system, which became “the industry standard foundation for both cloud computing and the operating systems that run most mobile devices—and we all benefit from superior products because of it.”
But the story of Linux is quite different from that of Meta’s “open-source” AI project, Llama. First, no universally accepted definition of open-source AI exists. Second, Linux had no “Big Tech” corporation behind it. Its success was made possible by the free software movement, led by American activist and programmer Richard Stallman, who created the GNU General Public License (GPL) to ensure software freedom. The GPL allowed for the free distribution and collaborative development of essential software, most notably the open-source Linux operating system, developed by Finnish programmer Linus Torvalds. Linux has become the foundation for numerous open-source operating systems, built by a global community that has fostered a culture of openness, decentralization, and user control. Llama, by contrast, is not distributed under the GPL.
Under the Llama 4 licensing agreement, entities with more than 700 million monthly active users in the preceding calendar month must obtain a license from Meta, “which Meta may grant to you in its sole discretion” before using the model. Moreover, algorithms powering large AI models rely on vast amounts of data to function effectively. Meta, however, does not make its training data publicly available.
Thus, can we really call it open source?
Most importantly, AI presents fundamentally different and more complex challenges than traditional software, with the primary concern being safety. Traditional algorithms are predictable: given an input, we know exactly what output to expect. Consider the Euclidean algorithm, which provides an efficient way of computing the greatest common divisor of two integers. Conversely, AI algorithms are typically unpredictable because they leverage vast amounts of data to build models that are becoming increasingly sophisticated.
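To make the contrast concrete, consider a minimal sketch of the Euclidean algorithm in Python (an illustrative example added here, not drawn from the article):

def gcd(a: int, b: int) -> int:
    # Deterministic: the same inputs always produce the same output,
    # and every intermediate step can be traced and verified by hand.
    while b:
        a, b = b, a % b
    return abs(a)

print(gcd(48, 18))  # prints 6, on every run and on every machine

Nothing here depends on training data or statistical inference; the algorithm’s behavior is fully specified by a few lines of logic. A large language model offers no comparable step-by-step account of why it produced a given output.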
Deep learning algorithms, which underlie large language models such as ChatGPT and other well-known AI applications, rely on increasingly complex structures that make AI outputs virtually impossible to interpret or explain. Large language models are performing increasingly well, but would you trust something that you cannot fully interpret and understand? Open-source AI, rather than offering a solution, may be amplifying the problem. Although it is often seen as a tool to promote democratization and technological progress, open source in AI increasingly resembles a Ferrari engine with no brakes.
Like cars, computers and software are powerful technologies. But as with any technology, AI can cause harm if misused or deployed without a proper understanding of the risks. Currently, we do not know what AI can and cannot do. Competition is important, and open-source software has been a key driver of technological progress, providing the foundation for widely used technologies such as Android smartphones and web infrastructure. It has been, and continues to be, a key paradigm for competition, especially in digital markets.
Is AI different because we do not know how to stop this technology if required? Free speech, free society, and free software are all appealing concepts, but appealing concepts alone are not enough. In the 18th century, French philosopher Baron de Montesquieu argued that “Liberty is the right to do everything the law permits.” Rather than promoting openness and competition at any cost in order to rule the world, liberty in AI seems to require a calibrated legal framework that balances innovation and safety.
AI Insights
AI’s Baby Bonus? | American Enterprise Institute

It seems humanity is running out of children faster than expected. Fertility rates are collapsing around the world, often decades ahead of United Nations projections. Turkey’s fell to 1.48 last year—a level the UN thought would not arrive until 2100—while Bogotá’s is now below Tokyo’s. Even India, once assumed to prop up global demographics, has dipped under replacement. According to a new piece in The Economist, the world’s population, once projected to crest at 10.3 billion in 2084, may instead peak in the 2050s below nine billion before declining. (Among those experts mentioned, by the way, is Jesús Fernández-Villaverde, an economist at the University of Pennsylvania and visiting AEI scholar.)
From “Humanity will shrink, far sooner than you think” in the most recent issue: “At that point, the world’s population will start to shrink, something it has not done since the 14th century, when the Black Death wiped out perhaps a fifth of humanity.”
This demographic crunch has defied policymakers’ efforts. Child allowances, flexible work schemes, and subsidized daycare have barely budged birth rates. For its part, the UN continues to assume fertility will stabilize or rebound. But a demographer quoted by the magazine calls that “wishful thinking,” and the opinion is hardly an outlier.
See if you find the UN assumption persuasive:
It is indeed possible to imagine that fertility might recover in some countries. It has done so before, rising in the early 2000s in the United States and much of northern Europe as women who had delayed having children got round to it. But it is far from clear that the world is destined to follow this example, and anyway, birth rates in most of the places that seemed fecund are declining again. They have fallen by a fifth in Nordic countries since 2010.
John Wilmoth of the United Nations Population Division explains one rationale for the idea that fertility rates will rebound: “an expectation of continuing social progress towards gender equality and women’s empowerment”. If the harm to women’s careers and finances that comes from having children were erased, fertility might rise. But the record of women’s empowerment thus far around the world is that it leads to lower fertility rates. It is not “an air-tight case”, concedes Mr Wilmoth.
Against this bleak backdrop, technology may be the only credible source of hope. Zoom boss Eric Yuan recently joined Bill Gates, Nvidia’s Jensen Huang, and JPMorgan’s Jamie Dimon in predicting shorter workweeks as advances in artificial intelligence boost worker productivity. The optimistic scenario goes like this: As digital assistants and code-writing bots shoulder more of the office load, employees reclaim hours for home life. Robot nannies and AI tutors lighten the costs and stresses of parenting, especially for dual-income households.
History hints at what could follow. Before the Industrial Revolution, wealth and fertility went hand-in-hand. That relationship flipped when economies modernized. Education became compulsory, child labor fell out of favor, and middle- and upper-class families invested heavily in fewer children’s education and well-being.
But today, wealthier Americans are having more children, treating them as the ultimate luxury good. As AI-driven abundance spreads more broadly, perhaps resulting in the shorter workweeks those CEOs are talking about, larger families may once again be considered an attainable aspiration for regular folks rather than an elite indulgence. (Fingers crossed, given this recent analysis from JPM: “The vast sums being spent on AI suggest that investors believe these productivity gains will ultimately materialize, but we suspect many of them have not yet done so.”)
Indeed, even a modest “baby bonus” from technology would be profound. Governments are running out of levers to pull, dials to turn, and buttons to press. AI-powered productivity may not just be the best bet for growth; it could be the only realistic chance of nudging humanity away from demographic decline. This is something for governments to think hard about when deciding how to regulate this fast-evolving technology.
AI Insights
AI’s winner-take-all effect, ‘Institutional Edge,’ episode 6 – Pensions & Investments

AI Insights
Three eastern Iowa students charged in nude AI-generated photos case

CASCADE, Iowa — Three Cascade High School students accused of creating fake nude images of other students with artificial intelligence have been charged, according to the Western Dubuque Community School District.
Iowa Public Radio reported in May that a group of students allegedly attached the victims’ headshots to images of nude bodies. School officials say they were first made aware of the images on March 25.
The school district says “any student charged as a creator or distributor of materials like those in question will not be permitted to attend school in person at Cascade Junior/Senior High School.”
The district would not release further details about the case, citing the ongoing investigation and its “legal obligation to maintain student confidentiality.”