Tools & Platforms
AI Is Not Here To Benefit Humanity. Just The Elites Who Are Behind It
Have you ever watched the movie Dune or read the book? It’s a good analogy for how stories are created, how they can lead people astray from reality, and how they get people to believe in myths.
Briefly, Paul Atreides’s mother, a main character in the story, fabricates a myth to position Paul as the rightful supreme leader, ultimately to control the population. Those who encounter this myth don’t know any better, don’t realize it’s fabricated, and become true believers. This is similar to how many today don’t know the true history behind “Artificial Intelligence,” or what AI is or isn’t, so they fall for the nice stories crafted by a few very unremarkable, hubris-filled white males who believe only they are the rightful decision-makers for humanity’s future.
Similarly, OpenAI CEO Sam Altman, like Paul, knows his story is a fabrication, yet he still gets caught up in the created myth — power becomes intoxicating, insatiable, and the line between reality and science fiction blurs. The myth spreads as it’s repeated endlessly to the masses, who become true believers. This is essentially what is happening in Silicon Valley today and spreading throughout the world: a self-declared AI aristocracy built on a constructed myth becomes a new AI religion.
“Successful people create companies. More successful people create countries. The most successful people create religions.”
The above quote is attributed to Qi Lu. But Sam Altman, the high priest of the new AI religion, said this: “It got me thinking, though — the most successful founders do not set out to create companies. They are on a mission to create something closer to a religion, and at some point, it turns out that forming a company is the easiest way to do so.”
Therefore, when Altman and the rest of the Tech Bro AI aristocracy talk about AGI (artificial general intelligence), it is not based on any scientific evidence or discovery. It is simply a made-up term, born of the quasi-religious, hypomanic belief of a tiny group of determined, awkward white men engaging in constructive delusion.
Religion is about power and control, so the new AI religion allows its high priests and acolytes to thrive off the ignorance of its followers: the intellectually lazy. And for Sam Altman and OpenAI, the aristocracy, this is their promised land. AI, too, is fundamentally about power and control for those who have a vision of what they want the world to look like, with themselves at the top, enjoying the wealth of their manifestation.
“Never doubt that a small group of thoughtful, committed citizens can change the world; indeed, it’s the only thing that ever has,” wrote Margaret Mead.
In her recent best-selling book, Empire of AI — Dreams and Nightmares in Sam Altman’s OpenAI, journalist and author Karen Hao explains how Sam Altman, Elon Musk, and a few others formed OpenAI with the “singular obsession: to be the first to reach artificial general intelligence, to make it in their own image.”
OpenAI was founded under the “altruistic façade” of a “non-profit” by a small group of Silicon Valley elites determined to shape the future of AI and humanity according to a very narrow view of the world and what it should be.
During the Gilded Age in America, from the late 1870s to the late 1890s, the “robber barons” of the time likewise believed that, because of their industrial achievements and ambitions for the future, they had earned the right to make all the important economic decisions. The rest of society would benefit from their brilliance as those benefits would “trickle down.” And lest we forget, former President Ronald Reagan’s trickle-down economics in the 1980s was the same idea.
So today, the Tech Bros — the few — believe that, due to their self-declared superiority, they have the divine right to build the future as they see fit, and the benefits will “trickle down.”
Technologies are not inevitable; they progress when groups with capital and power decide they are worth pursuing and investing in. Yet these technologies rarely deliver widespread prosperity; instead, they come to benefit a narrow elite and serve its equally narrow agendas.
As an example, the invention of the cotton gin in the 1790s transformed the American South’s plantation economy into the world’s largest exporter of cotton, significantly boosting the South’s economic growth and prosperity. It generated enormous economic gains for the white landowners involved in the cotton industry, with enslaved Black people doing the work. But it never served the interests of the enslaved; they were forced to work longer hours to extract maximum profit for rich plantation owners, further intensifying the economy of slavery and the brutal dehumanization of Black bodies and minds. The planters did not want to give up the economic advantages the technology created for them, and they went as far as fighting a Civil War to keep the slave-plantation economy they had built for themselves and nobody else.
Therefore, the promise of an AGI utopia, a technology that benefits entire societies as OpenAI says it will, is a lie! History tells us that things, more often than not, don’t materialize that way; usually, they only truly benefit a narrow elite. (Even the Second Industrial Revolution, which occurred in Britain between 1870 and 1914, ushered in a new urban working poor concentrated in the city of London.)
Accordingly, the Silicon Valley AI revolution is elite- and agenda-driven; times and technologies may change, but human nature, from the plantation-owner elites of the South to the feudal systems of the Middle Ages, stays the same. Like the cotton gin, new technology is typically developed to serve the economic interests of the elites. Regular folks, and the most vulnerable, seldom, if ever, truly benefit.
Colonialism 3.0
Today, it’s not about the theft of sugar, cotton, minerals, precious metals, and other natural resources, but about the free extraction of massive water resources to run Google’s huge data centre in Chile, for example, or about paying sweatshop wages to workers in Kenya for the data inputs that feed OpenAI’s ever-growing large language models.
The demand for land, water, and energy to keep the supercomputers behind these LLMs running 24/7 is staggering.
We must also acknowledge the theft of artists’ and writers’ work: OpenAI blatantly ignores copyright law and fair compensation, along with the countless individuals whose experiences and data, shared online, are used to train these models. We are therefore witnessing colonialism 3.0, with the new American AI empire becoming the main exploiter and harmer of the planet, people, and humanity.
Busting the Myth
With all the hype surrounding AI, it might seem like a recent breakthrough in technology; however, the origins of what we call “AI” date back to 1950–1956, the period when Alan Turing published “Computing Machinery and Intelligence,” which introduced a test for machine intelligence called The Imitation Game. You might have seen the movie.
In 1955, John McCarthy coined the term “artificial intelligence” in his proposal for a workshop at Dartmouth, partly to attract more attention and funding to his existing research; the workshop itself, the first gathering held under that name, took place in 1956. He explicitly stated years later, “I invented the term artificial intelligence to get money for a summer study.” Effectively, marketing and capital-raising needs are behind the name AI, much as they are today with OpenAI and AGI.
So we still lack a definitive scientific consensus on what ‘artificial intelligence’ is, relative to authentic human general intelligence. Both AI and AGI remain undefined.
Consequently, when people refer to AI today, they are typically talking about a broad category of things they don’t fully understand. But the term is catchy and seems reasonable enough, a suitable label for technologies that mimic various human behaviours or tasks. And if enough people use it over and over again, the uninformed and wilfully ignorant masses begin to adopt it, like a new religion.
Nevertheless, these systems operate very differently from how humans think, reason, and behave. Terms like machine learning, deep learning, and neural networks are also invented labels that support nice theories but are devoid of any scientific bedrock to stand on; a group of guys creates a belief and calls it whatever suits their agenda. This is the long and short of AGI.
So when AI advocates mention AGI, it is mostly a rebranding of AI. This is what major AI companies like OpenAI, Anthropic, Google, and Meta are doing: marketing and rebranding, because they all effectively produce the same thing, just different shades of pale grey.
Math Over Myths
On June 25, 2024, Goldman Sachs published a 32-page report, “Gen AI: Too Much Spend, Too Little Benefit?” In summary, tech giants and others are set to spend over $1 trillion on AI capital expenditures in the coming years, with little to show for it so far.
Jim Covello, Head of Global Equity Research at Goldman Sachs, is highly skeptical, saying, “AI technology is exceptionally expensive, and to justify those costs, the technology must be able to solve complex problems, which it isn’t designed to do.” He adds that those costs “may not decline as many expect.”
Daron Acemoglu, Institute Professor at MIT and 2024 Nobel Laureate, adds, “Given the focus and architecture of generative AI technology today… truly transformative changes won’t happen quickly and few — if any — will likely occur within the next 10 years.” Acemoglu goes on to say that AI might automate only 5% of tasks and add just 1% to global GDP over the next decade.
He says that AI’s potential is less clear than the Internet’s was, for example, and that human judgment trumps algorithms; he further challenges business leaders to innovate to enhance worker skills and productivity with AI, rather than treating it as a one-dimensional cost-cutter or a replacement for workers.
AI performs tasks we humans perform and recounts already established knowledge, he says. But reality is much more complex; it involves interactions and much else grounded in tacit knowledge, in matching your contextual understanding of a problem with the specific tasks at hand. Most decisions require judgment, social interaction, and social intelligence, all of which are beyond the capabilities of AI.
“Many AI-based products use neural networks to infer patterns and rules from large volumes of data. But what many politicians do not understand is that simply adding a neural network to a problem does not automatically create a solution,” said the World Economic Forum (WEF), adding that “AI has huge potential — but it won’t solve all our problems, and not every problem is best addressed by applying machine intelligence.”
How long investors and the public alike remain caught up in the myth and hype is anyone’s guess; some bubbles just take longer to burst. In the end, reality and gravity always prevail.
For those who read history, one thing has been proven over time: that our progress and prosperity depend on how we think and the choices we make about the technology we use.
Our choices matter! But if we allow others to be the arbiters of our lives, sacrificing our free will and individualism, we put our humanity at risk.
Technology has been a key driver of human progress and prosperity, which cannot be denied, but it can work against us when we allow all the major decisions to remain in the hands of a few hubris-filled men. Nevertheless, we do have agency, if we claim it with our minds and don’t let nice stories, myths, and new religions run our lives.
Tools & Platforms
In test-obsessed Korea, AI boom arrives in exams, ahead of the technology itself
July 11, 2025
SEOUL – A wave of artificial intelligence certifications has flooded the market in South Korea over the past two years.
But according to government data, most of these tests exist only on paper, and have never been used by a single person.
As of Wednesday, there were 505 privately issued AI-related certifications registered with the Korea Research Institute for Professional Education and Training, a state-funded body under the Prime Minister’s Office.
This is nearly five times the number recorded in 2022, before tools like ChatGPT captured global attention. But more than 90 percent of those certifications had zero test-takers as of late last year, the institute’s own data shows.
Many of the credentials are loosely tied to artificial intelligence in name only. Among recent additions are titles like “AI Brain Fitness Coach,” “AI Art Storybook Author,” and “AI Trainer,” which often have no connection to real AI technology.
KT’s AICE is South Korea’s only nationally accredited AI certification, offering five levels of exams that assess real-world AI understanding and skills, from block coding for elementary students to Python-based modeling for professionals. PHOTO: KT/THE KOREA HERALD
Only one of the 505 AI-related certifications — KT’s AICE exam — has received official recognition from the South Korean government. The rest have been registered by individuals, companies, or private organizations, with no independent oversight or quality control.
In 2024, just 36 of these certifications held any kind of exam. Only two had more than 1,000 people apply. Fourteen had a perfect 100 percent pass rate. And 20 were removed from the registry that same year.
For test organizers, the appeal is often financial. One popular certification that attracted around 500 candidates last year charged up to 150,000 won ($110) per person, including test fees and course materials. The content reportedly consisted of basic instructions on how to use existing tools like ChatGPT or Stable Diffusion. Some issuers even promote these credentials as qualifications to teach AI to students or the general public.
The people signing up tend to be those anxious about keeping up in an AI-driven world. A survey released this week by education firm Eduwill found that among 391 South Koreans in their 20s to 50s, 39.1 percent said they planned to earn an AI certificate to prepare for the digital future. Others (27.6 percent) said they were taking online AI courses or learning how to use automation tools like Notion AI.
Industry insiders warn that most of these certificates hold little value in the job market. A local AI industry official told The Korea Herald that these credentials are often “window dressing” for resumes.
“Most private AI certifications aren’t taken seriously by hiring managers,” he said. “Even for non-technical jobs like communications or marketing, what matters more is whether someone actually understands the AI space. That can’t be faked with a certificate.”
Tools & Platforms
Microsoft ‘Puts People First’ With $4 Billion AI Training
Microsoft is launching a $4 billion initiative to train 20 million people in artificial intelligence skills through a new global program called Elevate. The effort, announced by company President Brad Smith, is part of Microsoft’s commitment to “put people first” as AI becomes more integrated into work and education.
The tech titan described the program as a centralized platform for its technology support, donations, and training across schools, colleges, and nonprofits. Through the Elevate Academy, it plans to deliver AI literacy at scale, including offerings like “Hour of AI” and partnerships with educators and labor unions.
A unified platform for Microsoft’s AI training
Microsoft Elevate consolidates the company’s nonprofit and education initiatives into a single operational framework, replacing both its Philanthropies division and Tech for Social Impact team. It combines funding, cloud infrastructure, and AI tools to expand access to training and technology.
The $4 billion will be allocated over five years through a mix of grants, software, and computing resources for K–12 schools, community colleges, and nonprofit organizations worldwide.
Massive training effort for in-demand AI credentials
As part of its credentialing plan, Microsoft is introducing the Elevate Academy, a program to reach millions of learners in just two years. It will offer structured learning across a spectrum of competencies, from digital basics to advanced technical instruction.
Course content will run through LinkedIn Learning and GitHub, two platforms already used within professional and developer communities.
The academy serves as a centerpiece delivery channel, combining investment and infrastructure with partnerships and events to help learners earn industry-recognized certifications.
National and local partners help execute large-scale rollout
Microsoft is working with education nonprofits, labor groups, and government bodies to scale the rollout.
“Hour of AI,” developed with Code.org, introduces younger students to foundational concepts through short-form instruction. A summer skilling series extends access outside the school year.
Labor unions are also involved in workforce development, including the National Academy for AI Instruction and courses across the building trades. In Germany, Microsoft is partnering with the state of North Rhine-Westphalia to improve regional programs.
Aligning training with public and institutional standards
To support policy alignment, Microsoft is working with public agencies to integrate AI skills into national education systems. It has also partnered with the United Nations, the Vatican, and academic institutions to promote responsible use and ethical standards in AI learning.
These collaborations build on Microsoft’s long-standing involvement in digital literacy and public education initiatives, now carried forward under Elevate’s global scope.
Technology with purpose, training with intent
Microsoft maintains that technology should augment human potential rather than replace it. Elevate reflects that view by focusing on skills that amplify judgment, creativity, and contribution.
Work, the company argues, is deeply tied to identity and dignity, a principle it says must guide how artificial intelligence is developed and deployed. Elevate carries that outlook forward, linking digital learning to values about the role of work in people’s lives.
Another way Microsoft is supporting AI training is by giving $12.5 million in funding to the National Academy for AI Instruction, which the American Federation of Teachers is launching this fall.