
“Empire of AI”: Karen Hao on How AI Is Threatening Democracy & Creating a New Colonial World

This is a rush transcript. Copy may not be in its final form.

AMY GOODMAN: This is Democracy Now!, democracynow.org, The War and Peace Report. I’m Amy Goodman.

We turn now to the Empire of AI. That’s the name of a new book by the journalist Karen Hao, who’s closely reported on the rise of the artificial intelligence industry with a focus on Sam Altman’s OpenAI. That’s the company behind ChatGPT. Karen Hao compares the actions of the AI industry to those of colonial powers in the past. She writes, quote, “The empires of AI are not engaged in the same overt violence and brutality that marked this history. But they, too, seize and extract precious resources to feed their vision of artificial intelligence: the work of artists and writers; the data of countless individuals posting about their experiences and observations online; the land, energy, and water required to house and run massive data centers and supercomputers.”

Karen Hao is a former reporter at The Wall Street Journal and MIT Technology Review, where she became the first journalist to profile OpenAI. Democracy Now!’s Juan González and I spoke to her in May. I began by asking her to explain what artificial intelligence is.

KAREN HAO: So, AI is a collection of many different technologies, but most people were introduced to it through ChatGPT. What I argue in the book — and what the title, Empire of AI, refers to — is a critique of the specific trajectory of AI development that led us to ChatGPT and has continued since: Silicon Valley’s scale-at-all-costs approach to AI development.

Modern AI models are trained on data, and they need computers to do that training. But what Silicon Valley did, and what OpenAI did in the last few years, is blow up both the amount of data and the size of the computers needed for this training. So, we are talking about the full English-language internet being fed into these models — books, scientific articles, all of the intellectual property that has been created — and massive supercomputers that run tens of thousands, even hundreds of thousands, of computer chips, cover the area of dozens, maybe hundreds, of football fields, and now consume practically the entire energy demand of a city. This is an extraordinary type of AI development that is causing a lot of social, labor and environmental harms, and that is ultimately why I evoke this analogy to empire.

JUAN GONZÁLEZ: And, Karen, could you talk some more about not only the energy requirements, but the water requirements of these huge data centers that are, in essence, the backbone of this widening industry?

KAREN HAO: Absolutely. I’ll give you stats on both the energy and the water. On energy demand, McKinsey recently came out with a report estimating that in the next five years, at the current pace of AI computational infrastructure expansion, we would need to add to the global grid two to six times the energy consumed annually by the state of California, and that will mostly be serviced by fossil fuels. We’re already seeing reporting of coal plants having their lives extended — they were supposed to retire, but now they cannot — to support this data center development. We are also seeing methane gas turbines, unlicensed ones, being put up to service these data centers.

From a freshwater perspective, these data centers need to be cooled with freshwater. They cannot be cooled with any other type of water, because it can corrode the equipment and lead to bacterial growth. And most of the time, that cooling actually taps directly into a public drinking water supply, because that is the infrastructure that has been laid to deliver clean freshwater to businesses and homes. And Bloomberg recently published an analysis of the expansion of these data centers around the world: two-thirds of them are being placed in water-scarce areas — in communities that do not have reliable access to freshwater. So, it’s not just the total amount of freshwater that we need to be concerned about, but the distribution of this infrastructure around the world.

JUAN GONZÁLEZ: And most people are familiar with ChatGPT, the consumer aspect of AI, but what about the military aspect of AI, where, in essence, we’re finding Silicon Valley companies becoming the next generation of defense contractors?

KAREN HAO: One of the reasons why OpenAI and many other companies are turning to the defense industry is that they have spent an extraordinary amount of money developing these technologies. They’re spending hundreds of billions to train these models, and they need to recoup those costs. There are only so many industries, and so many places, that can pay at that scale. So, that’s why we’re seeing a cozying up to the defense industry. We’re also seeing Silicon Valley use the U.S. government in its empire-building ambitions — and you could argue that the U.S. government, vice versa, is trying to use Silicon Valley in its own.

But certainly, these technologies are not designed to be used in a sensitive military context. And so, the aggressive push by these companies to win those defense contracts and integrate their technologies more and more into the infrastructure of the military is really alarming.

AMY GOODMAN: I wanted to go to the countries you went to, or the stories you covered, because, I mean, this is amazing, the depth of your reporting, from Kenya to Uruguay to Chile. You were talking about the use of water. And I also want to ask you about nuclear power.

KAREN HAO: Yeah.

AMY GOODMAN: But in Chile, what is happening there around these data centers and the water they would use and the resistance to that?

KAREN HAO: Yeah. So, Chile has an interesting history in that it was under a dictatorship for a very long time, and during that time most public resources were privatized, including water. But because of an anomaly, there’s one community in the greater Santiago metropolitan region that still has access to a public freshwater resource, which services both that community and the rest of the country in emergency situations. That is the exact community where Google chose to try to put a data center. And Google proposed that its data center use a thousand times more freshwater than that community uses annually.

AMY GOODMAN: And it would be free.

KAREN HAO: And it — you know, I have no idea. That is a great question. But what the community told me was that Google wasn’t even paying taxes locally, because, based on their reading of the documentation, the taxes Google was paying went to the municipality where it had registered its administrative offices, not to where it was putting down the data center. So the community was not seeing any direct benefit from this data center, and it was seeing no checks placed on the freshwater the data center would have been allowed to extract.

And so, these activists said, “Wait a minute. Absolutely not. We’re not going to allow this data center to come in, unless they give us a legitimate reason for why it benefits us.” And so, they started doing boots-on-the-ground activism, pushing back, knocking on every single one of their neighbors’ doors, handing out flyers to the community, telling them, “This company is taking our freshwater resources without giving us anything in return.”

And they escalated so dramatically that the issue reached Google Chile, then Google Mountain View — which, by the way, sent representatives to Chile who spoke only English — and eventually the Chilean government. The Chilean government now convenes roundtables that bring community residents, company representatives and government representatives together to discuss how to make data center development more beneficial to the community.

The activists say the fight is not over. Just because they’ve been invited to the table doesn’t mean that everything is suddenly better. They need to stay vigilant. They need to continue scrutinizing these projects. But thus far, they’ve been able to block this project for four to five years and have gained that seat at the table.

JUAN GONZÁLEZ: And how is it that these Western companies, in essence, are exploiting labor in the Global South? You go into something called data annotation firms. What are those?

KAREN HAO: Yeah. So, modern-day AI systems are trained on massive amounts of data scraped from the internet, and you can’t actually pump that data directly into your AI model, because there is a lot within that data that shouldn’t be there. It’s heavily polluted. It needs to be cleaned. It needs to be annotated. This is where data annotation firms come in. These are middleman firms that hire contract labor for AI companies to do that kind of data preparation.

And OpenAI, when it was starting to think about commercializing its products — about putting text-generation machines that could spew any kind of text into the hands of millions of users — realized it needed some kind of content moderation. It needed to develop a filter that would wrap around these models and prevent them from spewing racist, hateful and harmful speech at users. That would not make a very good, commercially viable product.

And so, they contracted these middleman firms in Kenya, where Kenyan workers had to read through reams of the worst text on the internet, as well as AI-generated text — OpenAI was prompting its own AI models to imagine the worst text on the internet — and categorize it all into detailed taxonomies: Is this sexual content? Is this violent content? How graphic is that violent content? All in order to teach the filter the different categories of content it had to block.

And this is an incredibly uncommon form of labor; there are lots of other types of contract labor that these companies use. But these workers are paid a few bucks an hour, if at all. And just as in the era of social media, these content moderators are left deeply psychologically traumatized. Ultimately, there is no real philosophy behind why these workers are paid a couple bucks an hour and have their lives destroyed, while AI researchers who also contribute to these models are paid million-dollar compensation packages simply because they sit in Silicon Valley, in OpenAI’s offices. That is the logic of empire, and that harkens back to my title, Empire of AI.

AMY GOODMAN: So, let’s go back to your title, Empire of AI, the subtitle, Dreams and Nightmares in Sam Altman’s OpenAI. So, tell us the story of Sam Altman and what OpenAI is all about, right through to the deal he just made in the Gulf, when President Trump, Sam Altman and Elon Musk were there.

KAREN HAO: Altman is very much a product of Silicon Valley. His career began as the founder of a startup, then as the president of Y Combinator, one of the most famous startup accelerators in Silicon Valley, and then as the CEO of OpenAI. And it’s no coincidence that OpenAI ended up introducing the world to the scale-at-all-costs approach to AI development, because that is the way Silicon Valley operated the entire time Altman was coming up in it.

And so, he is a very strategic person. He is incredibly good at telling stories about the future and painting sweeping visions that investors and employees want to be a part of. Early on at YC, he identified AI as one of the trends that could take off. He was building a portfolio of investments and initiatives to place himself at the center of various trends, depending on which one took off: he was investing in quantum computing, in nuclear fusion, in self-driving cars, and he was developing a fundamental AI research lab. Ultimately, the AI research lab was the one that started accelerating really quickly, so he made himself the CEO of that company.

And originally, he started it as a nonprofit, to position it as a counter to the for-profit-driven incentives of Silicon Valley. But within one and a half years, OpenAI’s executives decided that if they wanted to be the leader in this space, they “had to” go for this scale-at-all-costs approach — and “had to” should be in quotes. They thought they had to do this. There are actually many other ways to develop AI, and to make progress in AI, that do not take this approach.

But once they decided that, they realized the bottleneck was capital. It just so happens that Sam Altman is a once-in-a-generation fundraising talent. He created a new structure, nesting a for-profit arm within the nonprofit, to serve as the fundraising vehicle for the tens of billions, and ultimately hundreds of billions, they needed to pursue the approach they had decided on. And that is how we get to present-day OpenAI, one of the most capitalistic companies in the history of Silicon Valley, continuing to raise hundreds of billions — and, Altman has joked, even trillions — to produce a technology that has had a middling economic impact thus far.

AMY GOODMAN: We’ll return to our conversation in a minute with Karen Hao, author of the new book, Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI. Stay with us.




IT Summit focuses on balancing AI challenges and opportunities — Harvard Gazette



The critical role of technology in advancing Harvard’s mission and the potential of generative AI to reshape the academic and operational landscape were the key topics discussed during the University’s 12th annual IT Summit. Hosted by the CIO Council, the June 11 event attracted more than 1,000 Harvard IT professionals.

“Technology underpins every aspect of Harvard,” said Klara Jelinkova, vice president and University chief information officer, who opened the event by praising IT staff for their impact across the University.

That sentiment was echoed by keynote speaker Michael D. Smith, the John H. Finley Jr. Professor of Engineering and Applied Sciences and Harvard University Distinguished Service Professor, who described “people, physical spaces, and digital technologies” as three of the core pillars supporting Harvard’s programs. 

In his address, “You, Me, and ChatGPT: Lessons and Predictions,” Smith explored the balance between the challenges and the opportunities of using generative AI tools. He pointed to an “explainability problem” in generative AI tools, which can produce responses that sound convincing but lack transparent reasoning: “Is this answer correct, or does it just look good?” Smith also highlighted the challenges of user frustration due to bad prompts, “hallucinations,” and the risk of overreliance on AI for critical thinking, given its “eagerness” to answer questions.

In showcasing innovative coursework from students, Smith highlighted the transformative potential of “tutorbots,” or AI tools trained on course content that can offer students instant, around-the-clock assistance. AI is here to stay, Smith noted, so educators must prepare students for this future by ensuring they become sophisticated, effective users of the technology. 

Asked by Jelinkova how IT staff can help students and faculty, Smith urged the audience to identify early adopters of new technologies to “understand better what it is they are trying to do” and support them through the “pain” of learning a new tool. Understanding these uses and fostering collaboration can accelerate adoption and “eventually propagate to the rest of the institution.” 

The spirit of innovation and IT’s central role at Harvard continued throughout the day’s programming, which was organized into four pillars:  

  • Teaching, Learning, and Research Technology included sessions where instructors shared how they are currently experimenting with generative AI, from the Division of Continuing Education’s “Bot Club,” where instructors collaborate on AI-enhanced pedagogy, to the deployment of custom GPTs and chatbots at Harvard Business School.
  • Innovation and the Future of Services included sessions on AI video experimentation, robotic process automation, ethical implementation of AI, and a showcase of the University’s latest AI Sandbox features.
  • Infrastructure, Applications, and Operations featured a deep dive on the extraordinary effort to bring the new David Rubenstein Treehouse conference center to life, including testing new systems in a physical “sandbox” environment and deploying thousands of feet of network cabling. 
  • And the Skills, Competencies, and Strategies breakout sessions reflected on the evolving skillsets required by modern IT — from automation design to vendor management — and explored strategies for sustaining high-functioning, collaborative teams, including workforce agility and continuous learning. 

Amid the excitement around innovation, the summit also explored the environmental impact of emerging technologies. In a session focused on Harvard’s leadership in IT sustainability — as part of its broader Sustainability Action Plan — presenters explored how even small individual actions, like crafting more effective prompts, can meaningfully reduce the processing demands of AI systems. As one panelist noted, “Harvard has embraced AI, and with that comes the responsibility to understand and thoughtfully assess its impact.” 




Tennis players criticize AI technology used by Wimbledon



Some tennis players are not happy with Wimbledon’s new AI line judges, as reported by The Telegraph. 

This is the first year the prestigious tennis tournament, which is still ongoing, has replaced human line judges — who determine whether a ball is in or out — with an electronic line calling system (ELC).

Numerous players criticized the AI technology, mostly for making incorrect calls that cost them points. Notably, British tennis star Emma Raducanu called out the technology for missing a ball that her opponent hit out; the point instead had to be played as if the ball were in. On a television replay, the ball indeed looked out, The Telegraph reported.

Jack Draper, the British No. 1, also said he felt some line calls were wrong, saying he did not think the AI technology was “100 percent accurate.”

Player Ben Shelton had to speed up his match after being told that the new AI line system was about to stop working because of the dimming sunlight. Elsewhere, players said they couldn’t hear the new automated speaker system, with one deaf player saying that, without the human hand signals from the line judges, she was unable to tell whether she had won a point.

The technology also hit a blip at a key point during a match this weekend between British player Sonay Kartal and the Russian Anastasia Pavlyuchenkova: a ball went out, but the technology failed to make the call. The umpire had to step in to stop the rally and tell the players to replay the point, because the ELC had failed to track it. Wimbledon later apologized, attributing the failure to “human error” — the technology had been accidentally shut off during the match — and said it had adjusted the system so that, ideally, the mistake could not be repeated.

Debbie Jevans, chair of the All England Club, the organization that hosts Wimbledon, hit back at Raducanu and Draper, saying, “When we did have linesmen, we were constantly asked why we didn’t have electronic line calling because it’s more accurate than the rest of the tour.” 

We’ve reached out to Wimbledon for comment.

This is not the first time the AI technology has come under fire as tennis tournaments continue to either partially or fully adopt automated systems. Alexander Zverev, a German player, called out the same automated line judging technology back in April, posting a picture to Instagram showing where a ball called in was very much out. 

The critiques reveal the friction in completely replacing humans with AI, and they make the case for why a human-AI balance may be necessary as more organizations adopt such technology. Just recently, the company Klarna said it was looking to hire human workers after previously pushing to automate jobs.




AI Technology-Focused Training Campaigns : Raspberry Pi Foundation



The Raspberry Pi Foundation has issued a compelling report advocating for sustained emphasis on coding education despite the rapid advancement of AI technologies. The educational charity challenges emerging arguments that AI’s growing capability to generate code diminishes the need for human programming skills, warning against potential deprioritization of computer science curricula in schools.

The Raspberry Pi Foundation’s analysis presents coding not merely as a vocational skill but as a fundamental literacy that develops critical thinking, problem-solving abilities, and technological agency — competencies it argues will be increasingly vital as AI systems permeate all aspects of society. The foundation emphasizes that while AI may automate certain technical tasks, human oversight remains essential for ensuring the safety, ethics, and contextual relevance of computer-generated solutions.

For educators, parents, and policymakers, this report provides timely insights into preparing younger generations for an AI-integrated future.



