AI Insights
AI workers are boosting rents across the US

The newest wave of tech workers isn’t just filling office towers — it’s bidding up apartments in cities already notorious for high housing costs.
Across the US and Canada, the number of workers with artificial intelligence skills has surged by more than 50% in the past year, topping 517,000, according to CBRE.
Much of that growth is clustered in the San Francisco Bay Area, New York City, Seattle, Toronto and the District of Columbia — areas where rents were straining households even before the AI boom.
The result: a fresh wave of demand that has helped push Manhattan rents up more than 14% between 2021 and 2024, Washington more than 12% in that same span, Seattle more than 7% and San Francisco nearly 6%.
New York gained about 20,000 AI-skilled workers over the past year alone, while other hubs including Atlanta, Chicago, Dallas-Fort Worth, Toronto and Washington each logged increases of 75% or more.
High salaries in AI allow workers to shoulder those rents — CBRE found Manhattan’s AI professionals spend about 29% of their income on housing, while in San Francisco and DC the share drops closer to 19%.
That affordability for one group is adding to the squeeze on everyone else.
Colin Yasukochi, executive director of CBRE’s Tech Insights Center, said San Francisco illustrates the trend.
“With this AI revolution, it’s been a fundamental game changer for the city of San Francisco, because that’s really ground zero for the AI revolution and where most of these major high-profile firms like OpenAI are located,” he told CNBC.
Unlike other parts of the tech sector that turned to remote work, AI firms are filling office towers. In San Francisco, 1 out of every 4 square feet leased over the past two and a half years went to an AI tenant.
“AI is predominantly in-office work, and they’re sort of back to the earlier days of tech innovation, where they’re in the office five, six days a week and for long hours,” Yasukochi said.
From Language Sovereignty to Ecological Stewardship – Intercontinental Cry

Last Updated on September 10, 2025
Artificial intelligence is often framed as a frontier that belongs to Silicon Valley, Beijing, or the halls of elite universities. Yet across the globe, Indigenous peoples are shaping AI in ways that reflect their own histories, values, and aspirations. These efforts are not simply about catching up with the latest technological wave—they are about protecting languages, reclaiming data sovereignty, and aligning computation with responsibilities to land and community.
From India’s tribal regions to the Māori homelands of Aotearoa New Zealand, Indigenous-led AI initiatives are emerging as powerful acts of cultural resilience and political assertion. They remind us that intelligence—whether artificial or human—must be grounded in relationship, reciprocity, and respect.
Giving Tribal Languages a Digital Voice
Just this week, researchers at IIIT Hyderabad, alongside IIT Delhi, BITS Pilani, and IIIT Naya Raipur, launched Adi Vaani, a suite of AI-powered tools designed for tribal languages such as Santali, Mundari, and Bhili.
At the heart of the project is a simple premise that technology should serve the people who need it most. Adi Vaani offers text-to-speech, translation, and optical character recognition (OCR) systems that allow speakers of marginalized languages to access education, healthcare, and public services in their mother tongues.
One of the project’s most promising outputs is a Gondi translator app that enables real-time communication between Gondi, Hindi, and English. For the nearly three million Gondi speakers who have long been excluded from India’s digital ecosystem, this tool is nothing less than transformative.
Speaking about the value of the app, research scholar Gopesh Kumar Bharti commented, “Like many tribal languages, Gondi faces several challenges due to its lack of representation in the official schedule, which hampers its preservation and development. The aim is to preserve and restore the Gondi language so that the next generation understands its cultural and historical significance.”
Latin America’s Open-Source Revolution
In Latin America, a similar wave of innovation is underway. Earlier this year, researchers at the Chilean National Center for Artificial Intelligence (CENIA) unveiled Latam-GPT, a free and open-source large language model trained not only on Spanish and Portuguese, but also incorporating Indigenous languages such as Mapuche, Rapanui, Guaraní, Nahuatl, and Quechua.
Unlike commercial AI systems that extract and commodify, Latam-GPT was designed with sovereignty and accessibility in mind.
To be successful, Latam-GPT needs to ensure the participation of “Indigenous peoples, migrant communities, and other historically marginalized groups in the model’s validation,” said Varinka Farren, chief executive officer of Hub APTA.
But as with most good things, it’s going to take time. Rodrigo Durán, CENIA’s general manager, told Rest of World that it will likely take at least a decade.
Māori Data Sovereignty: “Our Language, Our Algorithms”
Half a world away, the Māori broadcasting collective Te Hiku Media has become a global leader in Indigenous AI. In 2021, the organization released an automatic speech recognition (ASR) model for Te Reo Māori with an accuracy rate of 92%—outperforming international tech giants.
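The article cites 92% accuracy without naming the metric; speech recognition accuracy is conventionally reported as 1 minus the word error rate (WER), the word-level edit distance between a reference transcript and the model's output. A minimal sketch of that standard computation, purely illustrative and not Te Hiku Media's implementation:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = minimum edits (substitute/insert/delete) to turn ref[:i] into hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(substitution, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# One substituted word out of four -> WER 0.25, i.e. 75% accuracy.
print(wer("ka pai te mahi", "ka pai to mahi"))
```

On this convention, a 92% accuracy rate corresponds to a word error rate of about 8%.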
Their achievement was not the result of corporate investment or vast computing power, but of decades of community-led language revitalization. By combining archival recordings with new contributions from fluent speakers, Te Hiku demonstrated that Indigenous peoples can own not only their languages but also the algorithms that process them.
As co-director Peter-Lucas Jones explained, “In the digital world, data is like land. If we do not have control, governance, and ongoing guardianship of our data as indigenous people, we will be landless in the digital world, too.”
Indigenous Leadership at UNESCO
On the global policy front, leadership is also shifting. Earlier this year, UNESCO appointed Dr. Sonjharia Minz, an Oraon computer scientist from India’s Jharkhand state, as co-chair of the Indigenous Knowledge Research Governance and Rematriation program.
Her mandate is ambitious: to guide the development of AI-based systems that can securely store, share, and repatriate Indigenous cultural heritage. For communities who have seen their songs, rituals, and even sacred objects stolen and digitized without consent, this initiative signals a long-overdue turn toward justice.
As Dr. Minz told The Times of India, “We are on the brink of losing indigenous languages around the world. Indigenous languages are more than mere communication tools. They are repository of culture, knowledge and knowledge system. They are awaiting urgent attention for revitalization.”
AI and Environmental Co-Stewardship
Artificial intelligence is also being harnessed to care for the land and waters that sustain Indigenous peoples. In the Arctic, communities are blending traditional ecological knowledge with AI-driven satellite monitoring to guide adaptive mariculture practices—helping to ensure that changing seas still provide food for generations to come.
In the Pacific Northwest, Indigenous nations are deploying AI-powered sonar and video systems to monitor salmon runs, an effort vital not only to ecosystems but to cultural survival. Unlike conventional “black box” AI, these systems are validated by Indigenous experts, ensuring that machine predictions remain accountable to local governance and ecological ethics.
Such projects remind us that AI need not be extractive. It can be used to strengthen stewardship practices that have protected biodiversity for millennia.
The Hidden Toll of AI’s Appetite
As Indigenous communities lead the charge toward ethical and ecologically grounded AI, we must also confront the environmental realities underpinning the technology—especially the vast energy and water demands of large language models.
In Chile, the rapid proliferation of data centers—driven partly by AI demands—has sparked fierce opposition. Activists argue that facilities run by tech giants like Amazon, Google, and Microsoft exacerbate water scarcity in drought-stricken regions. As one local put it, “It’s turned into extractivism … We end up being everybody’s backyard.”
The energy hunger of LLMs compounds this strain further. According to researchers at MIT, training clusters for generative AI consume seven to eight times more energy than typical computing workloads, accelerating energy demands just as renewable capacity lags behind.
Globally, data centers consumed a staggering 460 terawatt-hours in 2022, comparable to the annual electricity use of an entire country such as France, and are projected to reach 1,050 TWh by 2026, which would place data centers among the top five global electricity users.
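As a rough back-of-the-envelope check (my arithmetic, not a figure from the source), growing from 460 TWh in 2022 to a projected 1,050 TWh by 2026 implies a compound annual growth rate of roughly 23%:

```python
# Figures cited above: ~460 TWh consumed in 2022, projected ~1,050 TWh by 2026.
start_twh, end_twh = 460.0, 1050.0
years = 2026 - 2022

# Compound annual growth rate implied by the projection.
cagr = (end_twh / start_twh) ** (1 / years) - 1
print(f"Implied annual growth: {cagr:.1%}")  # roughly 23% per year
```

That pace, sustained over just four years, is what would move data centers into the top tier of global electricity users.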
LLMs aren’t just energy-intensive to train; their environmental footprint extends across their whole lifecycle. New modeling shows that inference—the use of pre-trained models—now contributes more than half of total emissions. Meanwhile, Google’s own reporting suggests that AI operations have increased its greenhouse gas emissions by roughly 48% over five years.
Communities hosting data centers often face additional challenges beyond the facilities’ water and energy demands.
This environmental reckoning matters deeply to Indigenous-led AI initiatives—because AI should not replicate colonial patterns of extraction and dispossession. Instead, it must align with ecological reciprocity, sustainability, and respect for all forms of life.
Rethinking Intelligence
Together, these Indigenous-led initiatives compel us to rethink both what counts as intelligence and where AI should be heading. In the mainstream tech industry, intelligence is measured by processing power, speed, and predictive accuracy. But for Indigenous nations, intelligence is relational: it lives in languages that carry ancestral memory and in stories that guide communities toward balance and responsibility.
When these values shape artificial intelligence, the results look radically different from today’s extractive systems. AI becomes a tool for reciprocity instead of extraction. In other words, it becomes less about dominating the future and more about sustaining the conditions for life itself.
This vision matters because the current trajectory of AI cannot be sustained: an arms race of ever-larger models, resource-hungry data centers, and escalating ecological costs.
The challenge is no longer technical but political and ethical. Will governments, institutions, and corporations make space for Indigenous leadership to shape AI’s future? Or will they repeat the same old colonial logics of extraction and exclusion? Time will tell.
How federal tech leaders are rewriting the rules for AI and cyber hiring

Terry Gerton Well, there’s a lot of things happening in your world. Let’s talk about, first, the new memo that came out at the end of August that talks about FedRAMP 20x. Put that in plain language for folks and then tell us what it means for PSC and its stakeholders.
Jim Carroll Yeah, I think really what it means, it’s a reflection of what’s happening in the industry overall, the GovCon world, as well as probably everything that we do, you know, even as individual citizens, which is more and more reliance on AI. What we’re seeing is the artificial intelligence world has really picked up steam, not only I saw mention of it on the news today and they were talking about — every Google search now incorporates AI. So what we’re seeing with this GSA and FedRAMP initiative is really trying to fast track the authorization of the cloud-based services side of AI. Because it really is becoming more and more part of every basic use, not only in our private lives, like they talk about, but also in the federal contracting space. And what we are seeing are more and more federal government officials using it for routine things. And so I think what this is is really a reflection that they are going to move this as quickly as possible, in recognition that the world is changing right in front of us.
Terry Gerton So is this more for government contractors who are offering AI products, or for government contractors who are using AI in their internal products?
Jim Carroll It’s really for AI-based cloud services who are able to use AI tools that not only allow them, but really allow federal workers to be able to access AI in a much faster space. And, you know, there’s certainly some challenges with AI. I think, you’re hearing some of the futurists talk about, do we really understand AI enough to embrace it to the extent that we have? I don’t think anyone really knows the answer to that, but we know it’s out there and there is this recognition that there will be an ongoing routine federal use of AI. So let’s at least have the major players that are doing it the best authorized to be able to provide the service. And so much is happening right now in the AI space. And I think everyone knows the acronym. There’s a lot of acronyms we’re going to talk about today that are happening, but AI is an acronym that really is. And we did a poll and looked at our 400 member companies at PS Council. And I think it was 45% or 50% mentioned the use of AI on their homepage. And so I think there’s just recognition that GSA wants to be able to provide these solutions to the federal government workers.
Terry Gerton Do you see any risks or trade-offs in accelerating this approval versus adopting things that might not quite be ready for prime time?
Jim Carroll You know, I think there’s always that concern, as I mentioned, about some of the futurists that are looking at this and making sure that it’s safe. We’re hearing about it from the White House and we’re putting together — you’ve seen some public panels already with the White House, we’ve been asked to bring our PSC members for a policy discussion and some of the legal issues around AI to the White House. And so we’ll be bringing some members to the White House here in the next couple of weeks. And so I think there is concern that the people who use AI are also double-checking to make sure it’s accurate, right? That’s one of the concerns I think that people want to make sure is that there should not be an over-reliance or an exclusive reliance on AI tools. And we need to make sure that the solutions and the answers that our AI tools are giving us are actually accurate. One of the concerns, which I think goes into something we need to discuss that’s happening this week, is cybersecurity. Is AI secure? Is the use of it going to be able to safeguard some of the really important national security work that we’re doing? And how do we do that?
Terry Gerton I’m speaking with Jim Carroll. He’s the CEO of the Professional Services Council. Well, let’s stick in that tech vein and cybersecurity. There’s a new bill in Congress that wants to shift cybersecurity hiring to more of a skills-based qualification than professional degrees. How does PSC think about that proposal?
Jim Carroll I think again, it’s a reflection of what’s actually out there — that these new tools, we’ll say in cybersecurity, [are] really based on an individual’s ability to maneuver in this space, as opposed to just a degree. And being able to really focus on the ability of everyone, I think equals the playing field, right? It means more and more people are qualified to do this. When you take away a — I hate to say a barrier such as a degree, but it’s a reflection that there are other skill sets that people have learned to be able to actually do their work. And I can say this, having gotten a law degree many years ago, that you really sort of learn how to practice law by doing it and by having a mentor and doing it over the years, as opposed to just having a law degree. I don’t think I would be a good person to just go out and represent anyone on anything on the day after graduating from law school. You really need to learn how to apply it and I think that’s what this bipartisan bill is doing. And so you know, we’re encouraging more and more people being able to get into this, because there’s a greater and greater need, Terry. And so we’re okay with this.
Terry Gerton So what might it mean then for the GovCon workforce?
Jim Carroll I think there’s an opportunity here for the GovCon workspace and employees to be able to expand and really get some super-talented people to be able to work at these federal agencies. Which is a great plus, I think, for actually achieving the desired results that our GovCon members at PS Council are able to deliver, is we’re going to get the best and brightest and bring those people in to give real solutions.
Terry Gerton So the bill calls for more transparency from OPM on education-related hiring policies. Does PSC have an idea of what kind of oversight they’d like to see about that practice?
Jim Carroll Yeah, we’re looking into it now. We’re talking to our members and seeing what kind of oversight they have. You know, representing 400 organizations, companies that do business with the federal government and so many in this space of cybersecurity, being the leading trade organization for these 400 companies, it means that we’re able to go to our members and get from them, really, the safeguards that they think are important. Get the requirements that they think are important and get it in there. And so this is going to be a deliberative process. We have a little bit of time to work on this. But we’re excited about the potential. We really think this will be able to deliver great solutions, Terry.
Terry Gerton Well, speaking of cyber, there’s a new memo out on the cybersecurity maturity model. What’s your hot take there?
Jim Carroll Terry, how long has that been pending? I think five years. I think it’s five years is what I heard this morning. And so, you know, this will provide three levels of certification and clarity for CMMC [(Cybersecurity Maturity Model Certification)]. We’re looking at it now. This is obviously a critical issue and we are starting a working group. And we’re going to be able to provide resources to our members for this, to make sure that the certification — some of which are going to be very expensive for our members, depending on what type of certification that they want. So we’re gearing up. We have been ready for this. Like I said, we started planning for this five years ago, right? So did you, Terry. And so we have five years of thought going into it and we will be announcing and developing a website for our members to be able to have information on this, learn from this. We’ll be conducting seminars for our members. So now that CMMC — the other acronym I think that I mentioned earlier — is finally here, it’ll be implemented, I guess, in 60 days. And so we’ll have some time to use the skills that we have been developing over the last five years to give to our members.
Terry Gerton Any surprises for you in the final version? I know that PSC had quite a bit of input in the development.
Jim Carroll Not right now. We’re sort of looking at it; obviously, it just dropped in the last 24 hours. And so nothing right now that has caught us off guard. And so we’ve been ready for this and we’re ready to educate our members on this.
Copyright
© 2025 Federal News Network. All rights reserved.
Techno-Utopians Like Elon Musk Are Treading Old Ground

In “The Singularity is Nearer: When We Merge with AI,” the futurist Ray Kurzweil imagines the point in 2045 when rapid technological progress crosses a threshold as humans merge with machines, an event he calls “the singularity.”
Although Kurzweil’s predictions may sound more like science fiction than fact-based forecasting, his brand of thinking goes well beyond the usual sci-fi crowd. It has provided inspiration for American technology industry elites for some time, chief among them Elon Musk.
With Neuralink, his company that is developing computer interfaces implanted in people’s brains, Musk says he intends to “unlock new dimensions of human potential.” This fusion of human and machine echoes Kurzweil’s singularity. Musk also cites apocalyptic scenarios and points to transformative technologies that can save humanity.
Ideas like those of Kurzweil and Musk, among others, can seem as if they are charting paths into a brave new world. But as a humanities scholar who studies utopianism and dystopianism, I’ve encountered this type of thinking in the futurist and techno-utopian art and writings of the early 20th century.
Techno-utopianism’s origins
Techno-utopianism emerged in its modern form in the 1800s, when the Industrial Revolution ushered in a set of popular ideas that combined technological progress with social reform or transformation.
Kurzweil’s singularity parallels ideas from Italian and Russian futurists amid the electrical and mechanical revolutions that took place at the turn of the 20th century. Enthralled by inventions like the telephone, automobile, airplane and rocket, those futurists found inspiration in the concept of a “New Human,” a being who they imagined would be transformed by speed, power and energy.
A century ahead of Musk, Italian futurists imagined the destruction of one world, so that it might be replaced by a new one, reflecting a common Western techno-utopian belief in a coming apocalypse that would be followed by the rebirth of a changed society.
One especially influential figure of the time was Filippo Marinetti, whose 1909 “Founding and Manifesto of Futurism” offered a nationalistic vision of a modern, urban Italy. It glorified the tumultuous transformation caused by the Industrial Revolution. The document describes workers becoming one with their fiery machines. It encourages “aggressive action” coupled with an “eternal” speed designed to break things and bring about a new world order.
The overtly patriarchal text glorifies war as “hygiene” and promotes “scorn for woman.” The manifesto also calls for the destruction of museums, libraries and universities and supports the power of the rioting crowd.
Marinetti’s vision later drove him to support and even influence the early fascism of Italian dictator Benito Mussolini. However, the relationship between the futurism movement and Mussolini’s increasingly anti-modern regime was an uneasy one, as Italian studies scholar Katia Pizzi wrote in “Italian Futurism and the Machine.”
Further east, the Russian revolutionaries of 1917 adopted a utopian faith in material progress and science. They combined a “belief in the ease with which culture could be destroyed” with the benefits of “spreading scientific ideas to the masses of Russia,” historian Richard Stites wrote in “Revolutionary Dreams.”
For the Russian left, an “immediate and complete remaking” of the soul was taking place. This new proletarian culture was personified in the ideal of the New Soviet Man. This “master of nature by means of machines and tools” received a polytechnical education instead of the traditional middle-class pursuit of the liberal arts, humanities scholar George Young wrote in “The Russian Cosmists.” The first Soviet People’s Commissar of Education, Anatoly Lunacharsky, supported these movements.
Although their political ideologies took different forms, these 20th-century futurists all focused their efforts on technological advancement as an ultimate objective. Techno-utopians were convinced that the dirt and pollution of real-world factories would automatically lead to a future of “perfect cleanliness, efficiency, quiet, and harmony,” historian Howard Segal wrote in “Technology and Utopia.”
Myths of efficiency and everyday tech
Despite the remarkable technological advances of that time, and since, the vision of those techno-utopians largely has not come to pass. In the 21st century, it can seem as if we live in a world of near-perfect efficiency and plenitude thanks to the rapid development of technology and the proliferation of global supply chains. But the toll that these systems take on the natural environment – and on the people whose labor ensures their success – presents a dramatically different picture.
Today, some of the people who espouse techno-utopian and apocalyptic visions have amassed the power to influence, if not determine, the future. At the start of 2025, through the Department of Government Efficiency, or DOGE, Musk introduced a fast-paced, tech-driven approach to government that has led to major cutbacks in federal agencies. He’s also influenced the administration’s huge investments in artificial intelligence, a class of technological tools that public officials are only beginning to understand.
The futurists of the 20th century influenced the political sphere, but their movements were ultimately artistic and literary. By contrast, contemporary techno-futurists like Musk lead powerful multinational corporations that influence economies and cultures across the globe.
Does this make Musk’s dreams of human transformation and societal apocalypse more likely to become reality? If not, these elements of Musk’s project are likely to remain more theoretical, just as the dreams of last century’s techno-utopians did.