
Tools & Platforms

AI investment shouldn’t ignore the burdens that tech places on people

When President Donald Trump stood before a crowd at Carnegie Mellon University last month and announced $90 billion in new artificial intelligence investments, it was hardly the first time the federal government has devoted significant resources to developing the technology. Earlier this year, Trump revealed a $500 billion “Stargate” project backed by OpenAI, SoftBank, and Oracle to build data centers across the United States.

The message is clear: America is betting big on its AI future. But buried in the fanfare was the cost of this extraordinary investment to ordinary Americans. As energy consumption from AI surges, consumers will bear the burden through higher electricity bills, strain on an already overwhelmed grid, and emerging evidence of health impacts from nearby data centers.

In Pennsylvania, where Trump is launching these massive investments, the impact is already visible. Duquesne Light’s standard rate for residential customers jumped 15% in June, while West Penn Power’s rose 9%, as the regional grid operator scrambles to secure enough power for the flood of data centers coming online.

Nationwide, electricity rates are already increasing by as much as 5% a year, enough to strain household budgets, especially for those on the brink of nonpayment. Nearly 80,000 Pennsylvanians had their electricity shut off for nonpayment in April and May alone — before the recent rate increases took effect. (For context, in Virginia, where nearly half the country’s data centers are housed, prices are expected to increase by as much as 70% in the next decade without massive changes.)

How is this possible?

Every time you ask ChatGPT a question, upload photos to the cloud, or join a Zoom call, that request travels to a data center where banks of computers respond to these requests. Data centers are massive warehouses filled with thousands of servers that store, process, and deliver everything from your Netflix stream to your Google searches. They need electricity not just to run the computers, but to keep them cool with industrial-grade air-conditioning systems running 24/7.

A single large data center can consume as much power as a small city, and the new AI-focused facilities are even more demanding. Once trained, these systems are fine-tuned, which consumes still more energy. And once fine-tuned, they are accessed by millions of people every hour. ChatGPT alone processes about a billion queries a day, using at least as much energy as 33,000 U.S. homes.
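Estimates like that one follow from simple arithmetic: queries per day times energy per query, compared against typical household consumption. A minimal sketch, assuming roughly 1 watt-hour per query and about 29 kWh per day for an average U.S. household (both assumed round figures for illustration, not numbers from the article's sources):

```python
# Back-of-envelope estimate of ChatGPT's daily energy use.
# Per-query energy is an assumption; published estimates vary widely.

QUERIES_PER_DAY = 1_000_000_000  # ~1 billion queries/day (from the article)
WH_PER_QUERY = 1.0               # assumed watt-hours per query
HOME_KWH_PER_DAY = 29.0          # rough average U.S. household daily usage

total_kwh_per_day = QUERIES_PER_DAY * WH_PER_QUERY / 1000  # Wh -> kWh
equivalent_homes = total_kwh_per_day / HOME_KWH_PER_DAY

print(f"Daily energy: {total_kwh_per_day:,.0f} kWh")
print(f"Equivalent U.S. homes: {equivalent_homes:,.0f}")
```

Under these assumptions the total comes to about a million kilowatt-hours a day, or roughly 34,000 homes, consistent with the "at least 33,000" figure above; a lower per-query estimate shrinks the number proportionally.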

To their credit, tech companies aren’t ignoring the challenge. Google just announced a $3 billion deal to source hydropower from two Pennsylvania facilities. The company is essentially betting it can build both the demand and the clean supply at the same time. Meanwhile, a $25 billion investment by Blackstone includes natural gas power plants that can come online quickly, but will burn fossil fuels and potentially drive up gas prices.

But AI’s energy crisis has two dimensions. First, clean energy takes years to build, while AI demand is increasing now. Second, we have to look at where and how this power runs.

At the moment, it runs on outdated infrastructure. A new report from the North American Electric Reliability Corp. finds that data centers are an emerging threat to grid stability because they pull huge amounts of power at unpredictable times.

Our power grid wasn’t built for this.

Trump’s Pennsylvania AI summit showcased American ambition, but it also highlighted a fundamental challenge: The United States is trying to build a 21st-century economy on a 20th-century platform. And we, as consumers, will pay the price.

This moment demands a two-pronged response that addresses not only the short-term opportunities but also the long-term risks.

In the long term, America must invest in upgrading our infrastructure. We need a power grid that can move energy from where it is abundant to where it is needed most.

A series of Biden-era bills, including the Inflation Reduction Act, represents the most significant progress since the Eisenhower era, but that funding is being cut under the current administration. The Big Beautiful Bill, in particular, rolls back clean energy funding that could have eased the infrastructure strain.

At the same time, we need to make sure our generation and use of AI is as efficient as possible. And to do that, we need disclosure.

The world’s largest AI companies are largely silent on energy reporting. Google stopped publishing AI-specific energy disclosures in 2024, even as its emissions have risen 48% since 2019. Microsoft’s emissions increased 30% from a 2020 baseline. OpenAI provides virtually no public energy data; CEO Sam Altman has offered only that an average query uses 0.34 watt-hours, and the company declined to provide specific figures when asked by MIT Technology Review.

Disclosure is not just about transparency. It’s about giving communities and policymakers the data they need to make informed decisions. We can’t manage what we can’t measure, and right now, we’re flying blind about AI’s true energy cost.

Once we understand the problem, government regulators can use the many tools in the policy arsenal to mitigate the ecological and energy strain: taxes that reflect true costs, mandatory efficiency requirements, and planned grid upgrades.

Without such measures, electricity bills will skyrocket and communities will bear risks they never agreed to take. Legislators and tech companies must act before the infrastructure crisis becomes irreversible. Technological investments should benefit all of us, not just shareholders.

Aya Saed, former counsel and legislative director for U.S. Rep. Alexandria Ocasio-Cortez, is the director of AI policy and strategy at Scope3 and the policy cochair for the Green Software Foundation. She is a public voices fellow with the OpEd Project in partnership with the PD Soros Fellowship for New Americans.






Microsoft Launches In-House AI Models to Reduce OpenAI Dependence


Microsoft’s Strategic Pivot in AI Development

Microsoft Corp. has unveiled its first in-house artificial intelligence models, marking a significant shift in its approach to AI technology. The company announced MAI-Voice-1, a specialized model for speech generation, and a preview version of MAI-1, a foundational model aimed at broader applications. This move comes amid growing tensions in Microsoft’s partnership with OpenAI, where the tech giant has invested billions but now seeks greater independence.

According to details reported in a recent article by Mashable, these models are designed to enhance Microsoft’s Copilot AI assistant, integrating into products like Bing and Windows. The launch raises questions about the future of Microsoft’s collaboration with OpenAI, as the company aims to reduce its reliance on external AI providers.

Implications for the OpenAI Partnership

Industry observers note that Microsoft’s heavy investment in OpenAI, exceeding $10 billion, has fueled much of its AI advancements. However, disputes over intellectual property and revenue sharing have prompted this internal development push. The MAI-1 model, in particular, is being positioned as a direct competitor to OpenAI’s offerings, potentially challenging the startup’s dominance in generative AI.

As highlighted in reports from Reuters, Microsoft began training MAI-1 as early as last year, with parameters estimated at around 500 billion, making it a heavyweight contender against models like GPT-4. This internal effort is led by former executives from AI startup Inflection, bringing expertise to bolster Microsoft’s capabilities.

Technical Innovations and Efficiency Gains

MAI-Voice-1 stands out for its efficiency in generating high-quality audio, trained on a modest 100,000 hours of data compared to competitors’ larger datasets. This approach not only cuts costs but also accelerates deployment, allowing Microsoft to offer faster, more affordable AI features to consumers and businesses.

The preview of MAI-1 focuses on text-based tasks, with plans for multimodal expansions including image and video processing. Insights from Technology Magazine suggest these models could provide advanced problem-solving abilities, integrating seamlessly into Microsoft’s ecosystem and potentially lowering operational expenses.

Market Competition and Future Outlook

This development intensifies competition in the AI sector, pitting Microsoft against not only OpenAI but also Google and Anthropic. By building in-house models, Microsoft aims to control its AI destiny, mitigating risks associated with third-party dependencies. Analysts predict this could lead to more innovative features in Copilot, enhancing user experiences across Microsoft’s software suite.

However, the partnership with OpenAI isn’t dissolving entirely; Microsoft continues to leverage OpenAI’s technology while developing its own. A report from CNBC indicates that internal testing of MAI-1 is already underway, with public previews signaling rapid progress toward widespread adoption.

Broader Industry Ramifications

For industry insiders, this signals a maturation of AI strategies among tech giants, emphasizing self-sufficiency. Microsoft’s move could inspire similar initiatives elsewhere, fostering a more diverse array of AI tools. Yet, challenges remain, including ethical considerations and regulatory scrutiny over AI’s societal impact.

Ultimately, as Microsoft refines these models, the tech world watches closely. The balance between collaboration and competition will define the next phase of AI innovation, with Microsoft’s in-house efforts potentially reshaping market dynamics for years to come.


