AI Research
Researchers question AI data centers’ ‘eye-popping’ energy demands

Jonathan Koomey remembers the hype around electricity demand during the dot-com bubble. Now, he is raising the alarm over stark parallels he sees between that decades-old speculation and the projected energy demand of artificial intelligence data centers.
Regional grid operators in the U.S. are projecting major increases in the amount of power required in the coming years, with artificial intelligence as the primary driver. The need to build more capacity on the power grid is already raising costs, as utilities scramble to build power plants and Big Tech invests in nuclear power. However, some experts are skeptical of the AI-driven growth projections, warning that overestimating demand could leave consumers paying higher prices.
In an interview with Straight Arrow News, Koomey described how, in the late 1990s, many people believed that computers would use half of all the electricity produced in the U.S. within a decade or two.
“It turned out that across the board, these claims were vast exaggerations,” said Koomey, who has spent his career researching the energy and environmental effects of information technology, including more than two decades as a scientist at the Lawrence Berkeley National Laboratory.
Koomey is part of a growing number of researchers and consumer advocates who worry that the power consumption hype is playing out again with AI.
“Both the utilities and the tech companies have an incentive to embrace the rapid growth forecast for electricity use,” Koomey said.
How much is electricity demand expected to spike?
Across the country, the regional power grid operators that manage real-time supply and demand to keep power flowing are singing the same song: America needs more power.
America’s largest regional power grid expects a 42% increase in peak demand for electricity by 2040.

In addition to AI data centers, cloud computing, cryptocurrency mining, new manufacturing facilities, and the electrification of heating and transportation also contribute to growing electricity demand. But artificial intelligence is the fastest-growing factor driving power demands.
The largest electric grid, PJM, expects to see peak electricity demand — the maximum amount of power needed in a single instant — increase by 70,000 megawatts in the next 15 years. That’s 42% more than the current peak on the grid that serves 65 million people from Washington, D.C. to Chicago.
In Texas, growth is expected to be even faster. Peak demand is projected to nearly double from 2024 levels by 2030, according to the state’s main grid operator, the Electric Reliability Council of Texas (ERCOT).
Nationwide, estimates vary. Grid Strategies, a consulting company in the power sector, projects that the U.S. will need an additional 128 gigawatts of power — a nearly 16% increase — by 2029. Some estimates are even higher. Consulting firm ICF expects roughly 25% higher demand nationwide by 2030 and 78% more power needed by 2050, with 2023 serving as the baseline.
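As a sanity check, the percentages and absolute figures quoted above imply rough baselines for current peak demand. Here is a back-of-envelope sketch in Python that uses only the numbers in this article; the implied baselines are derived for illustration rather than figures reported by the grid operators.

```python
# Back-of-envelope arithmetic using only the figures quoted in this article.
# The "implied" baselines below are illustrative, not reported values.

# PJM: a 70,000 MW increase is described as 42% above the current peak,
# which implies a current peak of roughly 167,000 MW.
pjm_increase_mw = 70_000
pjm_growth_fraction = 0.42
pjm_current_peak_mw = pjm_increase_mw / pjm_growth_fraction
print(f"Implied PJM current peak: {pjm_current_peak_mw:,.0f} MW")

# Grid Strategies: 128 GW of new demand is called a roughly 16% increase by 2029,
# implying a nationwide baseline on the order of 800 GW of peak demand.
us_increase_gw = 128
us_growth_fraction = 0.16
us_baseline_gw = us_increase_gw / us_growth_fraction
print(f"Implied nationwide baseline: {us_baseline_gw:,.0f} GW")
```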
For most Americans, electricity rates are already rising. And as demand for power increases, consumers are likely to pay even more.
What are ‘phantom’ data centers?
Utility companies are also counting duplicate data centers in their growth projections.
“One data center may shop around a few different locations before it decides where it finally wants to be,” said Cathy Kunkel, a consultant at the Institute for Energy Economics and Financial Analysis.
When a tech company wants to build a data center, it files a request with a utility company to connect to the grid. The utility companies report how many requests they’ve received to regional grid operators like ERCOT and PJM, and the grid operators use that information to estimate how much electricity demand will grow.
However, utility companies do not typically communicate with each other to determine whether they are receiving duplicate requests from the same technology companies. This makes it difficult to gauge future electricity needs accurately. The Wall Street Journal recently reported that some utility companies’ projections for future power demand are several times higher than their existing peak demand.
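To see why double counting matters, here is a minimal sketch with hypothetical projects and megawatt figures. Summing requests across utilities, as happens when utilities do not compare notes, overstates demand relative to counting each project once.

```python
# Hypothetical illustration of duplicate interconnection requests.
# Project names, utilities and megawatt figures are made up.

# Each tuple: (data center project, utility it applied to, MW requested)
requests = [
    ("project_a", "utility_1", 500),
    ("project_a", "utility_2", 500),  # same project shopping a second location
    ("project_a", "utility_3", 500),  # ...and a third
    ("project_b", "utility_1", 300),
]

# Naive forecast: each utility reports its requests and the grid operator sums them all.
naive_total_mw = sum(mw for _, _, mw in requests)

# Deduplicated forecast: count each project once, since it will be built in only one place.
unique_projects = {}
for project, _, mw in requests:
    unique_projects[project] = mw
dedup_total_mw = sum(unique_projects.values())

print(f"Summed across utilities: {naive_total_mw} MW")     # 1800 MW
print(f"Counting each project once: {dedup_total_mw} MW")  # 800 MW
```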
“Many data centers that are talked about and proposed and in some cases even announced will never get built,” said Sean O’Leary, a senior researcher at the Ohio River Valley Institute, in an interview with SAN.
Nevertheless, utility companies are racing to secure more power sources to meet growing demands.
Where are data centers getting the power?
Although Big Tech has announced investments in nuclear power, some of those deals are only intended to keep existing plants online. Other investments, such as Google’s backing of still-unproven nuclear fusion technology, are promising but remain speculative, long-term bets.
To meet immediate new demand, many utility companies are turning to gas-fired power plants. Entergy Louisiana recently received state approval to build three new gas power plants to serve a Meta data center currently under construction. The power plants will add enough electricity to the grid to power two new cities the size of New Orleans.
Nationwide, about 114,000 megawatts of new gas power plants are currently being built or in a pre-construction planning phase, according to reporting from Reuters.
When a utility company seeks regulators’ approval for a new power plant, it also asks for a guaranteed profit margin on the new infrastructure investment, which is typically recovered through higher rates.
O’Leary told SAN the “utility is going to be able to recover the cost of that power plant plus a commercial rate of return, whether or not that plant is ultimately needed or not.”
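A simplified sketch of how that cost-of-service arrangement works is below. The plant cost, depreciation period and 10% allowed return are hypothetical numbers chosen for illustration, and for simplicity the return is computed on the full initial investment rather than a declining rate base.

```python
# Hypothetical illustration of cost-of-service recovery: ratepayers fund both
# the plant's cost and an allowed return on the utility's investment,
# regardless of how heavily the plant ends up being used.
plant_capital_cost = 2_000_000_000  # $2 billion plant (hypothetical)
allowed_return = 0.10               # 10% allowed rate of return (assumed)
depreciation_years = 40             # recovery period (assumed)

annual_revenue_requirement = (
    plant_capital_cost / depreciation_years  # recover the investment over time
    + plant_capital_cost * allowed_return    # plus a return on the rate base
)
print(f"Recovered from ratepayers each year: ${annual_revenue_requirement:,.0f}")
```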
One similarity Koomey sees between the dot-com boom and today’s electricity growth is that “there’s a whole ecosystem of people who are very willing to make simple extrapolations,” and it’s in the interest of both Big Tech and the utility companies to generate “eye-popping numbers.”
Are the AI companies profitable?
Critics of the AI growth projections point out that the technology has not yet proven it can be a source of profit.
“The forecasts that we’re seeing right now are basically what the tech industry wants to happen and what they’re selling to their investors,” Kunkel said. “But the reality is that their financials don’t match that picture that they’re painting.”
In 2024, OpenAI, the company behind ChatGPT, reported a $5 billion loss, according to CNBC. Anthropic, which runs the chatbot Claude, was also unprofitable in 2024, and one recent analysis suggests the company is losing money on its paid subscribers. Nevertheless, the promise of artificial intelligence continues to attract new investors, with these pure-play AI companies drawing valuations in the hundreds of billions of dollars.
Meanwhile, Big Tech companies like Meta and Google are racing to recruit the best AI researchers, spending hundreds of millions on individual salaries.
So far, Koomey and other critics are skeptical that the public’s interest in AI will match the investments that Big Tech is making. Referring to recent viral incidents of Google’s AI tools offering flawed results, Koomey said, “These things still tell you to eat rocks and put glue in your pizza.”
“It would not be weird for an emerging technology to suddenly hit a ceiling in terms of popularity,” said Alex de Vries, a researcher at VU Amsterdam.
What are the physical constraints of meeting demand?
Even if consumer demand for AI matches Big Tech’s ambitions, the industry might first run into physical constraints.
Alex de Vries is the founder of Digiconomist, a website where he writes about the economics of digital trends, including artificial intelligence.
In an interview with SAN, de Vries said tech companies are already using all the advanced computer chips that manufacturers can produce. That suggests growing demand for AI systems, but a recent analysis from London Economics International found that supply-chain constraints mean many proposed data centers will not be able to obtain the necessary AI chips.
“You can’t really predict much further than like one to two years into the future,” de Vries said, because “the supply chain capacity is fully utilized.”
Will AI models get more efficient?
The power needs of artificial intelligence also vary depending on the use case. For example, creating an AI-generated video requires much more energy than interacting with a text-based AI chatbot. New “reasoning” models capable of advanced analysis will require more power than conventional large language models, de Vries said.
Energy efficiency also varies depending on the AI model. Earlier this year, Chinese AI model DeepSeek demonstrated functionality similar to ChatGPT’s while using up to 90% less electricity.
“For a data center, the single largest variable cost is the electric bill,” said O’Leary, who expects Big Tech to look for ways their AI models can become more energy efficient.
However, Tyson Slocum, director of the energy program at the nonprofit Public Citizen, noted that consumers are already experiencing increased electric bills as the market adjusts to increased electricity demand.
In an interview with SAN, Slocum said utility companies and the tech industry “intentionally create a sense of hype and panic around the energy consumption of AI, because both had a shared financial interest in doing so.”
AI Research
Pentagon research official wants to have AI on every desktop in 6 to 9 months

The Pentagon is angling to introduce artificial intelligence across its workforce within nine months following the reorganization of its key AI office.
Emil Michael, under secretary of defense for research and engineering at the Department of Defense, talked about the agency’s plans for introducing AI to its operations as it continues its modernization journey.
“We want to have an AI capability on every desktop — 3 million desktops — in six or nine months,” Michael said during a Politico event on Tuesday. “We want to have it focus on applications for corporate use cases like efficiency, like you would use in your own company … for intelligence and for warfighting.”
This announcement follows the recent shakeups and restructuring of the Pentagon’s main artificial intelligence office. A senior defense official said the Chief Digital and Artificial Intelligence Office will serve as a new addition to the department’s research portfolio.
Michael also said he is “excited” about the restructured CDAO, adding that its new role will pivot toward research, similar to the Defense Advanced Research Projects Agency and the Missile Defense Agency. The change is intended to strengthen research and engineering priorities that advance AI for the armed forces without taking the agency’s focus away from AI deployment and innovation.
“To add AI to that portfolio means it gets a lot of muscle to it,” he said. “So I’m spending at least a third of my time –– maybe half –– rethinking how the AI deployment strategy is going to be at DOD.”
Applications coming out of the CDAO and related agencies will then be tailored to corporate workloads, such as efficiency-related work, according to Michael, along with intelligence and warfighting needs.
The Pentagon first stood up the CDAO and brought on its first chief digital and artificial intelligence officer in 2022 to advance the agency’s AI efforts.
The restructuring of the CDAO this year garnered attention due to its pivotal role in investigating the defense applications of emerging technologies and defense acquisition activities. Job cuts within the office added another layer of concern, with reports estimating a 60% reduction in the CDAO workforce.
AI Research
Panelists Will Question Who Controls AI | ACS CC News
Artificial intelligence (AI) has become one of the fastest-growing technologies in the world today. In many industries, individuals and organizations are racing to better understand AI and incorporate it into their work. Surgery is no exception, and that is why Clinical Congress 2025 has made AI one of the six themes of its Opening Day Thematic Sessions.
The first full day of the conference, Sunday, October 5, will include two back-to-back Panel Sessions on AI. The first session, “Using ChatGPT and AI for Beginners” (PS104), offers a foundation for surgeons not yet well versed in AI. The second, “AI: Who Is In Control?” (PS110), will offer insights into the potential upsides and drawbacks of AI use, as well as its limitations and possible future applications, so that surgeons can incorporate this technology into their clinical care safely and effectively.
“AI: Who Is In Control?” will be moderated by Anna N. Miller, MD, FACS, an orthopaedic surgeon at Dartmouth Hitchcock Medical Center in Lebanon, New Hampshire, and Gabriel Brat, MD, MPH, MSc, FACS, a trauma and acute care surgeon at Beth Israel Deaconess Medical Center and an assistant professor at Harvard Medical School, both in Boston, Massachusetts.
In an interview, Dr. Brat shared his view that the use of AI is not likely to replace surgeons or decrease the need for surgical skills or decision-making. “It’s not an algorithm that’s going to be throwing the stitch. It’s still the surgeon.”
Nonetheless, he said that the starting presumption of the session is that AI is likely to be highly transformative to the profession over time.
“Once it has significant uptake, it’ll really change elements of how we think about surgery,” he said, including creating meaningful opportunities for improvements.
The key question of the session, therefore, is not whether to engage with AI, but how to do so in ways that ensure the best outcomes: “We as surgeons need to have a role in defining how to do so safely and effectively. Otherwise, people will start to use these tools, and we will be swept along with a movement as opposed to controlling it.”
To that end, Dr. Brat explained that the session will offer “a really strong translational focus by people who have been in the trenches working with these technologies.” He and Dr. Miller have specifically chosen an “all-star panel” designed to represent academia, healthcare associations, and industry.
The panelists include Rachael A. Callcut, MD, MSPH, FACS, who is the division chief of trauma, acute care surgery and surgical critical care as well as associate dean of data science and innovation at the University of California-Davis Health in Sacramento, California. She will share the perspective on AI from academic surgery.
Genevieve Melton-Meaux, MD, PhD, FACS, FACMI, the inaugural ACS Chief Health Informatics Officer, will present on AI usage in healthcare associations. She also is a colorectal surgeon and the senior associate dean for health informatics and data science at the University of Minnesota and chief health informatics and AI officer for Fairview Health Services, both in Minneapolis.
Finally, Khan Siddiqui, MD, a radiologist and serial entrepreneur who is the cofounder, chairman, and CEO of a company called HOPPR AI, will present the view from industry. HOPPR AI is a for-profit company focused on building AI apps for medical imaging. As a radiologist, Dr. Siddiqui represents a medical specialty that is thought to likely undergo sweeping change as AI is incorporated into image-reading and diagnosis. His comments will focus on professional insights relevant to surgeons.
Their presentations will provide insights on general usage of AI at present, as well as predictions on what the landscape for AI in healthcare will look like in approximately 5 years. The session will include advice on what approaches to AI may be most effective for surgeons interested in ensuring positive outcomes and avoiding negative ones.
AI is a recurring theme throughout Clinical Congress 2025. In addition to sessions that address AI across the 4 days of the conference, researchers will present studies that involve AI in their methods, starting presumptions, and/or potential applications to practice.
Access the Interactive Program Planner for more details about Clinical Congress 2025 sessions.
AI Research
Our new study found AI is wreaking havoc on university assessment

Artificial intelligence (AI) is wreaking havoc on university assessments and exams.
Thanks to generative AI tools, such as ChatGPT, students can now generate essays and assessment answers in seconds. As we have noted in a study earlier this year, this has left universities scrambling to redesign tasks, update policies, and adopt new cheating detection systems.
But the technology keeps changing as they do this, and there are constant reports of students cheating their way through their degrees.
The AI and assessment problem has put enormous pressure on institutions and teachers. Today’s students need assessment tasks to complete, as well as confidence that the work they are doing matters. The community and employers need assurance that university degrees are worth something.
In our latest research, we argue the problem of AI and assessment is even more difficult than media debates have made out.
It’s not something that can just be fixed once we find the “correct solution”. Instead, the sector needs to recognise AI in assessment is an intractable “wicked” problem, and respond accordingly.
What is a wicked problem?
The term “wicked problem” was made famous by theorists Horst Rittel and Melvin Webber in the 1970s. It describes problems that defy neat solutions.
Well-known examples include climate change, urban planning and healthcare reform.
Unlike “tame” problems, which can be solved with enough time and resources, wicked problems have no single correct answer. In fact there is no “true” or “false” answer, only better or worse ones.
Wicked problems are messy, interconnected and resistant to closure. There is no way to test the solution to a wicked problem. Attempts to “fix” the issue inevitably generate new tensions, trade-offs and unintended consequences.
However, admitting there are no “correct” solutions does not mean there are not better and worse ones. Rather, it allows us the space to appreciate the nature and necessity of the trade-offs involved.
Our research
In our latest research, we interviewed 20 university teachers leading assessment design work at Australian universities.
We recruited participants by asking for referrals across four faculties at a large Australian university.
We wanted to speak to teachers who had made changes to their assessments because of generative AI. Our aim was to better understand what assessment choices were being made, and what challenges teachers were facing.
When we were setting up our research we didn’t necessarily think of AI and assessment as a “wicked problem”. But this is what emerged from the interviews.
Our results
Interviewees described dealing with AI as an impossible situation, characterised by trade-offs. As one teacher explained:
We can make assessments more AI-proof, but if we make them too rigid, we just test compliance rather than creativity.
In other words, the solution to the problem was not “true or false”, only better or worse.
Or as another teacher asked:
Have I struck the right balance? I don’t know.
There were other examples of imperfect trade-offs. Should assessments allow students to use AI (like they will in the real world)? Or totally exclude it to ensure they demonstrate independent capability?
Should teachers set more oral exams – which appear more AI-resistant than other assessments – even if this increases workload and disadvantages certain groups?
As one teacher explained,
250 students by […] 10 min […] it’s like 2,500 min, and then that’s how many days of work is it just to administer one assessment?
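Finishing that back-of-the-envelope calculation (a small sketch; the eight-hour working day is an assumption):

```python
# Quick arithmetic behind the quote above. The eight-hour working day is an assumption.
students = 250
minutes_per_oral_exam = 10
total_minutes = students * minutes_per_oral_exam  # 2,500 minutes
working_days = total_minutes / 60 / 8             # roughly 5.2 eight-hour days
print(f"{total_minutes} minutes is about {working_days:.1f} full working days per assessment")
```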
Teachers could also set in-person hand-written exams, but this does not necessarily test other skills students need for the real world. Nor can this be done for every single assessment in a course.
The problem keeps shifting
Meanwhile, teachers are expected to redesign assessments immediately, while the technology itself keeps changing. GenAI tools such as ChatGPT are constantly releasing new models, as well as new functionalities, while new AI learning tools (such as AI text summarisers for unit readings) are increasingly ubiquitous.
At the same time, educators need to keep up with all their usual teaching responsibilities (where we know they are already stressed and stretched).
This is a sign of a messy problem, which has no closure or end point. Or as one interviewee explained:
We just do not have the resources to be able to detect everything and then to write up any breaches.
What do we need to do instead?
The first step is to stop pretending AI in assessment is a simple, “solvable” problem.
Treating it that way not only misreads what’s going on, it can also lead to paralysis, stress, burnout and trauma among educators, and to policy churn as institutions keep trying one “solution” after the next.
Instead, AI and assessment must be treated as something to be continually negotiated rather than definitively resolved.
This recognition can lift a burden from teachers. Instead of chasing the illusion of a perfect fix, institutions and educators can focus on building processes that are flexible and transparent about the trade-offs involved.
Our study suggests universities give teaching staff certain “permissions” to better address AI.
This includes the ability to compromise to find the best approach for their particular assessment, unit and group of students. All potential solutions will have trade-offs – oral examinations might be better at assuring learning but may also disadvantage certain groups, for example students for whom English is a second language.
Perhaps it also means teachers don’t have time for other course components and this might be OK.
But, like so many of the trade-offs involved in this problem, the weight of responsibility for making the call will rest on the shoulders of teachers. They need our support to make sure the weight doesn’t crush them.
David Boud receives funding from the Australian Research Council, and has in the past received funding from the Office for Learning and Teaching.
Margaret Bearman receives funding from the Novo Nordisk Fonden and the Royal College of Physicians and Surgeons of Canada. In the past she has received funding from a broad range of organisations including the Tertiary Education Quality and Standards Agency (TEQSA), the Office for Learning and Teaching, Victorian and Commonwealth governments and a range of health professional education organisations, including the College of Intensive Care Medicine and the Royal Australasian College of Surgeons.
Phillip Dawson receives funding from the Australian Research Council, and has in the past received funding from the Tertiary Education Quality and Standards Agency (TEQSA), the Office for Learning and Teaching, and educational technology companies Turnitin, Inspera and NetSpot.
Thomas Corbin does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
This article was originally published on The Conversation. Read the original article.