
AI Research

Carnegie Mellon Research Forecasts Nation’s AI Energy Needs



As artificial intelligence and digital infrastructure grow at unprecedented rates, researchers at Carnegie Mellon University, the birthplace of AI, are building sophisticated models to forecast the impact on the nation’s power grid and to forge the path toward resilient, cost-effective and low-emission energy systems.

Michael Blackhurst

The Open Energy Outlook initiative at CMU’s Wilton Scott Institute for Energy Innovation is modeling how large-scale digital infrastructure growth — driven by AI and cryptocurrency — could reshape the U.S. power sector, using data to explore scenarios that vary by region, resource availability and planning strategies.

“We’re pointing out challenges and wanted to offer some possible solutions,” said Michael Blackhurst, executive director of the initiative in Carnegie Mellon’s Engineering and Public Policy department in the College of Engineering. “The degree to which policy actors are coordinating around the solutions is going to be important in terms of how effective those solutions are.”

According to the report, “Electricity Grid Impacts of Rising Demand from Data Centers and Cryptocurrency Mining Operations,” policies and planning put in place now, informed by estimates of digital infrastructure demand over the next five years, could promote both digital progress and grid stability.

AI boom stressing the grid

The digital infrastructure boom is outpacing the U.S. electricity system’s ability to respond. Data centers offer potential benefits but, without proactive and coordinated planning, risk locking in higher emissions and driving up prices for households. According to the report, data centers and cryptocurrency mining will increase electricity demand by 350% by 2030.
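The headline 350% figure implies a steep compound growth rate. As a quick back-of-the-envelope check (assuming the increase is measured over a five-year window, which this excerpt of the report does not spell out), the implied annual growth rate works out to roughly 35%:

```python
# Back-of-the-envelope: what annual growth rate does a 350% increase
# over five years imply? (The five-year window is an assumption.)
growth_pct = 350                     # "increase by 350%" means 4.5x the baseline
multiple = 1 + growth_pct / 100      # 4.5
years = 5                            # roughly 2025 -> 2030
cagr = multiple ** (1 / years) - 1   # compound annual growth rate
print(f"Implied growth: ~{cagr:.0%} per year")
```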

Blackhurst said that until recently, the data on data center capacity and energy demand needed to make these kinds of predictions has been scarce.

Model helps map smarter solutions

The team fed the data into a model tasked with finding solutions that meet the demands of digital infrastructure at the lowest cost.

“Our models are least-cost optimization models,” Blackhurst said. “What that means is that given the demands placed on the electric power system — for servers, digital infrastructure and all the other demands — it’ll find the least costly set of technologies to meet those demands.”
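The Open Energy Outlook models are full capacity-expansion optimizations, but the least-cost idea Blackhurst describes can be illustrated with a toy merit-order dispatch, in which demand is met with the cheapest available capacity first. All plant names, capacities and prices below are hypothetical:

```python
def least_cost_dispatch(plants, demand_mw):
    """Greedy merit-order dispatch: fill demand with the cheapest capacity first.

    plants: list of (name, capacity_mw, cost_per_mwh) tuples.
    Returns a dict of dispatched MW per plant and the total hourly cost.
    """
    dispatch, remaining, cost = {}, demand_mw, 0.0
    for name, capacity, price in sorted(plants, key=lambda p: p[2]):
        used = min(capacity, remaining)      # dispatch up to this plant's capacity
        if used > 0:
            dispatch[name] = used
            cost += used * price
            remaining -= used
        if remaining == 0:
            break
    if remaining > 0:
        raise ValueError(f"unserved demand: {remaining} MW")
    return dispatch, cost

# Hypothetical fleet: zero-marginal-cost renewables, then gas in cost order.
plants = [
    ("solar", 300, 0.0),
    ("wind", 200, 0.0),
    ("gas_cc", 500, 45.0),
    ("gas_peaker", 200, 90.0),
]
dispatch, hourly_cost = least_cost_dispatch(plants, 800)
```

A real capacity-expansion model adds investment decisions, transmission and emissions constraints on top of this dispatch logic, but the objective is the same: meet every demand at the lowest total cost.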


Param Singh

For industry leaders and developers, right now those energy demands are greater than the supply, said Param Singh, Carnegie Bosch Professor of Business Technologies and Marketing and associate dean for research at Carnegie Mellon’s Tepper School of Business.

“The key constraint today isn’t what hyperscalers are willing to pay — it’s whether the electricity is actually available,” he said. “These companies can manage higher energy costs, but in many regions, the supply simply isn’t there at scale.”

To fill that gap, utilities often turn to fossil fuel-powered plants.

“These are faster to deploy than renewable energy sources, but raise questions about long-term sustainability and local impact,” said Singh, whose research analyzes how location choices, energy availability and incentive structures affect the economic and environmental outcomes for the communities that host these facilities.

Short-term constraints, long-term possibilities

The Open Energy Outlook research team modeled ways the grid could expand to handle the rising demand while keeping emissions and costs in check. By combining consumption, capacity and climate data, the team’s work can give regional and national planners clearer insight into how today’s decisions could shape the energy system for decades to come without locking in high-emissions infrastructure.


Paulina Jaramillo

“There’s a lot of interest in Congress on AI, energy and the emissions implications of AI,” said Paulina Jaramillo, a member of the research team and a Trustee Professor in Engineering and Public Policy at CMU. “Since we have the model, I asked the team if they could do some simulations to see if we could quantify the impact.”

Research from the Open Energy Outlook team suggests several state and federal policies for consideration, including fair cost allocation, which would shift costs to large users instead of individual families.

Economics of growing demand


Nicholas Muller

A number of factors would influence how each individual ratepayer experiences increased costs, said Nicholas Z. Muller, Lester and Judith Lave Professor of Economics, Engineering and Public Policy in the Tepper School.

Additionally, data centers built with their own power generation source could be largely self-contained, with less impact on the electric grid than those without.

“The way in which these costs often get covered is through a process that involves producers and the Public Utility Commission, and this ultimately shapes the retail prices and rates that we see as consumers,” Muller said. “Some of that then is shouldered by the downstream entities that are ultimately using power through retail rates.”

Powering a more reliable future

Data centers need consistent sources of energy, which can be a challenge when relying on variable renewable energy.

“They can’t suffer interruptions,” Muller said. “So we need to think about coupling the variable resources with backup technology or stored energy. And that’s where batteries come in.”

One solution from Muller’s research is to provide incentives that account for system instabilities and avoid imbalances, coordinated with power generation sources and transmission.


Chris Telmer

Colocating self-contained power generating sources alongside data centers, as Muller mentioned, could be effective depending on other factors, said Chris Telmer, associate professor of financial economics at CMU’s Tepper School.

“Colocation of generation and load poses challenges to the traditional utility-company model,” he said. “Regulation will need to be more flexible than it has been in the past.”

Smarter tech, smaller footprint

In the same way the increased efficiency of LED lights reduced energy demand compared to incandescents, future demand may depend on technological advancement, work that Carnegie Mellon researchers already have well underway.

For example, CMU research teams are studying large language models specialized for specific tasks, which can make them more efficient, as well as ways more AI models could run locally, known as “edge computing,” instead of relying on data centers at all.

These CMU-led innovations could ultimately lead to reduced overall forecasted demands, Telmer said.

“Discoveries could be made regarding efficiencies in computer chips and other elements that power the AI industry, and it’s that uncertainty that is really the important one.”

Research from Carnegie Mellon, with its interdisciplinary culture and its history of innovation in engineering, technology and business, has an important role to play, Singh said.

For instance, John Kitchin, the John E. Swearingen Professor of Chemical Engineering in CMU’s College of Engineering, is leading research exploring ways to use chemical catalysts to convert energy stored from renewable sources into fuels for transportation and industry.

What’s next for energy policy and planning

Future research could include testing the different policy strategies in the model to help predict or lessen financial impacts, said Jaramillo, who is currently serving as a climate Congressional Fellow in the Science & Technology Policy Fellowship program managed by the American Association for the Advancement of Science. 

“The current analysis doesn’t include any simulations on the effect of different policy mechanisms, so next steps would go a little deeper into our policy options and how effective they would be,” she said.

For energy policy to be effective and fair, research like what is taking place at Carnegie Mellon can lead to tools that show us how interventions ripple through the grid, the economy and emissions trajectories, Singh said.

“This is exactly the kind of systems problem Carnegie Mellon is built to tackle — where technical, economic and policy considerations need to be integrated to guide smarter, more balanced decisions. In my mind, this is really an economics plus engineering problem, and Carnegie Mellon is a great place to consider this,” he said.

A peer-reviewed version of the report is forthcoming. Projections used for the modeling come from Lawrence Berkeley National Laboratory, the Electric Power Research Institute and Bloomberg.

The Open Energy Outlook research initiative is a collaboration between the Scott Institute for Energy Innovation at Carnegie Mellon and North Carolina State University.




Endangered languages AI tools developed by UH researchers


University of Hawaiʻi at Mānoa researchers have made a significant advance in studying how artificial intelligence (AI) understands endangered languages. This research could help communities document and maintain their languages, support language learning and make technology more accessible to speakers of minority languages.

The paper, by Kaiying Lin, a PhD graduate in linguistics from UH Mānoa, and Department of Information and Computer Sciences Assistant Professor Haopeng Zhang, introduces the first benchmark for evaluating large language models (AI systems that process and generate text) on low-resource Austronesian languages. The study focuses on three Formosan languages of Taiwan—Atayal, Amis and Paiwan—Indigenous languages that are at risk of disappearing.

Using a new benchmark called FORMOSANBENCH, Lin and Zhang tested AI systems on tasks such as machine translation, automatic speech recognition and text summarization. The findings revealed a large gap between AI performance in widely spoken languages such as English, and these smaller, endangered languages. Even when AI models were given examples or fine-tuned with extra data, they struggled to perform well.
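FORMOSANBENCH scores models with the standard metrics for each task (for example, reference-based metrics for machine translation). The basic shape of reference-based scoring can be sketched with a toy token-overlap F1; this is purely illustrative and is not the benchmark's actual metric:

```python
from collections import Counter

def token_f1(hypothesis: str, reference: str) -> float:
    """Toy reference-based score: F1 over tokens shared between the model's
    output and a gold reference. Real MT evaluation uses metrics such as
    BLEU or chrF; this only illustrates the comparison idea."""
    hyp, ref = Counter(hypothesis.split()), Counter(reference.split())
    overlap = sum((hyp & ref).values())   # multiset intersection of tokens
    if overlap == 0:
        return 0.0
    precision = overlap / sum(hyp.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# A model that reproduces the reference scores 1.0; partial matches score lower.
print(token_f1("the cat sat", "the cat sat"))   # 1.0
print(token_f1("the cat", "the cat sat"))       # 0.8
```

The "large gap" the study reports corresponds to scores like these being far lower for Atayal, Amis and Paiwan than for high-resource languages such as English.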

“These results show that current AI systems are not yet capable of supporting low-resource languages,” Lin said.

Zhang added, “By highlighting these gaps, we hope to guide future development toward more inclusive technology that can help preserve endangered languages.”

The research team has made all datasets and code publicly available to encourage further work in this area. A preprint of the study is available online, and the study has been accepted to the 2025 Conference on Empirical Methods in Natural Language Processing (EMNLP) in Suzhou, China, an internationally recognized premier AI conference.

The Department of Information and Computer Sciences is housed in UH Mānoa’s College of Natural Sciences, and the Department of Linguistics is housed in UH Mānoa’s College of Arts, Languages & Letters.




OpenAI reorganizes research team behind ChatGPT’s personality



OpenAI is reorganizing its Model Behavior team, a small but influential group of researchers who shape how the company’s AI models interact with people, TechCrunch has learned.

In an August memo to staff seen by TechCrunch, OpenAI’s chief research officer Mark Chen said the Model Behavior team — which consists of roughly 14 researchers — would be joining the Post Training team, a larger research group responsible for improving the company’s AI models after their initial pre-training.

As part of the changes, the Model Behavior team will now report to OpenAI’s Post Training lead Max Schwarzer. An OpenAI spokesperson confirmed these changes to TechCrunch.

The Model Behavior team’s founding leader, Joanne Jang, is also moving on to start a new project at the company. In an interview with TechCrunch, Jang says she’s building out a new research team called OAI Labs, which will be responsible for “inventing and prototyping new interfaces for how people collaborate with AI.”

The Model Behavior team has become one of OpenAI’s key research groups, responsible for shaping the personality of the company’s AI models and for reducing sycophancy — which occurs when AI models simply agree with and reinforce user beliefs, even unhealthy ones, rather than offering balanced responses. The team has also worked on navigating political bias in model responses and helped OpenAI define its stance on AI consciousness.

In the memo to staff, Chen said that now is the time to bring the work of OpenAI’s Model Behavior team closer to core model development. By doing so, the company is signaling that the “personality” of its AI is now considered a critical factor in how the technology evolves.

In recent months, OpenAI has faced increased scrutiny over the behavior of its AI models. Users strongly objected to personality changes made to GPT-5, which the company said exhibited lower rates of sycophancy but seemed colder to some users. This led OpenAI to restore access to some of its legacy models, such as GPT-4o, and to release an update to make the newer GPT-5 responses feel “warmer and friendlier” without increasing sycophancy.


OpenAI and all AI model developers have to walk a fine line to make their AI chatbots friendly to talk to but not sycophantic. In August, the parents of a 16-year-old boy sued OpenAI over ChatGPT’s alleged role in their son’s suicide. The boy, Adam Raine, confided some of his suicidal thoughts and plans to ChatGPT (specifically a version powered by GPT-4o), according to court documents, in the months leading up to his death. The lawsuit alleges that GPT-4o failed to push back on his suicidal ideations.

The Model Behavior team has worked on every OpenAI model since GPT-4, including GPT-4o, GPT-4.5, and GPT-5. Before starting the unit, Jang worked on projects such as DALL-E 2, OpenAI’s early image-generation tool.

Jang announced in a post on X last week that she’s leaving the team to “begin something new at OpenAI.” The former head of Model Behavior has been with OpenAI for nearly four years.

Jang told TechCrunch she will serve as the general manager of OAI Labs, which will report to Chen for now. However, it’s early days, and it’s not clear yet what those novel interfaces will be, she said.

“I’m really excited to explore patterns that move us beyond the chat paradigm, which is currently associated more with companionship, or even agents, where there’s an emphasis on autonomy,” said Jang. “I’ve been thinking of [AI systems] as instruments for thinking, making, playing, doing, learning, and connecting.”

When asked whether OAI Labs will collaborate on these novel interfaces with former Apple design chief Jony Ive — who’s now working with OpenAI on a family of AI hardware devices — Jang said she’s open to lots of ideas. However, she said she’ll likely start with research areas she’s more familiar with.

This story was updated to include a link to Jang’s post announcing her new position, which was released after this story published. We also clarify the models that OpenAI’s Model Behavior team worked on.






Researchers can accurately tell someone’s age using AI and just a bit of DNA



At the Hebrew University of Jerusalem, scientists created a new way to tell someone’s age using just a bit of DNA. This method uses a blood sample and a small part of your genetic code to give highly accurate results. It doesn’t rely on external features or medical history like other age tests often do. Even better, it stays accurate no matter your sex, weight, or smoking status.

Bracha Ochana and Daniel Nudelman led the team, guided by Professors Kaplan, Dor, and Shemer. They developed a tool called MAgeNet that uses artificial intelligence to study DNA methylation patterns. DNA methylation is a process that adds chemical tags to DNA as the body ages. By training deep learning networks on these patterns, they predicted age with an average error of just 1.36 years in people under 50.
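MAgeNet itself is a deep network trained on single-molecule methylation patterns, but the underlying idea, and how an error figure like 1.36 years is measured, can be illustrated with a much simpler linear "clock" that maps methylation fractions at a few CpG sites to an age estimate. All weights and sample values below are invented for illustration:

```python
def predict_age(meth_fractions, weights, intercept):
    """Toy linear methylation clock: age ~= intercept + sum(w_i * m_i).
    (MAgeNet is a deep network, not a linear model; numbers here are invented.)"""
    return intercept + sum(w * m for w, m in zip(weights, meth_fractions))

def mean_absolute_error(predictions, actual_ages):
    """Average absolute gap between predicted and true ages, in years.
    The study reports this figure as 1.36 years for people under 50."""
    return sum(abs(p - a) for p, a in zip(predictions, actual_ages)) / len(actual_ages)

weights, intercept = [40.0, 25.0], 5.0                  # hypothetical clock parameters
cohort = [([0.50, 0.40], 36.0), ([0.80, 0.60], 51.0)]   # (CpG fractions, true age)
preds = [predict_age(m, weights, intercept) for m, _ in cohort]
mae = mean_absolute_error(preds, [age for _, age in cohort])
```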

How DNA Stores the Marks of Time

Time leaves invisible fingerprints on your cells. One of the most telling signs of age in your body is DNA methylation—the addition of methyl groups (CH₃) to your DNA. These chemical tags don’t change your genetic code, but they do affect how your genes behave. And over time, these tags build up in ways that mirror the passage of years.

450K/EPIC age-associated DNA methylation sites are often surrounded by additional CpGs correlated with age. (CREDIT: Cell Reports)

What makes the new method so effective is its focus. Instead of analyzing thousands of areas in the genome, MAgeNet zeroes in on just two short genomic regions. This tight focus, combined with high-resolution scanning at the single-molecule level, allows the AI to read the methylation patterns like a molecular clock. Professor Kaplan explains it simply: “The passage of time leaves measurable marks on our DNA. Our model decodes those marks with astonishing precision.”

Small Sample, Big Insights

The study, recently published in Cell Reports, used blood samples from more than 300 healthy individuals. It also included data from a 10-year follow-up of the Jerusalem Perinatal Study, which tracks health information across lifetimes. That long-term data, led by Professor Hagit Hochner from the Faculty of Medicine, helped the team confirm that MAgeNet works not just in the short term but also across decades.

Importantly, the model’s accuracy held up no matter the person’s sex, body mass index, or smoking history—factors that often throw off similar tests. That consistency means the tool could be widely used in both clinical and non-clinical settings.



From Medicine to Crime Scenes

The medical uses are easy to imagine. Knowing someone’s true biological age can help doctors make better decisions about care, especially when signs of aging don’t match the number of candles on a birthday cake. Personalized treatment plans could become more effective if based on what’s happening at the cellular level, not just what appears on a chart.

But this breakthrough also has major potential in the world of forensic science. Law enforcement teams could one day use this method to estimate the age of a suspect based solely on a few cells left behind. That’s a big step forward from current forensic DNA tools, which are good at identifying a person but struggle with age.

“This gives us a new window into how aging works at the cellular level,” says Professor Dor. “It’s a powerful example of what happens when biology meets AI.”

A schematic view of targeted PCR sequencing following bisulfite conversion, facilitating concurrent mapping of multiple neighboring CpG sites at a depth >5,000×. (CREDIT: Cell Reports)

Ticking Clocks Inside Our Cells

As they worked with the data, the researchers noticed something else: DNA doesn’t just age randomly. Some changes happen in bursts. Others follow slow, steady patterns—almost like ticking clocks inside each cell. These new observations may help explain why people age differently, even when they’re the same age chronologically.

“It’s not just about knowing your age,” adds Professor Shemer. “It’s about understanding how your cells keep track of time, molecule by molecule.”

This could also impact the growing field of longevity research. Scientists are increasingly interested in how biological aging differs from the simple count of years lived. The ability to measure age so precisely from such a small DNA sample may become a key tool in developing future anti-aging therapies or drugs that slow down cellular wear and tear.

A deep neural network for age prediction from fragment-level targeted DNA methylation data. (CREDIT: Cell Reports)

Why This Research Changes Everything

The method created by the Hebrew University team marks a turning point in how we think about aging, identity, and health. In the past, DNA told us who we are. Now it can tell us how old we truly are—and possibly how long we’ll stay healthy. The implications stretch from hospital rooms to courtrooms.

As the world faces rising healthcare demands from aging populations, tools like MAgeNet offer a smarter, faster way to assess risk, track longevity, and understand what aging really means. It’s no longer just a number on your ID.

Thanks to AI and a deep dive into the chemistry of life, age has become something you can measure with stunning accuracy, from the inside out.




