How much energy does your AI prompt use? It depends
A chatbot might not break a sweat every time you ask it to make your shopping list or come up with its best dad jokes. But over time, the planet might.
As generative AI such as large language models (LLMs) becomes ubiquitous, critical questions loom. For every interaction you have with AI, how much energy does it take, and how much carbon is emitted into the atmosphere?
Earlier this month, OpenAI CEO Sam Altman claimed that an “average ChatGPT query” uses energy equal to “about what an oven would use in a little over one second.” That’s within the realm of reason: AI research firm Epoch AI previously calculated a similar estimate. However, experts say the claim lacks key context, like what an “average” query even is.
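For a rough sense of the arithmetic behind that comparison, here is a minimal back-of-envelope sketch. The oven's power draw is an assumption (typical electric ovens pull roughly 2,000 to 3,000 watts while heating); Altman didn't specify a figure.

```python
# Back-of-envelope check of the "oven" comparison.
# The wattage is an assumed typical value, not a figure from OpenAI.
oven_watts = 2_000
seconds = 1.2               # "a little over one second"

joules = oven_watts * seconds
watt_hours = joules / 3_600
print(f"{watt_hours:.2f} Wh per query")   # ~0.67 Wh
```

Under those assumptions, a query lands well under a watt-hour, the same order of magnitude as Epoch AI's earlier estimate. But a single number hides enormous variation.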
“If you wanted to be rigorous about it, you would have to give a range,” says Sasha Luccioni, an AI researcher and climate lead at the AI firm Hugging Face. “You can’t just throw a number out there.”
Major players including OpenAI and Anthropic have the data, but they’re not sharing it. Instead, researchers can only piece together limited clues from open-source LLMs. One study published June 19 in Frontiers in Communication examined 14 such models, including those from Meta and DeepSeek, and found that some models produced up to 50 times more CO₂ emissions than others.
But these numbers offer only a narrow snapshot. The picture grows more dire after factoring in the carbon cost of training models, of manufacturing and maintaining the hardware that runs them, and of the scale at which generative AI is poised to permeate our daily lives.
“Machine learning research has been driven by accuracy and performance,” says Mosharaf Chowdhury, a computer scientist at the University of Michigan in Ann Arbor. “Energy has been the middle child that nobody wants to talk about.”
Science News spoke with four experts to unpack these hidden costs and what they mean for AI’s future.
What makes large language models so energy-hungry?
You’ll often hear people describe LLMs by the number of parameters they have. Parameters are the internal knobs the model adjusts during training to improve its performance. The more parameters, the more capacity the model has to learn patterns and relationships in data. GPT-4, for example, is estimated to have over a trillion parameters.
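To make "a trillion parameters" concrete, here is a small illustrative calculation; the layer dimensions are hypothetical, not GPT-4's. Even one fully connected layer of the kind stacked inside these models holds millions of knobs.

```python
# Parameter counting for a single fully connected layer (illustrative sizes).
d_in, d_out = 4_096, 4_096
weights = d_in * d_out          # one knob per input-output connection
biases = d_out                  # one additional knob per output
print(f"{weights + biases:,} parameters")   # 16,781,312 in this one layer
# Stack hundreds of such layers (plus attention blocks) and the count
# climbs into the billions, and beyond.
```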
“If you want to learn all the knowledge of the world, you need bigger and bigger models,” MIT computer scientist Noman Bashir says.
Models like these don’t run on your laptop. Instead, they’re deployed in massive data centers located across the world. In each center, the models are loaded onto servers containing powerful chips called graphics processing units (GPUs), which do the number crunching needed to generate helpful outputs. Generally, the more parameters a model has, the more chips are needed to run it, especially to get users the fastest response possible.
All of this takes energy. Already, 4.4 percent of all electricity in the U.S. goes toward data centers serving a variety of tech demands, including AI. By 2028, that share is projected to reach as much as 12 percent.
Why is it so difficult to measure the carbon footprint of LLMs?
Before anyone can ask a model a question, it must first be trained. During training, a model digests vast datasets and adjusts its internal parameters accordingly. It often takes weeks and thousands of GPUs, burning an enormous amount of energy. But since companies rarely disclose their training methods — what data they used, how much compute time or what kind of energy powered it — the emissions from this process are largely a black box.
The second half of the model’s life cycle is inference, which happens every time a user prompts the model. Over time, inference is expected to account for the bulk of a model’s emissions. “You train a model once, then billions of users are using the model so many times,” Chowdhury says.
But inference, too, is difficult to quantify. The environmental impact of a single query can vary dramatically depending on which data center it’s routed to, which energy grid powers the data center and even the time of day. Ultimately, only the companies running these models have a complete picture.
Is there any way to estimate an LLM’s energy use?
For training, not really. For inference, kind of.
OpenAI and Anthropic keep their models proprietary, but other companies such as Meta and DeepSeek release open-source versions of their AI products. Researchers can run these models locally and measure the energy consumed by their GPU as a proxy for how much energy inference would take.
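In practice, that proxy measurement can be as simple as polling the GPU's power sensor while the model generates text and integrating over time. Below is a minimal sketch using Nvidia's management library via the pynvml Python package; run_inference is a hypothetical stand-in for whatever local model call is being measured.

```python
# Minimal sketch of the proxy measurement: sample GPU power draw while a
# local model runs inference, then integrate over time to estimate energy.
# Assumes an Nvidia GPU; run_inference is a hypothetical stand-in for the
# model call being measured.
import time
import threading
import pynvml

def measure_watt_hours(run_inference, interval=0.1):
    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)   # first GPU
    samples = []                                    # power readings, in watts
    done = threading.Event()

    def sampler():
        while not done.is_set():
            # nvmlDeviceGetPowerUsage reports milliwatts
            samples.append(pynvml.nvmlDeviceGetPowerUsage(handle) / 1_000)
            time.sleep(interval)

    thread = threading.Thread(target=sampler)
    thread.start()
    run_inference()            # the workload being measured
    done.set()
    thread.join()
    pynvml.nvmlShutdown()

    joules = sum(samples) * interval   # watts x seconds
    return joules / 3_600              # convert to watt-hours
```

Summing the power samples multiplied by the sampling interval approximates energy in joules, which divides by 3,600 to give watt-hours.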
In their new study, Maximilian Dauner and Gudrun Socher at Munich University of Applied Sciences in Germany tested 14 open-source AI models, ranging from 7 billion to 72 billion parameters (those internal knobs), on the Nvidia A100 GPU. Reasoning models, which explain their thinking step by step, consumed far more energy during inference than standard models, which directly output the answer.
The reason comes down to tokens, or the bits of text a model processes to generate a response. More tokens mean more computation and higher energy use. On average, reasoning models used 543.5 tokens per question, compared with just 37.7 for standard models. At scale, the queries add up: Using the 70-billion-parameter reasoning model DeepSeek R1 to answer 600,000 questions would emit as much CO₂ as a round-trip flight from London to New York.
In reality, the numbers can only be higher. Many companies have switched over to Nvidia’s newer H100, a chip specifically optimized for AI workloads that’s even more power-hungry than the A100. To more accurately reflect the total energy used during inference — including cooling systems and other supporting hardware — previous research has found that reported GPU energy consumption needs to be doubled.
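Putting the pieces together, a simple scaling sketch shows how token counts, the doubling factor for supporting hardware and the grid's carbon intensity combine into a per-query footprint. The energy-per-token and grid-intensity constants below are illustrative placeholders, not measured values; only the token counts come from Dauner and Socher's study.

```python
# Illustrative per-query CO2 scaling. The energy-per-token and grid-intensity
# constants are placeholders; only the token counts come from the study.
WH_PER_TOKEN = 0.002        # assumed GPU energy per generated token, in Wh
FACILITY_FACTOR = 2.0       # doubling for cooling and supporting hardware
KG_CO2_PER_KWH = 0.4        # assumed grid carbon intensity

def grams_co2(tokens):
    kilowatt_hours = tokens * WH_PER_TOKEN * FACILITY_FACTOR / 1_000
    return kilowatt_hours * KG_CO2_PER_KWH * 1_000

print(f"reasoning: {grams_co2(543.5):.2f} g CO2 per query")
print(f"standard:  {grams_co2(37.7):.2f} g CO2 per query")
```

Whatever the exact constants, the roughly 14-fold gap in tokens carries straight through to a 14-fold gap in energy and emissions.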
Even so, none of that accounts for the emissions generated by manufacturing the hardware and constructing the buildings that house it, what’s known as embodied carbon, Bashir points out.
What can people do to make their AI usage more environmentally friendly?
Choosing the right model for each task makes a difference. “Is it always needed to use the biggest model for easy questions?” Dauner asks. “Or can a small model also answer easy questions, and we can reduce CO₂ emissions based on that?”
Similarly, not every question needs a reasoning model. For example, Dauner’s study found that the standard model Qwen 2.5 achieved accuracy comparable to the reasoning model Cogito 70B while producing less than a third of the carbon emissions.
Researchers have created other public tools to measure and compare AI energy use. Hugging Face runs a leaderboard called AI Energy Score, which ranks models based on how much energy they use across 10 tasks, from text generation to image classification to voice transcription. It includes both open-source and proprietary models. The idea is to help people choose the most efficient model for a given job, finding that “golden spot” between performance, accuracy and energy efficiency.
Chowdhury also helps run ML.Energy, which has a similar leaderboard. “You can save a lot of energy by giving up a tiny bit of performance,” Chowdhury says.
Using AI less frequently during the daytime or summer, when power demand spikes and cooling systems work overtime, can also make a difference. “It’s similar to AC,” Bashir says. “If the outside temperature is very high, you would need more energy to cool down the inside of the house.”
Even the way you phrase your queries matters. Environmentally speaking, there’s no need to be polite to the chatbot. Any extra input you put in takes more processing power to parse. “It costs millions of [extra] dollars because of ‘thank you’ and ‘please,’” Dauner says. “Every unnecessary word has an influence on the run time.”
Ultimately, however, policy must catch up. Luccioni suggests a framework based on an energy rating system, like those used for household appliances. For example, “if your model is being used by, say, 10 million users a day or more, it has to have an energy score of B+ or higher,” she says.
Otherwise, energy supply won’t be able to sustain AI’s growing demand. “I go to conferences where grid operators are freaking out,” Luccioni says. “Tech companies can’t just keep doing this. Things are going to start going south.”
Space technology: Lithuania’s promising space start-ups
I’m led through a series of concrete corridors at Vilnius University, Lithuania; the murals give a Soviet-era vibe, and it seems an unlikely location for a high-tech lab working on a laser communication system.
But that’s where you’ll find the headquarters of Astrolight, a six-year-old Lithuanian space-tech start-up that has just raised €2.8m (£2.4m) to build what it calls an “optical data highway”.
You could think of the tech as invisible internet cables, designed to link up satellites with Earth.
With 70,000 satellites expected to launch in the next five years, it’s a market with a lot of potential.
The company hopes to be part of a shift from traditional radio frequency-based communication, to faster, more secure and higher-bandwidth laser technology.
Astrolight’s space laser technology could have defence applications as well, which is timely given Russia’s current aggressive attitude towards its neighbours.
Astrolight is already part of Nato’s Diana project (Defence Innovation Accelerator for the North Atlantic), an incubator set up in 2023 to apply civilian technology to defence challenges.
In Astrolight’s case, Nato is keen to leverage its fast, hack-proof laser communications to transmit crucial intelligence in defence operations – something the Lithuanian Navy is already doing.
It approached Astrolight three years ago looking for a laser that would allow ships to communicate during radio silence.
“So we said, ‘all right – we know how to do it for space. It looks like we can do it also for terrestrial applications’,” recalls Astrolight co-founder and CEO Laurynas Maciulis, who’s based in Lithuania’s capital, Vilnius.
For the military his company’s tech is attractive, as the laser system is difficult to intercept or jam.
It’s also about “low detectability”, Mr Maciulis adds:
“If you turn on your radio transmitter in Ukraine, you’re immediately becoming a target, because it’s easy to track. So with this technology, because the information travels in a very narrow laser beam, it’s very difficult to detect.”
Lithuania’s defence budget, worth about £2.5bn, is small compared with that of larger countries like the UK, which spends around £54bn a year.
But if you look at defence spending as a percentage of GDP, then Lithuania is spending more than many bigger countries.
Around 3% of its GDP is spent on defence, and that’s set to rise to 5.5%. By comparison, UK defence spending is worth 2.5% of GDP.
Lithuania is recognised for its strength in niche technologies like Astrolight’s lasers: 30% of its space projects have received EU funding, compared with an EU-wide average of 17%.
“Space technology is rapidly becoming an increasingly integrated element of Lithuania’s broader defence and resilience strategy,” says Šarūnas Genys, head of the manufacturing sector at Invest Lithuania and the agency’s defence sector expert.
Space tech can often have civilian and military uses.
Mr Genys gives the example of Lithuanian life sciences firm Delta Biosciences, which is preparing a mission to the International Space Station to test radiation-resistant medical compounds.
“While developed for spaceflight, these innovations could also support special operations forces operating in high-radiation environments,” he says.
He adds that Vilnius-based Kongsberg NanoAvionics has secured a major contract to manufacture hundreds of satellites.
“While primarily commercial, such infrastructure has inherent dual-use potential supporting encrypted communications and real-time intelligence, surveillance, and reconnaissance across NATO’s eastern flank,” says Mr Genys.
Going hand in hand with Astrolight’s laser technology is the autonomous satellite navigation system fellow Lithuanian space-tech start-up Blackswan Space has developed.
Blackswan Space’s “vision based navigation system” allows satellites to be programmed and repositioned independently of a human based at a ground control centre. Its founders say human operators won’t be able to keep up with the sheer volume of satellites launching in the coming years.
In a defence environment, the same technology can be used to remotely destroy an enemy satellite, as well as to train soldiers by creating battle simulations.
But the sales pitch to the Lithuanian military hasn’t necessarily been straightforward, acknowledges Tomas Malinauskas, Blackswan Space’s chief commercial officer.
He’s also concerned that government funding for the sector isn’t matching the level of innovation coming out of it.
He points out that instead of spending $300m on a US-made drone, the government could invest in a constellation of small satellites.
“Build your own capability for communication and intelligence gathering of enemy countries, rather than a drone that is going to be shot down in the first two hours of a conflict,” argues Mr Malinauskas, also based in Vilnius.
“It would be a big boost for our small space community, but as well, it would be a long-term, sustainable value-add for the future of the Lithuanian military.”
Eglė Elena Šataitė is the head of Space Hub LT, a Vilnius-based agency supporting space companies as part of Lithuania’s government-funded Innovation Agency.
“Our government is, of course, aware of the reality of where we live, and that we have to invest more in security and defence – and we have to admit that space technologies are the ones that are enabling defence technologies,” says Ms Šataitė.
The country’s Minister for Economy and Innovation, Lukas Savickas, says he understands Mr Malinauskas’ concern and is looking at government spending on developing space tech.
“Space technology is one of the highest added-value creating sectors, as it is known for its horizontality; many space-based solutions go in line with biotech, AI, new materials, optics, ICT and other fields of innovation,” says Mr Savickas.
Whatever happens with government funding, the Lithuanian appetite for innovation remains strong.
“We always have to prove to others that we belong on the global stage,” says Dominykas Milasius, co-founder of Delta Biosciences.
“And everything we do is also geopolitical… we have to build up critical value offerings, sciences and other critical technologies, to make our allies understand that it’s probably good to protect Lithuania.”
How Is AI Changing The Way Students Learn At Business School?
Artificial intelligence is the skill set that employers increasingly want from future hires. Find out how b-schools are equipping students to use AI
Business students are already seeing AI’s value. More than three-quarters of business schools have integrated AI into their curricula, from essay writing to personal tutoring, career guidance to soft-skill development.
BusinessBecause hears from current business students about how AI is reshaping the business school learning experience.
The benefits and drawbacks of using AI for essay writing
Many business school students are gaining firsthand experience of using AI to assist their academic work. At Rotterdam School of Management, Erasmus University in the Netherlands, students are required to use AI tools when submitting essays, alongside a log of their interactions.
“I was quite surprised when we were explicitly instructed to use AI for an assignment,” says Lara Harfner, who is studying International Business Administration (IBA) at RSM. “I liked the idea. But at the same time, I wondered what we would be graded on, since it was technically the AI generating the essay.”
Lara decided to approach the task as if she were writing the essay herself. She began by prompting the AI to brainstorm around the topic, research it using academic studies and build an outline, before asking it to write a full draft.
However, during this process Lara encountered several problems. The AI-generated sources were either non-existent or inappropriate, and the tool had to be explicitly instructed on which concepts to focus on. It tended to be too broad, touching on many ideas without thoroughly analyzing any of them.
“In the end, I felt noticeably less connected to the content,” Lara says. “It didn’t feel like I was the actual author, which made me feel less responsible for the essay, even though it was still my name on the assignment.”
Although the result sounded more polished, Lara thought she could have produced a better essay on her own with minimal AI support. What’s more, the grades she received on the AI-related assignments were below her usual average. “To me, that shows that AI is a great support tool, but it can’t produce high-quality academic work on its own.”
AI-concerned employers who took part in the Corporate Recruiters Survey echo this finding, stating that they would rather graduate management education (GME) graduates use AI as a strategic partner in learning and strategy than as a source for more and faster content.
How business students use AI as a personal tutor
Daniel Carvalho, a Global Online MBA student, also frequently uses AI in his academic assignments, something encouraged by his professors at Porto Business School (PBS).
However, Daniel treats AI as a personal tutor, asking it to explain complex topics in simple terms and deepen the explanation. On top of this, he uses it for brainstorming ideas, summarizing case studies, drafting presentations and exploring different points of view.
“My MBA experience has shown me how AI, when used thoughtfully, can significantly boost productivity and effectiveness,” he says.
Perhaps one of the most interesting ways Daniel uses AI is by turning course material into a personal podcast. “I convert text-based materials into audio using text-to-speech tools, and create podcast-style recaps to review content in a more conversational and engaging way. This allows me to listen to the materials on the go—in the car or at the gym.”
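The article doesn’t say which tools Daniel uses, but the text-to-speech step can be this simple. Below is a minimal sketch using the open-source pyttsx3 library, one option among many; the file names are hypothetical.

```python
# Convert course materials to audio for listening on the go.
# pyttsx3 is one of many text-to-speech options; file names are hypothetical.
import pyttsx3

engine = pyttsx3.init()
engine.setProperty("rate", 170)   # speaking speed, words per minute

with open("lecture_notes.txt") as f:
    text = f.read()

engine.save_to_file(text, "lecture_recap.wav")
engine.runAndWait()               # blocks until the audio file is written
```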
While studying his financial management course, Daniel even built a custom GPT using course materials. Much like a personal tutor, it would ask him questions about the material, validate his understanding, and explain any questions he got wrong. “This helped reinforce my knowledge so effectively that I was able to correctly answer all multiple-choice questions in the final exam,” he explains.
Similarly, at Villanova School of Business in the US, Master of Science in Business Analytics and AI (MSBAi) students are building personalized AI bots with distinct personalities. Students embed reference materials into the bot, and those materials then shape how the bot responds to questions.
“The focus of the program is to apply these analytics and AI skills to improve business results and career outcomes,” says Nathan Coates, MSBAi faculty director at the school. “Employers are increasingly looking for knowledge and skills for leveraging GenAI within business processes. Students in our program learn how AI systems work, what their limitations are, and what they can do better than existing solutions.”
The common limitations of using AI for academic work
Kristiina Esop, who is studying for a doctorate in Business Administration and Management at Estonian Business School, agrees that AI in education must always be used critically and with intention. She warns that students should always be aware of AI’s limitations.
Kristiina currently uses AI tools to explore different scenarios, synthesize large volumes of information, and detect emerging debates—all of which are essential for her work both academically and professionally.
However, she cautions that AI tools are not 100% accurate. Kristiina once asked ChatGPT to map actors in circular economy governance, and it returned a neat, simplified diagram that ignored important aspects. “That felt like a red flag,” she says. “It reminded me that complexity can’t always be flattened into clean logic. If something feels too easy, too certain—that’s when it is probably time to ask better questions.”
To avoid this problem, Kristiina combines the tools with critical thinking and contextual reading, and connects the findings back to the core questions in her research. “I assess the relevance and depth of the sources carefully,” she says. “AI can widen the lens, but I still need to focus it myself.”
She believes such critical thinking when using AI is essential. “Knowing when to question AI-generated outputs, when to dig deeper, and when to disregard a suggestion entirely is what builds intellectual maturity and decision-making capacity,” she says.
This is also what Wharton management professor Ethan Mollick, author of Co-Intelligence: Living and Working with AI and co-director of the Generative AI Lab, believes. He says the best way to work with [generative AI] is to treat it like a person. “So you’re in this interesting trap,” he says. “Treat it like a person and you’re 90% of the way there. At the same time, you have to remember you are dealing with a software process.”
Hult International Business School, too, expects its students to use AI in a balanced way, encouraging them to think critically about when and how to use it. For example, Rafael Martínez Quiles, a Master’s in Business Analytics student at Hult, uses AI as a second set of eyes to review his thinking.
“I develop my logic from scratch, then use AI to catch potential issues or suggest improvements,” he explains. “This controlled, feedback-oriented approach strengthens both the final product and my own learning.”
At Hult, students engage with AI to solve complex, real-world challenges as part of the curriculum. “Practical business projects at Hult showed me that AI is only powerful when used with real understanding,” says Rafael. “It doesn’t replace creativity or business acumen, it supports it.”
As vice president of Hult’s AI Society, N-AIble, Rafael has seen this mindset in action. The society’s members explore AI ethically, using it to augment their work, not automate it. “These experiences have made me even more confident and excited about applying AI in the real world,” he says.
The AI learning tools students are using to improve understanding
In other business schools, AI is being used to offer faculty a second pair of hands. Nazarbayev University Graduate School of Business has recently introduced an ‘AI Jockey’. Appearing live on a second screen next to the lecturer’s slides, this AI tool acts as a second teacher, providing real-time clarifications, offering alternate examples, challenging assumptions, and deepening explanations.
“Students gain access to instant, tailored explanations that complement the lecture, enhancing understanding and engagement,” says Dr Tom Vinaimont, assistant professor of finance at Nazarbayev University Graduate School of Business, who uses the AI Jockey in his teaching.
Rather than replacing the instructor, the tool adds an interactive, AI-driven layer to traditional teaching, making learning more dynamic and responsive.
“The AI Jockey model encourages students to think critically about information, question the validity of AI outputs, and build essential AI literacy. It helps students not only keep pace with technological change but also prepares them to lead in an AI-integrated world by co-creating knowledge in real time,” says Dr Vinaimont.
How AI can be used to encourage critical thinking among students
So, if you’re looking to impress potential employers, learning to work with AI while a student is a good place to start. But simply using AI tools isn’t enough. You must think critically, solve problems creatively and be aware of AI’s limitations.
Most of all, you must be adaptable. GMAC’s new AI-powered tool, Advancery, helps you find graduate business programs tailored to your career goals, with AI-readiness in mind.
After all, working with AI is a skill in itself. And in 2025, it is a valuable one.