AI Research
Study finds AI can slash global carbon emissions

A study from the London School of Economics and Systemiq suggests it’s possible to cut global carbon emissions without giving up modern comforts—with AI as our ally in the climate fight.
According to the duo’s research, smart AI applications in just three industries could slash greenhouse gas emissions by 3.2-5.4 billion tonnes each year by 2035.
In contrast to much of what we’ve heard, these reductions would far outweigh the carbon that AI itself produces.
The study, ‘Green and intelligent: the role of AI in the climate transition,’ doesn’t just see AI as a tool for small improvements. Instead, it could help transform our entire economy into something sustainable and inclusive.
Net-zero as an opportunity, not a burden
The researchers suggest we should see the shift to a net-zero economy not as a burden but as “a great opportunity for innovation and sustainable, resilient, and inclusive economic growth.”
They focused on three of the major carbon culprits – power generation, meat and dairy production, and passenger vehicles – which together cause almost half of global emissions. The potential AI savings from just these sectors would more than cancel out the estimated 0.4 to 1.6 billion tonnes of annual emissions from running all those AI data centres.
As the authors put it, “the case for using AI for the climate transition is not only strong but imperative.”
Five big ways AI can help save our planet (and us)
1. Making complex systems smarter
Think about how our modern lives depend on intricate networks for energy, transport, and city living. AI can redesign these systems to work much more efficiently.
Remember those frustrating power outages when the wind stops blowing or clouds cover the sun? AI can help predict these fluctuations in renewable energy and balance them with real-time demand. DeepMind has already shown its AI can boost wind energy’s economic value by 20% by reducing the need for backup power sources.
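To make the idea concrete, here is a toy sketch of what "predicting fluctuations and balancing them with demand" can mean in practice. This is not DeepMind's method; it is a deliberately simple illustration that forecasts the next hour of wind output with a moving average and dispatches only the backup generation needed to cover any shortfall. All figures are made up for illustration.

```python
def forecast_wind(history, window=3):
    """Naive forecast: average of the last `window` hourly readings (MW)."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def backup_needed(forecast_mw, demand_mw):
    """Dispatch backup generation only for the shortfall, never a negative amount."""
    return max(0.0, demand_mw - forecast_mw)

# Illustrative numbers: last three hours of wind output (MW) and next-hour demand.
wind_history = [120.0, 95.0, 110.0]
demand = 150.0

forecast = forecast_wind(wind_history)  # (120 + 95 + 110) / 3
print(f"forecast: {forecast:.1f} MW, backup: {backup_needed(forecast, demand):.1f} MW")
```

The point of even this crude version is the one the study makes: the better the forecast, the less standby capacity a grid operator has to keep spinning "just in case".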
2. Speeding up discovery and reducing waste
Almost half the emissions cuts needed to reach net-zero by 2050 will rely on technologies that are barely out of the lab today, and AI is turbocharging these breakthroughs.
Take Google DeepMind’s GNoME tool, which has already identified over two million new crystal structures that could revolutionise renewable energy and battery storage. Or consider how Amazon’s AI packaging algorithms have saved over three million metric tons of material since 2015.
3. Helping us make better choices
Our daily decisions – from what we eat to how we travel – could drive up to 70% of emissions reductions by 2050. But making the right choice isn’t always easy.
AI can be our personal environmental coach, breaking down information barriers and offering tailored recommendations. Already using Google Maps’ fuel-efficient routes? That’s AI helping you cut emissions while saving gas money. And those smart home systems like Nest use AI to optimise your heating and cooling, which could save millions of tonnes of CO2 if we all adopted them.
4. Predicting climate changes and policy effects
How do we plan for a changing climate? AI can process enormous datasets to forecast climate patterns with unprecedented accuracy.
Tools like IceNet (developed by the British Antarctic Survey and the Alan Turing Institute) are using AI to predict sea ice levels better than ever before, helping communities and businesses prepare. This capability also extends to helping governments design climate policies that actually work, by learning from countless case studies around the world.
5. Keeping us safe in extreme weather
As climate disasters intensify, early warning can save lives. AI-powered systems for floods and wildfires are becoming essential safety nets.
Google’s Flood Hub uses machine learning to provide flood forecasts up to five days in advance across more than 80 countries. That’s precious time for people to protect their homes and evacuate if necessary.
The numbers behind AI’s case for cutting global carbon emissions
When researchers crunched the numbers, they found AI could:
- Cut power sector emissions by 1.8 billion tonnes yearly by 2035 just by optimising renewable energy
- Save between 0.9 and 3.0 billion tonnes annually by improving plant-based proteins to taste and feel more like meat
- Reduce vehicle emissions by up to 0.6 billion tonnes each year through shared mobility and better battery technology
Here’s the catch: we can’t just sit back and let market forces determine how AI develops. The researchers call for an “active state” to ensure that AI benefits everyone and the planet.
“Governments have a critical role in ensuring that AI is deployed effectively to accelerate the transition equitably and sustainably,” they conclude.
What this means in practice is creating incentives for green AI research, regulating to minimise environmental impact, and investing in infrastructure so communities worldwide can share in the benefits.
By guiding innovation and working together internationally, we can unlock AI’s full potential to reduce global carbon emissions and tackle the climate crisis—and build a future where both people and the planet can thrive.
AI Research
Inside Austin’s Gauntlet AI, the Elite Bootcamp Forging “AI First” Builders

AUSTIN, Texas — In the brave new world of artificial intelligence, talent is the new gold, and companies are in a frantic race to find it. While universities work to churn out computer science graduates, a new kind of school has emerged in Austin to meet the insatiable demand: Gauntlet AI.
Gauntlet AI bills itself as an elite training program. It’s a high-stakes, high-reward process designed to forge “AI-first” engineers and builders in a matter of weeks.
“We’re closer to Navy SEAL bootcamp training than a school,” said Ash Tilawat, Head of Product and Learning. “We take the smartest people in the world. We bring them into the same place for 1,000 hours over ten weeks and we make them go all in with building with AI.”
Austen Allred, the co-founder and CEO of Gauntlet AI, says when they claim to be looking for the smartest engineers in the world, it’s no exaggeration. The selection process is intensely rigorous.
“We accept around 2 percent of the applicants,” Allred explained. “We accept 98th percentile and above of raw intelligence, 95th percentile of coding ability, and then you start on The Gauntlet.”
The price of admission isn’t paid in dollars—there are no tuition fees. Instead, the cost is a student’s absolute, undivided attention.
“It is pretty grueling, but it’s invigorating and I love doing this,” said Nataly Smith, one of the “Gauntlet Challengers.”
Smith, whose passions lie in biotech and space, recently channeled her love for bioscience to complete one of the program’s challenges. Her team was tasked with building a project called “Geno.”
“It’s a tool where a person can upload their genomic data and get a statistical analysis of how likely they are to have different kinds of cancers,” Smith described.
Incredibly, her team built the AI-powered tool in just one week.
The ultimate prize waiting at the end of the grueling 10-week gauntlet is a guaranteed job offer with a starting salary of at least $200,000 a year. And hiring partners are already lining up to recruit challengers like Nataly.
“We very intentionally chose to partner with everything from seed-stage startups all the way to publicly traded companies,” said Brett Johnson, Gauntlet’s COO. “So Carvana is a hiring partner. Here in Austin, we have folks like Function Health. We have the Trilogy organization; we have Capital Factory just around the corner. We’re big into the Austin tech community and looking to double down on that.”
In a world desperate for skilled engineers, Gauntlet AI isn’t just training people; it’s manufacturing the very talent pipeline it believes will power the next wave of technological innovation.
AI Research
AI tools for endangered languages developed by UH researchers

University of Hawaiʻi at Mānoa researchers have made a significant advance in studying how artificial intelligence (AI) understands endangered languages. This research could help communities document and maintain their languages, support language learning and make technology more accessible to speakers of minority languages.
The paper by Kaiying Lin, a PhD graduate in linguistics from UH Mānoa, and Department of Information and Computer Sciences Assistant Professor Haopeng Zhang, introduces the first benchmark for evaluating large language models (AI systems that process and generate text) on low-resource Austronesian languages. The study focuses on three Formosan languages (the Indigenous languages of Taiwan) that are at risk of disappearing: Atayal, Amis and Paiwan.
Using a new benchmark called FORMOSANBENCH, Lin and Zhang tested AI systems on tasks such as machine translation, automatic speech recognition and text summarization. The findings revealed a large gap between AI performance in widely spoken languages such as English, and these smaller, endangered languages. Even when AI models were given examples or fine-tuned with extra data, they struggled to perform well.
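Benchmarks like this typically compare a model’s output against a human reference with an automatic metric. As a hedged illustration (not the FORMOSANBENCH code, which the team has released separately), here is a simplified chrF-style score: character n-gram precision and recall combined into an F-score, a metric family commonly used for machine translation because it works even for languages without standard tokenizers. The example strings are invented placeholders, not real Formosan data.

```python
from collections import Counter

def char_ngrams(text, n):
    """Multiset of character n-grams, with whitespace stripped."""
    s = text.replace(" ", "")
    return Counter(s[i:i + n] for i in range(len(s) - n + 1))

def chrf_like(hypothesis, reference, max_n=3, beta=2.0):
    """Simplified chrF-style score: char n-gram F-beta averaged over n=1..max_n.

    This is an illustrative sketch, not the official sacreBLEU implementation.
    """
    scores = []
    for n in range(1, max_n + 1):
        hyp, ref = char_ngrams(hypothesis, n), char_ngrams(reference, n)
        if not hyp or not ref:
            continue
        overlap = sum((hyp & ref).values())  # clipped n-gram matches
        prec = overlap / sum(hyp.values())
        rec = overlap / sum(ref.values())
        if prec + rec == 0:
            scores.append(0.0)
            continue
        scores.append((1 + beta**2) * prec * rec / (beta**2 * prec + rec))
    return sum(scores) / len(scores) if scores else 0.0

# Identical strings score 1.0; completely unrelated strings score 0.0.
print(chrf_like("abc", "abc"))      # 1.0
print(chrf_like("abcdef", "uvwxyz"))  # 0.0
```

A low score against the reference translation is exactly the kind of "large gap" the study reports for Atayal, Amis and Paiwan relative to English.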
“These results show that current AI systems are not yet capable of supporting low-resource languages,” Lin said.
Zhang added, “By highlighting these gaps, we hope to guide future development toward more inclusive technology that can help preserve endangered languages.”
The research team has made all datasets and code publicly available to encourage further work in this area. The preprint of the study is available online, and the study has been accepted into the 2025 Conference on Empirical Methods in Natural Language Processing in Suzhou, China, an internationally recognized premier AI conference.
The Department of Information and Computer Sciences is housed in UH Mānoa’s College of Natural Sciences, and the Department of Linguistics is housed in UH Mānoa’s College of Arts, Languages & Letters.
AI Research
OpenAI reorganizes research team behind ChatGPT’s personality

OpenAI is reorganizing its Model Behavior team, a small but influential group of researchers who shape how the company’s AI models interact with people, TechCrunch has learned.
In an August memo to staff seen by TechCrunch, OpenAI’s chief research officer Mark Chen said the Model Behavior team — which consists of roughly 14 researchers — would be joining the Post Training team, a larger research group responsible for improving the company’s AI models after their initial pre-training.
As part of the changes, the Model Behavior team will now report to OpenAI’s Post Training lead Max Schwarzer. An OpenAI spokesperson confirmed these changes to TechCrunch.
The Model Behavior team’s founding leader, Joanne Jang, is also moving on to start a new project at the company. In an interview with TechCrunch, Jang says she’s building out a new research team called OAI Labs, which will be responsible for “inventing and prototyping new interfaces for how people collaborate with AI.”
The Model Behavior team has become one of OpenAI’s key research groups, responsible for shaping the personality of the company’s AI models and for reducing sycophancy — which occurs when AI models simply agree with and reinforce user beliefs, even unhealthy ones, rather than offering balanced responses. The team has also worked on navigating political bias in model responses and helped OpenAI define its stance on AI consciousness.
In the memo to staff, Chen said that now is the time to bring the work of OpenAI’s Model Behavior team closer to core model development. By doing so, the company is signaling that the “personality” of its AI is now considered a critical factor in how the technology evolves.
In recent months, OpenAI has faced increased scrutiny over the behavior of its AI models. Users strongly objected to personality changes made to GPT-5, which the company said exhibited lower rates of sycophancy but seemed colder to some users. This led OpenAI to restore access to some of its legacy models, such as GPT-4o, and to release an update to make the newer GPT-5 responses feel “warmer and friendlier” without increasing sycophancy.
OpenAI and all AI model developers have to walk a fine line to make their AI chatbots friendly to talk to but not sycophantic. In August, the parents of a 16-year-old boy sued OpenAI over ChatGPT’s alleged role in their son’s suicide. The boy, Adam Raine, confided some of his suicidal thoughts and plans to ChatGPT (specifically a version powered by GPT-4o), according to court documents, in the months leading up to his death. The lawsuit alleges that GPT-4o failed to push back on his suicidal ideations.
The Model Behavior team has worked on every OpenAI model since GPT-4, including GPT-4o, GPT-4.5, and GPT-5. Before starting the unit, Jang previously worked on projects such as Dall-E 2, OpenAI’s early image-generation tool.
Jang announced in a post on X last week that she’s leaving the team to “begin something new at OpenAI.” The former head of Model Behavior has been with OpenAI for nearly four years.
Jang told TechCrunch she will serve as the general manager of OAI Labs, which will report to Chen for now. However, it’s early days, and it’s not clear yet what those novel interfaces will be, she said.
“I’m really excited to explore patterns that move us beyond the chat paradigm, which is currently associated more with companionship, or even agents, where there’s an emphasis on autonomy,” said Jang. “I’ve been thinking of [AI systems] as instruments for thinking, making, playing, doing, learning, and connecting.”
When asked whether OAI Labs will collaborate on these novel interfaces with former Apple design chief Jony Ive — who’s now working with OpenAI on a family of AI hardware devices — Jang said she’s open to lots of ideas. However, she said she’ll likely start with research areas she’s more familiar with.
This story was updated to include a link to Jang’s post announcing her new position, which was released after this story published. We also clarify the models that OpenAI’s Model Behavior team worked on.