
Code Green or Code Red? The Untold Climate Cost of Artificial Intelligence



As the world races to harness artificial intelligence, few are pausing to ask a critical question: What is AI doing to the planet?

AI is being heralded as a game-changer in the global fight against climate change. It is already helping scientists model rising temperatures and extreme weather phenomena, enabling decision-making bodies to predict and prepare for unexpected weather, and allowing energy systems to become smarter and more efficient. According to the World Economic Forum, AI could contribute up to $5.1 trillion annually to the global economy, provided it is deployed sustainably during the climate transition (WEF, 2025).

Beneath the sleek interfaces and climate dashboards lies a growing environmental cost. The widespread use of generative AI, in particular, is creating a new carbon frontier, one that we’re just beginning to untangle and understand.

Training large-scale AI models is energy-intensive, according to a 2024 MIT report. Training a single GPT-3-sized model can consume up to 1,300 megawatt-hours of electricity, enough to power almost 120 U.S. homes for a year. And once deployed, AI systems are not static: they continue to consume energy each time a user interacts with them. Generating a single AI image may require as much energy as watching a short online video, while a large language model query requires almost 10 times more energy than a typical Google search (MIT, 2024).
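To put those numbers in perspective, here is a minimal back-of-the-envelope sketch in Python. The per-home figure of roughly 10,800 kWh per year is our assumption (a commonly cited U.S. average), not a number taken from the MIT report:

    # Back-of-the-envelope check of the "almost 120 homes" comparison.
    # Assumption: an average U.S. home uses about 10,800 kWh (10.8 MWh)
    # of electricity per year, a commonly cited national average.
    TRAINING_ENERGY_MWH = 1_300      # GPT-3-sized training run, per the report
    HOME_USE_MWH_PER_YEAR = 10.8     # assumed per-home annual consumption

    homes = TRAINING_ENERGY_MWH / HOME_USE_MWH_PER_YEAR
    print(f"~{homes:.0f} U.S. homes powered for a year")  # prints: ~120

Dividing the reported training energy by the assumed per-home usage reproduces the report's comparison almost exactly.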

As AI becomes embedded in everything from online search to logistics and social media, this energy use is multiplying at scale. The International Energy Agency (IEA) warns that global data center electricity consumption could double by 2026, driven mainly by the rise of AI and cryptocurrency; recent developments around the Digital Euro only add weight to that warning. Without rapid decarbonization of energy grids, this growth could significantly increase global emissions, undermining progress on climate goals (IEA, 2024).

Chart: Sotiris Anastasopoulos, with data from the IEA’s official website.

The climate irony is real: AI is both a solution to and a multiplier of Earth’s climate challenges.

Still, when used responsibly, AI remains a powerful ally. The UNFCCC’s 2023 “AI for Climate Action” roadmap outlines dozens of promising, climate-friendly applications. AI can detect deforestation from satellite imagery, track methane leaks, help decarbonize supply chains, and forecast the spread of wildfires. In agriculture, AI systems can optimize irrigation and fertilizer use, helping reduce emissions and protect soil. In the energy sector, AI enables real-time management of grids, integrating variable sources like solar and wind while improving reliability. But to unlock this potential, the conversation around AI must evolve from excitement about its capabilities to accountability for its impact.

This starts with transparency. Today, few AI developers publicly report the energy or emissions cost of training and running their models. That needs to change. The IEA calls for AI models to be accompanied by “energy use disclosures” and impact assessments. Governments and regulators should enforce such standards, just as they do for industrial emissions or vehicle efficiency (UNFCCC, 2023).

Second, green infrastructure must become the default. Data centers must be powered by renewable energy, not fossil fuels. AI models must be optimized for efficiency, not just performance. Instead of racing toward ever-larger models, we should ask what the climate cost of model inflation is, and whether it’s worth it (UNFCCC, 2023).

Third, we need to question the uses of AI itself. Not every application is essential. Does society actually benefit from energy-intensive image generation tools for trivial entertainment or advertising? While AI can accelerate climate solutions, it can also accelerate consumption, misinformation, and surveillance. A climate-conscious AI agenda must weigh trade-offs, not just celebrate innovation (UNFCCC, 2023).

Finally, equity matters. As the UNFCCC report emphasizes, the AI infrastructure powering the climate transition is heavily concentrated in the Global North. Meanwhile, the Global South, home to many of the world’s most climate-vulnerable populations, lacks access to these tools, data, and services. An inclusive AI-climate agenda must invest in capacity-building, data access, and technological advancement to ensure no region is left behind (UNFCCC, 2023).

Artificial intelligence is not green or dirty by nature. Like all tools, its impact depends on how and why we use it. We are still early enough in the AI revolution to shape its trajectory, but not for long.

The stakes are planetary. If deployed wisely, AI could help the transition to a net-zero future. If mismanaged, it risks becoming just another accelerant of a warming world.

Technology alone will not solve the climate crisis. But climate solutions without responsible technology are bound to fail.

*Sotiris Anastasopoulos is a student researcher at the Institute of European Integration and Policy of the UoA. He is an active member of YCDF and AEIA and currently serves as a European Climate Pact Ambassador. 

This op-ed is part of To BHMA International Edition’s NextGen Corner, a platform for fresh voices on the defining issues of our time.




COMMENTARY: How Will AI Impact the Oil and Natural Gas Industry? – Yogi Schulz



By Yogi Schulz

Breathless AI headlines promise superintelligence. However, in today’s oil and gas industry, the impact of AI is more practical—streamlining operations rather than inventing new exploration and production technologies or conquering new markets.

There is a paradox in how AI’s current phase is influencing corporate strategy. While AI is touted as a path to increased profitability, most oil and gas applications of AI today focus on reducing costs rather than growing revenue. Trimming expenses can boost margins, but it does not deliver sustainable corporate growth.

This article provides actionable answers to AI-related questions that management in the oil and gas industry often asks.

Where is AI having the most immediate and practical impact?

AI is advancing the capability of many categories of engineering application software. Examples include oil and gas software used for predictive maintenance, process optimization and resource allocation. The benefits include reduced operating costs, enhanced safety, improved risk management, and higher confidence in decision-making.

Where will AI have the most impact in the future?

Materials science – Today, the interaction of molecules within mixtures created for materials is almost impossible to predict. This reality leads to endless, high-cost experimentation and lengthy development timelines. DeepMind’s breakthrough work on protein folding is an early example. Oil and gas applications include materials that withstand higher temperatures, pressures, and abrasion.

Text-to-design applications – AI’s ability to generate a 3D design from a short text paragraph will significantly change the nature of engineering design work. It will enable the rapid exploration of many more design alternatives. Oil and gas applications include valves, pressure vessels and gas processing plants.

AI code generation – AI’s ability to generate application code will advance to encompass entire systems. It will shift software development from a hand-crafted process to an automated one. Oil and gas applications include software for SCADA, process control, and autonomous oilsands vehicles.

How is AI changing the industry’s approach to productivity, profitability, and risk management?

AI has produced immediate productivity improvements for most disciplines in:

  • Research.
  • Document writing.

AI applications identify cost-reduction ideas for capital projects and ongoing operations. The ideas, often in fabrication and maintenance, have increased profitability.

AI applications in many industries actually increase risks rather than decrease them, and many of these risks are not yet well understood. Even if the Armageddon-type scenario, in which AI takes over the planet and supplants humans, is a low risk, plenty of other risks remain.

For example, a recent Bloomberg article described financial risks. It’s a regulator’s nightmare: Hedge funds unleash AI bots on stock and bond exchanges — but they don’t just compete, they collude. Instead of battling for returns, they fix prices, hoard profits and sideline human traders.


Researchers say that scenario is far from science fiction. Every AI application should include a comprehensive risk assessment.

What technical, ethical, or cultural challenges are slowing AI adoption?

Companies contemplating a foray into AI should let someone else make the tools, according to a recent MIT study, which found that 95% of internal AI pilot programs fail to boost revenue or productivity. Technical challenges often drive these failures.

According to Fortune magazine, the issue isn’t subpar models; it’s flawed integration. Successful implementations use AI to address specific problems with specialized external vendors, and these projects succeed twice as often as in-house pilots. MIT also determined that too much AI spending goes to sales and marketing, even though reducing outsourcing and streamlining field operations, HR, and finance drive bigger savings.

The biggest challenge slowing AI adoption is poor data quality in the internal datastores used to train AI models. Cleanup is costly and takes considerable elapsed time. Data quality is a more significant problem for oil and gas applications than for those in many other industries because they consume significantly more data.

Many executives are concerned about the ethical challenges that AI applications have revealed. These include:

  • Risks of misunderstanding basic scientific concepts, leading to dangerously wrong recommendations.
  • Embarrassingly wrong answers that are termed hallucinations.
  • Racial and gender biases.

Addressing the ethical challenges that AI raises requires oversight from management, engineering, and IT leadership.

What’s needed to adopt AI successfully?

Right now, all disciplines are excited by the seemingly endless potential of AI. To add some reality to the hype, leaders should remember that companies cannot ignore business analysis, or treat it superficially, when adopting AI. That means:

  • Define a problem or opportunity that aligns with targeted and specific business goals – avoid a general exploration of AI technology.
  • Measure what matters – don’t expect assertions, or output that merely feels or looks good, to impress anyone.
  • Ensure you’re designing for data-driven decisions – you don’t need AI if you will continue working based on experience or gut feel.
  • Design human-AI collaboration that leverages their respective strengths – don’t rely exclusively on AI recommendations.
  • Ask: Should we? – meaning, do we have a business case?
  • Do not just ask Can we? – meaning, do we have the technical and data capability?

What should oil and gas leaders think about?

The board’s and management’s responsibility for governance suggests they should focus on guidance for implementing AI responsibly and effectively. More specifically, they should give attention to the following AI topics:

  • AI acceptable usage policy – define and implement an acceptable AI usage policy that clearly describes which employee uses are permissible and which are not.
  • AI risk management – the fastest and easiest way to implement an AI risk management process is to adopt one of the existing AI risk frameworks. The MIT AI risk framework is one example.
  • AI hallucinations – champion the expectation that all AI application development will involve significant efforts to mitigate AI hallucinations.
  • Project best practices – AI application projects must adhere to best practices, just like all other projects.
  • Cybersecurity for AI applications – to reduce AI cybersecurity risks in applications, the board and the CEO should sponsor a review process that ensures adequate cybersecurity defence features are included in AI applications.
  • AI for cybersecurity defences – to address the increasing cybersecurity risks caused by hackers using AI in their attacks, the board and the CEO should sponsor a review process that ensures adequate cybersecurity defences are in place.

Where is AI development headed?

Today, AI is still a people-centred tool: it waits for us to provide questions to answer. Current developments are advancing the technology to reason more like humans, a capability referred to as AGI, or artificial general intelligence. AGI raises more profound questions:

  • Can AI start to ask the right questions or at least propose better questions, not just answer questions we’re smart enough to pose?
  • Can AI help professionals make higher-order decisions for oil and natural gas property development or marketing strategies?
  • Will AI supplant technical staff in making routine operational or purchasing decisions? How soon might that step occur?

We will soon enter a new era in which AI does more than simply respond and starts to orchestrate. That is the coming age of agentic AI, a step from reasoning toward autonomy and more self-directed initiative.

In many situations, the hardest part is not finding the correct or best answer—it is knowing or at least hypothesizing what the right question or a better question is in the first place. This reality underscores a crucial truth: even the most advanced AI tools rely on human curiosity, perspective, and framing.

In the future, AI development will advance down the creative road toward AGI. Related advances will have a profound impact on all industries, including the oil and natural gas industry.


Yogi Schulz has over 40 years of experience in information technology in various industries. He writes for Engineering.com, EnergyNow.ca, EnergyNow.com and other trade publications. Yogi works extensively in the petroleum industry to select and implement financial, production revenue accounting, land & contracts, and geotechnical systems. He manages projects that arise from changes in business requirements, the need to leverage technology opportunities, and mergers. His specialties include IT strategy, web strategy, and systems project management.


Meta’s AI lab is in crisis — even $100M can’t buy loyalty



Over the past few months, Meta has doubled down on its efforts in the generative AI space. As part of that push, CEO Mark Zuckerberg announced Meta Superintelligence Labs (MSL) to compete with rivals like OpenAI, Google, and Microsoft.

The Facebook maker has made bold moves, including a $14.3 billion acquisition of Scale AI, which specializes in data labeling, model evaluation, and software development for AI applications. The company even hired Scale AI’s CEO, Alexandr Wang, to lead its AI operations.




Contributor: How do we prepare college students for the AI world?



The rise of artificial intelligence is threatening the foundations of education — how we teach, how we assess and even how students learn to think. Cheating has become effortless. Attention spans are dissolving. And the future job landscape is so uncertain that we don’t know what careers to prepare students for. A recent NBC News poll of nearly 20,000 Americans shows the public is evenly divided, with about half believing we should integrate AI into education and half believing we should ban it.

So, as we welcome the Class of 2029 to our campuses, what should colleges do?

Although some urge higher education to prioritize STEM fields and AI-related job skills, a surprising number of technology leaders are advising the opposite.

“I no longer think you should learn to code,” says investor and former Facebook executive Chamath Palihapitiya. “The engineer’s role will be supervisory, at best, within 18 months.”

Roman Vorel, chief information officer of Honeywell, argues that “the future belongs to leaders with high EQs — those with empathy, self-awareness and the ability to make genuine human connections — because AI will democratize IQ.”

Daniel Kokotajlo, co-author of “AI 2027,” which projects a set of scenarios leading to an “enormous” impact of superhuman AI over the next decade, puts it bluntly: “Economic productivity is just no longer the name of the game when it comes to raising kids. What still matters is that my kids are good people — and that they have wisdom and virtue.”

In other words, as machines gain in speed and capability, the most valuable human traits may not be technical but moral and interpersonal. Technology journalist Steven Levy spoke even more plainly in a recent commencement address at Temple University: “You have something that no computer can ever have. It’s a superpower, and every one of you has it in abundance: your humanity.”

It might seem like a tall order to cultivate attention, empathy, judgment and character — qualities that are hard to measure and even harder to mass-produce. Fortunately, we have an answer, one that turns out to be surprisingly ancient: liberal education. Small liberal arts colleges may enroll only a modest 4% of our undergraduates, but they are, historically and today, our nation’s seed bank for deep and broad humanistic education.

Liberal education is structured around serious engagement with texts, works of art and scientific discoveries that have shaped our understanding of truth, justice, beauty and the nature of the world. Students don’t just absorb information — they engage in dialogue and active inquiry, learning to grapple with foundational questions. What is the good life? What is the relationship between mathematics and reality? Can reason and faith coexist? Why do music and art move us?

These acts — reading, looking, listening, discussing — may sound modest, but they are powerful tools for developing the skills students most need. Wrestling with a challenging text over hours and days strengthens attention like physical exercise builds stamina. Conversation sharpens the ability to speak and listen with care, to weigh opposing views, to connect thought with feeling. This kind of education, by deepening our understanding of ourselves and our world, cultivates wisdom — and it’s remarkably resistant to the shortcuts AI offers.

If you spent a week at the college I lead, St. John’s College in Santa Fe, N.M., you might forget that AI even exists. It’s hard to fake a two-hour conversation about “Don Quixote” after reading only an AI summary, and it’s awkward to continue that conversation with your friends over a meal in the dining hall. Should you succumb to the temptations of AI in writing a paper, you’re likely to find yourself floundering in the follow-up discussion with faculty.

Liberal arts colleges have one other indispensable tool for deepening learning and human connection: culture. Most are small, tight-knit communities where students and faculty know one another and ideas are exchanged face to face. Students don’t choose these schools by default; they opt in, often for their distinctiveness. The pull of technology is less strong at these colleges, because they create intense, sustaining, unmediated experiences of communal thinking. This strong culture might be seen as a kind of technology itself — one designed not to dissipate minds and hearts, but to support and deepen them.

Paradoxically, four years largely removed from the influence of technology is one of the best ways of preparing for life and work in an increasingly technologized world.

Carla Echevarria, a 1996 alumna of St. John’s and now a senior manager of user experience at Google DeepMind, admits that she would “struggle with Schrödinger in senior lab and then bang my head against Hegel for a couple of hours and then weep in the library while listening to ‘Tristan und Isolde.’ That brings an intellectual fearlessness.

“When I started working in AI, I didn’t really know anything about AI,” she adds. “I prepared for my interview by reading for a couple of weeks. That fearlessness is the greatest gift of the education.” Many alums echo this belief regardless of the fields they go into.

As we head into this school year and into a future shaped by powerful and unpredictable machines, the best preparation may not be a new invention, but an old discipline. We don’t need a thousand new small colleges, but we need a thousand of our colleges and universities, large and small, to embrace an overdue renaissance of these deeply humanizing educational practices. We don’t need to outpace AI — we need to educate people who can think clearly, act wisely and live well with others.

J. Walter Sterling is the president of St. John’s College, with campuses in Annapolis, Md., and Santa Fe, N.M.


