US green energy braces for federal funding cuts


Zoe Corbyn

Technology Reporter

Reporting from San Francisco
Wind turbines operate at a wind farm near Palm Springs, California, in March 2024 (Getty Images)

President Trump has described wind farms as “disgusting” and “ugly”

US green fuel company HIF Global has a big vision for Texas’s Matagorda County: a $7bn (£5.2bn) commercial-scale e-methanol factory to supply the world market.

The plant, which it claims would be the largest to date anywhere, would make e-methanol from captured carbon dioxide and green hydrogen produced on site using renewable energy.

Its construction would create thousands of jobs and the product would power ships and planes in a far cleaner way.

But the company has yet to make its final investment decision. It is waiting to see what the Republican-led Congress does to clean energy tax credits, in particular the one for clean hydrogen production.

The fate of the subsidies is part of a sweeping budget bill currently under consideration by the Senate.

A version of the legislation passed by the lower house cuts the hydrogen tax credit, amongst others, and scales back more of those that remain.

The clean hydrogen tax credit would help reduce the cost of the American technology going into the facility, and aid in competing with Chinese e-methanol producers, says Lee Beck, HIF Global’s senior vice president for global policy and commercial strategy.

“The goal is not to be dependent on tax credits over the long run, but to get the project started.”

Ms Beck can’t say yet what the outcome for the Matagorda facility will be if the tax credit is ultimately killed, except that it will make things hard – and the US isn’t the only location the company operates in.

A sole wind turbine stands in a desert-like area of Punta Arenas, Chile (HIF)

HIF Global has a demonstration e-fuel producing facility in Punta Arenas, Chile

The Trump administration has been particularly hostile to green energy.

The President’s actions since taking office in January include initiating the US’s withdrawal from the Paris climate agreement and temporarily suspending renewable energy projects on federal lands (he has a particular disdain for wind power).

Trump has also directed agencies to pause Green New Deal funds, which he regularly calls “Green New Scam” funds: grants and loans being made under the Infrastructure Investment and Jobs Act (IIJA) and the Inflation Reduction Act (IRA), enacted under Biden’s presidency in 2021 and 2022 respectively.

Those grants and loans, together with the clean energy tax credits that are also part of the IRA, have been funnelling billions of new federal and private dollars into developing clean energy.

“It is a tumultuous time,” says Adie Tomer, of the Brookings Institution, a think tank. “We are doing the exact opposite of our developed world peers.”

Court battles are ongoing over the President’s order to pause green funding, which might ultimately end up in the Supreme Court. In the meantime, agencies are conducting their own reviews and making their own decisions.

The Capitol building in Washington DC with a US flag in the foreground (Getty Images)

Green energy firms are watching developments on the budget bill

Jessie Stolark, executive director of the Carbon Capture Coalition, which represents companies involved in carbon capture and storage, laments the lack of clarity from the administration.

Members, she explains, have won project funding under the IIJA – including, for example, to build direct air capture facilities. But while projects generally have been able to access funds already awarded to earlier phases, it is unclear if they will be able to progress to additional phases where additional funds are supposed to be made available.

“It is causing uncertainty, which is really bad for project deployment,” says Ms Stolark. “If you endanger the success of these first-of-a-kind projects it just takes the wind out of the sails of the whole [carbon management] industry long term.”

Meanwhile, the fate of the IRA, which Congress has the power to amend or repeal along with the IIJA, is being decided, in part, by the budget bill, which aims to permanently extend President Trump’s first-term tax cuts by making savings elsewhere.

What exactly will remain of the federal green energy agenda when the House and Senate agree a compromise version remains to be seen.

It seems likely that the IRA’s tax credits – which are generally scheduled to expire at the end of 2032, though some extend beyond that date – will take a heavy hit, even if the IRA dodges the bullet of outright repeal.

Also marked for termination are the tax credits for consumers buying EVs and making their homes more efficient.

Many others, such as those for producing clean electricity and manufacturing clean energy components like wind turbine parts, solar panels and batteries, would be phased out earlier or made harder and less worthwhile to secure.

That many of the projects set to benefit from the tax credits are in Republican areas seems to have had little sway in the House, notes Ashur Nissan of policy advice firm Kaya Partners.

But critics say that the Biden green energy initiatives are too expensive.

The IRA’s energy tax credits are “multiple times” larger than initial estimates and expose American taxpayers to “potentially unlimited liability”, noted a recent report from the libertarian Cato Institute advocating their full repeal.

Meanwhile, actual clean energy investment in the US – from both government and private sources, the latter being the far larger share – dropped 3.8% in the first quarter of 2025 to $67.3bn, a second quarterly decline, according to new figures released by the Clean Investment Monitor.

“Momentum is sagging a bit which is a little concerning,” says Hannah Hess of the Rhodium Group research firm, which partners with the Massachusetts Institute of Technology to produce it. She attributes the trend to a mix of high inflation, high interest rates, global supply chain issues and uncertainty in the policy environment created by the new administration.

There was also, she observes, a record number of clean energy manufacturing projects cancelled in the first quarter of 2025 – six projects, mostly in batteries, representing $6.9bn in investment – though it is difficult to say to what extent the new administration was a driver.

More worrying to Ms Hess is the decline since the last quarter in announcements for some types of new projects, which she believes can be “more strongly” attributed to the policy situation, with companies lacking confidence there will be demand for the clean products their projects would produce.

A worker in a hi-vis jacket looks at machinery at a CO2-capturing plant (Heirloom)

Firms that capture CO2 from the air have won government funding

Tariffs, which will increase factory construction costs if components need to be imported, are an extra factor that may negatively influence project decisions going forward, notes Anthony DeOrsey of the Cleantech Group research and consulting firm.

Investment aside, companies are also making shifts in how they market their products.

The homepage of LanzaJet – which produces Sustainable Aviation Fuel (SAF) from ethanol – used to emphasise how scaling SAF could “meet the urgent moment of climate change”. It now focusses on its potential to “harness the energy of locally produced feedstocks”.

SAF has never been about just one thing, notes CEO Jimmy Samartzis. Tailoring messaging to be “relevant to the stakeholders we are engaging with” makes sense.

The company is currently waiting on a $3m grant it was awarded by the Federal Aviation Administration last August as part of a nearly $300m program, funded under the IRA, designed to help aviation transition to SAF.

“It is approved funding, but it is stuck at this point,” says Mr Samartzis.





Scientists create biological ‘artificial intelligence’ system



Journal/conference: Nature Communications

Research: Paper

Organisation/s: The University of Sydney



Funder: Declaration: Alexandar Cole, Christopher Denes, Daniel Hesselson and Greg Neely have filed a provisional patent application on this technology. The remaining authors declare no competing interests.

Media release

From: The University of Sydney

Australian scientists have successfully developed a research system that uses ‘biological artificial intelligence’ to design and evolve molecules with new or improved functions directly in mammal cells. The researchers said this system provides a powerful new tool that will help scientists develop more specific and effective research tools or gene therapies.

Named PROTEUS (PROTein Evolution Using Selection) the system harnesses ‘directed evolution’, a lab technique that mimics the natural power of evolution. However, rather than taking years or decades, this method accelerates cycles of evolution and natural selection, allowing them to create molecules with new functions in weeks.

This could have a direct impact on finding new, more effective medicines. For example, this system can be applied to improve gene editing technology like CRISPR to improve its effectiveness.

“This means PROTEUS can be used to generate new molecules that are highly tuned to function in our bodies, and we can use it to make new medicine that would be otherwise difficult or impossible to make with current technologies,” says co-senior author Professor Greg Neely, Head of the Dr. John and Anne Chong Lab for Functional Genomics at the University of Sydney.

“What is new about our work is that directed evolution primarily works in bacterial cells, whereas PROTEUS can evolve molecules in mammal cells.”

PROTEUS can be given a problem with an uncertain solution, much as a user feeds prompts into an artificial intelligence platform. For example, the problem might be how to efficiently turn off a human disease gene inside our body.

PROTEUS then uses directed evolution to explore millions of possible sequences that do not yet exist in nature, and finds molecules with properties that are highly adapted to solve the problem. This means PROTEUS can help find a solution that would normally take a human researcher years to reach, if it could be reached at all.

The researchers reported they used PROTEUS to develop improved versions of proteins that can be more easily regulated by drugs, and nanobodies (mini versions of antibodies) that can detect DNA damage, an important process that drives cancer. However, they said PROTEUS isn’t limited to this and can be used to enhance the function of most proteins and molecules.

The findings were reported in Nature Communications, with the research performed at the Charles Perkins Centre at the University of Sydney with collaborators from the Centenary Institute.

Unlocking molecular machine learning

The original development of directed evolution, performed first in bacteria, was recognised by the 2018 Nobel Prize in Chemistry.

“The invention of directed evolution changed the trajectory of biochemistry. Now, with PROTEUS, we can program a mammalian cell with a genetic problem we aren’t sure how to solve. Letting our system run continuously means we can check in regularly to understand just how the system is solving our genetic challenge,” said lead researcher Dr Christopher Denes from the Charles Perkins Centre and School of Life and Environmental Sciences.

The biggest challenge Dr Denes and the team faced was how to make sure the mammalian cell could withstand the multiple cycles of evolution and mutations and remain stable, without the system “cheating” and coming up with a trivial solution that doesn’t answer the intended question.

They found the key was using chimeric virus-like particles – a design that combines the outer shell of one virus with the genes of another – which blocked the system from cheating.

The design used parts of two significantly different virus families, creating the best of both worlds. The resulting system allowed the cells to process many different possible solutions in parallel, with improved solutions winning and becoming more dominant while incorrect solutions disappeared.

“PROTEUS is stable, robust and has been validated by independent labs. We welcome other labs to adopt this technique. By applying PROTEUS, we hope to empower the development of a new generation of enzymes, molecular tools and therapeutics,” Dr Denes said.

“We made this system open source for the research community, and we are excited to see what people use it for. Our goals will be to enhance gene-editing technologies, or to fine-tune mRNA medicines for more potent and specific effects,” Professor Neely said.

-ENDS-
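
For readers who want a computational feel for the mutate-and-select cycle the release describes, below is a deliberately toy Python sketch of a directed-evolution loop. It is a loose conceptual analogy only – not the PROTEUS system – and the target string, fitness score and parameters are all invented for illustration.

import random

TARGET = "SILENCEGENE"   # stand-in for the desired molecular property
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def fitness(seq: str) -> int:
    # Toy score: how many positions already match the target property.
    return sum(a == b for a, b in zip(seq, TARGET))

def mutate(seq: str, rate: float = 0.1) -> str:
    # Randomly change characters, mimicking mutation in each cycle.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c for c in seq)

def evolve(generations: int = 300, population: int = 60) -> str:
    # Start from random sequences, then repeat selection and mutation.
    pool = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(population)]
    for _ in range(generations):
        pool.sort(key=fitness, reverse=True)
        survivors = pool[: population // 5]   # selection: keep only the fittest
        pool = [mutate(random.choice(survivors)) for _ in range(population)]
        pool.extend(survivors)                # carry the best forward unchanged
    return max(pool, key=fitness)

print(evolve())  # typically converges on, or very close to, TARGET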





AI can provide ‘emotional clarity and confidence’, Xbox executive producer tells staff after Microsoft lays off 9,000 employees




  • An Xbox executive suggested that laid-off employees use AI for emotional support and career guidance
  • The suggestion sparked backlash and led the executive to delete their LinkedIn post
  • Microsoft has laid off 9,000 employees in recent months while investing heavily in AI.

Microsoft has been hyping up its AI ambitions for the last several years, but one executive’s pitch about the power of AI to former employees who were recently let go has landed with an awkward thud.

Amid Microsoft’s largest round of layoffs in over two years – about 9,000 people – Matt Turnbull, Executive Producer at Xbox Game Studios Publishing, suggested that AI chatbots could help those affected process their grief, craft resumes, and rebuild their confidence.





Regulatory Policy and Practice on AI’s Frontier



Adaptive, expert-led regulation can unlock the promise of artificial intelligence.

Technological breakthroughs, historically, have played a distinctive role in accelerating economic growth, expanding opportunity, and enhancing standards of living. Technology enables us to get more out of the knowledge we have and prior scientific discoveries, in addition to generating new insights that enable new inventions. Technology is associated with new jobs, higher incomes, greater wealth, better health, educational improvements, time-saving devices, and many other concrete gains that improve people’s day-to-day lives. The benefits of technology, however, are not evenly distributed, even when an economy is more productive and growing overall. When technology is disruptive, costs and dislocations are shouldered by some more than others, and periods of transition can be difficult.

Theory and experience teach that innovative technology does not automatically improve people’s station and situation merely by virtue of its development. The way technology is deployed and the degree to which gains are shared—in other words, turning technology’s promise into reality without overlooking valid concerns—depends, in meaningful part, on the policy, regulatory, and ethical decisions we make as a society.

Today, these decisions are front and center for artificial intelligence (AI).

AI’s capabilities are remarkable, with profound implications spanning health care, agriculture, financial services, manufacturing, education, energy, and beyond. The latest research is demonstrably pushing AI’s frontier, advancing AI-based reasoning and AI’s performance of complex multistep tasks, and bringing us closer to artificial general intelligence (high-level intelligence and reasoning that allows AI systems to autonomously perform highly complex tasks at or beyond human capacity in many diverse instances and settings). Advanced AI systems, such as AI agents (AI systems that autonomously complete tasks toward identified objectives), are leading to fundamentally new opportunities and ways of doing things, which can unsettle the status quo, possibly leading to major transformations.

In our view, AI should be embraced while preparing for the change it brings. This includes recognizing that the pace and magnitude of AI breakthroughs are faster and more impactful than anticipated. A terrific indication of AI’s promise is the 2024 Nobel Prize in Chemistry, whose winners used AI to “crack the code” of protein structures, “life’s ingenious chemical tools.” At the same time, as AI becomes widely used, guardrails, governance, and oversight should manage risks, safeguard values, and look out for those disadvantaged by disruption.

Government can help fuel the beneficial development and deployment of AI in the United States by shaping a regulatory environment conducive to AI that fosters the adoption of goods, services, practices, processes, and tools leveraging AI, in addition to encouraging AI research.

It starts with a pro-innovation policy agenda. Once the goal of promoting AI is set, the game plan to achieve it must be architected and implemented. Operationalizing policy into concrete progress can be difficult and more challenging when new technology raises novel questions infused with subtleties.

Regulatory agencies that determine specific regulatory requirements and enforce compliance play a significant part in adapting and administering regulatory regimes that encourage rather than stifle technology. Pragmatic regulation compatible with AI is instrumental so that regulation is workable as applied to AI-led innovation, further unlocking AI’s potential. Regulators should be willing to allow businesses flexibility to deploy AI-centered uses that challenge traditional approaches and conventions. That said, regulators’ critical mission of detecting and preventing harmful behavior should not be cast aside. Properly calibrated governance, guardrails, and oversight that prudently handle misuse and misconduct can support technological advancement and adoption over time.

Regulators can achieve core regulatory objectives, including, among other things, consumer protection, investor protection, and health and safety, without being anchored to specific regulatory requirements if the requirements—fashioned when agentic and other advanced AI was not contemplated—are inapt in the context of current and emerging AI.

We are not implying that vital governmental interests that are foundational to many regulatory regimes should be jettisoned. Rather, it is about how those interests are best achieved as technology changes, perhaps dramatically. It is about regulating in a way that allows AI to reach its promise and ensuring that essential safeguards are in place to protect persons from wrongdoing, abuses, and harms that could frustrate AI’s real-world potential by undercutting trust in—and acceptance of—AI. It is about fostering a regulatory environment that allows for constructive AI-human collaboration—including using AI agents to help monitor other AI agents while humans remain actively involved addressing nuances, responding to an AI agent’s unanticipated performance, engaging matters of greatest agentic AI uncertainty, and resolving tough calls that people can uniquely evaluate given all that human judgment embodies.

This takes modernizing regulation—in its design, its detail, its application, and its clarity—to work, very practically, in the context of AI by accommodating AI’s capabilities.

Accomplishing this type of regulatory modernity is not easy. It benefits from combining technological expertise with regulatory expertise. When integrated, these dual perspectives assist regulatory agencies in determining how best to update regulatory frameworks and specific regulatory requirements to accommodate expected and unexpected uses of advanced AI. Even when underpinning regulatory goals do not change, certain decades-old—or newer—regulations may not fit with today’s technology, let alone future technological breakthroughs. In addition, regulatory updates may be justified in light of regulators’ own use of AI to improve regulatory processes and practices, such as using AI agents to streamline permitting, licensing, registration, and other types of approvals.

Regulatory agencies are filled with people who bring to bear valuable experience, knowledge, and skill concerning agency-specific regulatory domains, such as financial services, antitrust, food, pharmaceuticals, agriculture, land use, energy, the environment, and consumer products. That should not change.

But the commissions, boards, departments, and other agencies that regulate so much of the economy and day-to-day life—the administrative state—should have more technological expertise in-house relevant to AI. AI’s capabilities are materially increasing at a rapid clip, so staying on top of what AI can do and how it does it—including understanding leading AI system architecture and imagining how AI might be deployed as it advances toward its frontier—is difficult. Without question, there are individuals across government with impressive technological chops, and regulators have made commendable strides keeping apprised of technological innovation. Indeed, certain parts of government are inherently technology-focused. Many regulatory agencies are not, however; but even at those agencies, in-depth understanding of AI is increasingly important.

Regulatory agencies should bring on board more individuals with technology backgrounds from the private sector, academia, research institutions, think tanks, and elsewhere—including computer scientists, physicists, software engineers, AI researchers, cryptographers, and the like.

For example, we envision a regulatory agency’s lawyers working closely with its AI engineers to ensure that regulatory requirements contemplate and factor in AI. Lawyers with specific regulatory knowledge can prompt large language models to measure a model’s interpretation of legal and regulatory obligations. Doing this systematically and with a large enough sample size requires close collaboration with AI engineers to automate the analysis and benchmark a model’s results. AI engineers could partner with an agency’s regulatory experts in discerning the technological capabilities of frontier AI systems to comport with identified regulatory objectives in order to craft regulatory requirements that account for and accommodate the use of AI in consequential contexts. AI could accelerate various regulatory functions that typically have taken considerable time for regulators to perform because they have demanded significant human involvement. To illustrate, regulators could use AI agents to assist the review of permitting, licensing, and registration applications that individuals and businesses must obtain before engaging in certain activities, closing certain transactions, or marketing and selling certain products. Regulatory agencies could augment humans by using AI systems to conduct an initial assessment of applications and other requests against regulatory requirements.
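
As a rough illustration of the lawyer-engineer collaboration sketched above, the short Python snippet below shows one way the benchmarking step might be automated: regulatory experts contribute question-and-expected-answer pairs about specific obligations, and engineers run them against a model and score agreement. Everything here – the query_model stub, the sample question, the scoring rule – is a hypothetical placeholder, not a description of any agency’s actual practice or any particular model’s API.

from dataclasses import dataclass

@dataclass
class RegQuestion:
    prompt: str    # question drafted by agency lawyers about a specific obligation
    expected: str  # the experts' reference answer ("yes" or "no")

def query_model(prompt: str) -> str:
    # Stub standing in for a call to whichever model is under evaluation;
    # wire this to a real model client when running the benchmark in earnest.
    return "yes - disclosure would be required under the cited provision"

def benchmark(questions: list[RegQuestion]) -> float:
    # Share of questions on which the model's reading matches the experts' reading.
    agreements = 0
    for q in questions:
        answer = query_model("Answer 'yes' or 'no', then cite the relevant provision.\n" + q.prompt)
        if answer.strip().lower().startswith(q.expected.lower()):
            agreements += 1
    return agreements / len(questions)

# A single, invented example of the kind of item an agency lawyer might contribute:
sample = [
    RegQuestion(
        prompt="Under the hypothetical Rule X, must a registrant disclose its use of an AI agent to generate customer-facing recommendations?",
        expected="yes",
    ),
]

print(f"Agreement with expert answers: {benchmark(sample):.0%}")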

The more regulatory agencies have the knowledge and experience of technologists in-house, the more understanding regulatory agencies will gain of cutting-edge AI. When that enriched technological insight is combined with the breadth of subject-matter expertise agencies already possess, regulatory agencies will be well-positioned to modernize regulation that fosters innovation while preserving fundamental safeguards. Sophisticated technological know-how can help guide regulators’ decisions concerning how best to revise specific regulatory features so that they are workable with AI and conducive to technological progress. The technical elements of regulation should be informed by the technical elements of AI to ensure practicable alignment between regulation and AI, allowing AI innovation to flourish without incurring undue risks.

With more in-house technological expertise, we think regulatory agencies will grow increasingly comfortable making the regulatory changes needed to accommodate, if not accelerate, the development and adoption of advanced AI.

There is more to technological progress that propels economic growth than technological capability in and of itself. An administrative state that is responsive to the capabilities of AI—including those on AI’s expanding frontier—could make a big difference converting AI’s promise into reality, continuing the history of technological breakthroughs that have improved people’s lives for centuries.

Troy A. Paredes


