AI Research

As Data Centers Expand, Should That Concern Schools?


Over the last three years, generative artificial intelligence has made its way into many classrooms. Now, a White House initiative could plant the pervasive technology right outside schools as well.

Late last month, the Trump administration rolled out its “Winning the AI Race: America’s AI Action Plan,” detailing efforts to accelerate innovation, build AI infrastructure and boost international diplomacy and security across 90 policy changes.

One key focus is “promoting rapid expansion” of data centers, which are large, standalone buildings housing tech systems that support AI’s workload.

Data centers — typically the size of a Walmart — are already rapidly cropping up across the nation. Virginia, deemed the “data center capital of the U.S.,” reports housing 35 percent of all known “hyperscale” data centers worldwide.

The structures could begin to creep into more communities, including near schools, if local zoning allows.


But with those come concerns. The centers, like AI as a whole, consume large amounts of energy and give off large amounts of heat. Most are built with concrete, whose production emits high levels of carbon. They also require large amounts of potable water, which Joseph Carvalko, chairman of Yale University’s Technology and Ethics working group, says could drain local reservoirs. And at an average of 100,000 square feet, with their accompanying power lines, they create an imposing physical presence.

For these reasons, some communities are trying to limit the encroachment of data centers. Louisa County, Virginia, recently made headlines for pushing back against a proposed Amazon Web Services data center spanning 7.2 million square feet. Residents feared it would affect drinking water, decimate valuable rural land and contribute to noise pollution.

“We’re letting or even entertaining the idea of a billion dollar corporation coming around and messing with our drinking water. I think it’s pretty humiliating,” Louisa resident Brittany Carroll said in an interview with The Virginia Mercury.

The construction of data centers next to schools doesn’t necessarily create problems unique to schools alone, according to Andrew Chien, a professor of computer science at the University of Chicago.

But similar to Louisa County, it could bring concerns to the community as a whole.

“There is increased power use and water use; generally that’s a regional issue,” Chien says.

Both Chien and Carvalko expect the centers to go into small towns that may not have the wherewithal to combat the possible downsides or have the proper zoning laws in place to mitigate them.

“Smaller communities are particularly vulnerable in my own opinion for good reason,” Carvalko says. “Having worked in corporations my entire life, corporations will take advantage of a small community, because they realize it’ll be easier to get through them versus larger communities. They’ll give them tax breaks and incentivize them, but they can’t fix the environment.”

While corporations tout the centers as job builders, the job creation is in reality minimal, and the employment opportunities tend to be short term. A report by Stanford University’s Bill Lane Center for the American West says such jobs claims can be dubious, pointing to cases in Phoenix and a small county in Oregon where some officials faced recalls after giving millions in tax breaks to large tech companies.

“The problem with data centers is they have a non-local benefit,” Chien says. “Normally, with a factory, you get jobs and investments in the community. But this serves AI and computation with people far away. And I think some communities will decide they had enough of it.”

That won’t stop AI companies from continuing, though, especially now that they have official support from the White House.

“The question is, ‘How much computing do you think we can use?’ and the answer is infinite,” Chien says. “So, it’s about where it’s going to be, and how to do it safely and cleanly.”

There are some efforts underway to make the centers more environmentally sustainable. A group of Harvard engineering students spent their spring semester creating four tools to help developers find new, ecologically friendlier locations or transition existing locations to more sustainable technologies. Carvalko added that there is also a push toward smaller facilities, called edge data centers, closer to the size of a car than a superstore.

“That would probably be more accepted and more sustainable,” he says. “Eventually I think they will find their own market and be part of this system of data centers and perhaps a good part of it.”

Despite concerns, a possible upside for students who attend school near a data center is exposure to a new career option.

“It might be inspiring; if they made it attractive, it could inspire kids to work on AI and tech,” Chien says.




AI Research

Artificial Intelligence Stocks Rally as Nvidia, TSMC Gain on Oracle Growth Forecast

This article first appeared on GuruFocus.

Sep 11 – Oracle (ORCL) projected its cloud infrastructure revenue will surge to $114 billion by fiscal 2030, a forecast that triggered strong gains across artificial intelligence-related stocks.

The company also outlined plans to spend $35 billion in capital expenditures by fiscal 2026 to expand its data center capacity.

Shares of Oracle soared 36% on Wednesday on the outlook, as investors bet on rising demand for GPU-based cloud services. Nvidia (NASDAQ:NVDA), which supplies most of the chips and systems for AI data centers, climbed 4%. Broadcom (NASDAQ:AVGO), a key networking and custom chip supplier, gained 10%.

Other chipmakers also advanced. Advanced Micro Devices (AMD) added 2%, while Micron Technology (MU) increased 4% on expectations for higher memory demand in AI servers. Taiwan Semiconductor Manufacturing Co. (NYSE:TSM), which produces chips for Nvidia and other AI players, rose more than 4% after reporting a 34% jump in August sales.

Server makers Super Micro Computer (SMCI) and Dell Technologies (DELL) each rose 2%, supported by their role in assembling Nvidia-powered systems. CoreWeave (CRWV), an Oracle rival in the neo-cloud segment, advanced 17% as investors continued to bet on accelerating AI compute demand.



AI Research

Oracle Health Deploys AI to Tackle $200B Administrative Challenge

Oracle Health introduced tools aimed at easing administrative healthcare burdens and costs.

The company’s new artificial intelligence-powered offerings are designed to simplify and lower the cost of processes such as prior authorizations, medical coding, claims processing and eligibility determination, according to a Thursday (Sept. 11) press release.

“Oracle Health is working to solve long-standing problems in healthcare with AI-powered solutions that simplify transactions between payers and providers,” Seema Verma, executive vice president and general manager, Oracle Health and Life Sciences, said in the release. “Our offerings can help minimize administrative complexity and waste to improve accuracy and reduce costs for both parties. With these capabilities, providers can better navigate payer-specific coverage, medical necessity and billing rules while enabling payers to lower administrative workloads by receiving more accurate claims from the start.”

Annual administrative costs tied to healthcare billing and insurance are estimated at roughly $200 billion, the release said. That figure continues to rise, largely because of the complexity of medical and financial processing rules and evolving payment models. Because those rules and models are time-consuming and inefficient to follow and adopt, providers often fall back on manual processes, which are prone to error.

The PYMNTS Intelligence report “Healthcare Payments Need Modernization to Drive Financial Health” found that healthcare’s lingering reliance on manual payment systems is proving to be a bottleneck for its financial health and operational efficiency.

The worldwide market for healthcare digital payments is forecast to grow at a compound annual growth rate of 19% between 2024 and 2030, signaling both a shift toward and a market opportunity for digital solutions, per the report.

The report also explored how these outdated systems strain revenues and create inefficiencies, contrasting the sector’s slower adoption with other industries that have embraced digital payment tools.

“On the patient side, the benefits are equally compelling,” PYMNTS wrote in June. “Digital transactions offer hassle-free experiences, which are a driver for patient satisfaction and, ultimately, patient retention.”

The research found that 67% of executives and decision-makers in healthcare payer organizations said that their firms’ manual payment platforms were actively hindering efficiency. In addition, 74% said these platforms put their organizations at greater risk for regulatory fines and penalties.



AI Research

Can AI optimize building retrofits? Research shows promise in CO₂ reduction but gaps in economic reasoning

Researchers from Michigan State University have conducted one of the first systematic evaluations of large language models (LLMs) in the domain of building energy retrofits, where decisions on upgrades such as insulation, heat pumps, and electrification can directly impact energy savings and carbon reduction.

The study, titled “Can AI Make Energy Retrofit Decisions? An Evaluation of Large Language Models,” published on arXiv, examines whether LLMs can reliably guide retrofit decision-making across diverse U.S. housing stock. It addresses the limitations of conventional methods, which are often too technical, data-heavy, or opaque for practical adoption, particularly at large scale.

How accurate are AI models in selecting retrofit measures?

The researchers tested seven widely used LLMs (ChatGPT o1, ChatGPT o3, DeepSeek R1, Grok 3, Gemini 2.0, Llama 3.2, and Claude 3.7) on a dataset of 400 homes drawn from 49 states. Each home profile included details such as construction vintage, floor area, insulation levels, heating and cooling systems, and occupant patterns. The models were asked to recommend retrofit measures under two separate objectives: maximizing carbon dioxide reduction (technical context) and minimizing payback period (sociotechnical context).

The analysis found that the LLMs delivered effective results in technical optimization tasks. Top-1 accuracy, counting only the single best recommendation, reached 54.5 percent, and accuracy climbed as high as 92.8 percent when the top five matches were considered, even without fine-tuning. This reflects the models’ ability to align with physics-based benchmarks in scenarios where clear engineering goals, such as cutting carbon emissions, are prioritized.
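For readers unfamiliar with the metric, the following is a minimal Python sketch of how top-1 and top-5 accuracy can be computed; the data structures and measure names are hypothetical and are not taken from the paper.

```python
# Illustrative sketch of top-k accuracy, assuming hypothetical data:
# each entry pairs a model's ranked retrofit recommendations with the
# physics-based benchmark's best measure for that home.
def top_k_accuracy(ranked_recommendations, benchmark_best, k):
    """Fraction of homes where the benchmark's best measure
    appears in the model's top-k recommendations."""
    hits = sum(
        1 for recs, best in zip(ranked_recommendations, benchmark_best)
        if best in recs[:k]
    )
    return hits / len(benchmark_best)

# Hypothetical example with three homes
ranked = [
    ["heat pump", "attic insulation", "window upgrade"],
    ["wall insulation", "heat pump", "air sealing"],
    ["water heater", "attic insulation", "heat pump"],
]
best = ["heat pump", "heat pump", "attic insulation"]

print(top_k_accuracy(ranked, best, k=1))  # top-1: 1 of 3 homes
print(top_k_accuracy(ranked, best, k=3))  # top-3: 3 of 3 homes
```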

On the other hand, when the focus shifted to minimizing payback period, results weakened substantially. Top-1 accuracy fell as low as 6.5 percent in some models, with only Gemini 2.0 surpassing 50 percent at the broader Top-5 threshold. The study concludes that economic trade-offs, which require balancing upfront investment against long-term savings, remain difficult for LLMs to interpret accurately.
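A small, entirely hypothetical calculation illustrates why payback is harder than it looks: the cheapest measure is not necessarily the one that pays for itself fastest.

```python
# Hypothetical numbers for illustration only; simple payback ignores
# financing, incentives, and energy price escalation.
measures = {
    # measure: (upfront cost in $, annual energy savings in $)
    "air sealing": (600, 80),
    "attic insulation": (2500, 450),
    "heat pump": (9000, 1100),
}

for name, (cost, savings) in measures.items():
    payback_years = cost / savings
    print(f"{name}: {payback_years:.1f} years")

# air sealing: 7.5 years      <- cheapest, but not the fastest payback
# attic insulation: 5.6 years <- best simple payback in this example
# heat pump: 8.2 years
```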

How consistent and reliable are AI-generated decisions?

The study also examined whether different LLMs converged on the same recommendations. Here, performance was less encouraging. Consistency between models was low, and in some cases their agreement was worse than chance. Interestingly, the models that performed best in terms of accuracy, such as ChatGPT o3 and Gemini 2.0, were also the ones most likely to diverge from other systems. This indicates that while some models may excel, they do not necessarily produce results that align with peers, creating challenges for standardization in real-world applications.
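One simple way to picture the consistency problem, though not necessarily the paper’s own statistic, is pairwise agreement between models’ top picks compared with the agreement expected by chance; the sketch below uses made-up recommendations.

```python
from itertools import combinations
from collections import Counter

# Hypothetical top-1 picks from three models over five homes.
picks = {
    "model_a": ["heat pump", "insulation", "heat pump", "windows", "insulation"],
    "model_b": ["insulation", "insulation", "heat pump", "heat pump", "windows"],
    "model_c": ["windows", "heat pump", "insulation", "windows", "insulation"],
}

def pairwise_agreement(a, b):
    """Share of homes where two models pick the same measure."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

for m1, m2 in combinations(picks, 2):
    print(m1, m2, pairwise_agreement(picks[m1], picks[m2]))

# Chance agreement depends on how often each measure is picked overall;
# observed agreement below this baseline signals worse-than-chance consistency.
all_picks = [p for seq in picks.values() for p in seq]
freqs = Counter(all_picks)
chance = sum((n / len(all_picks)) ** 2 for n in freqs.values())
print("chance baseline:", round(chance, 2))
```

A chance-corrected statistic such as Cohen’s kappa builds on the same comparison of observed versus expected agreement.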

The findings underscore the difficulty of relying on AI for high-stakes energy decisions when consensus is lacking. In practice, building owners, policymakers, and utility companies require not just accurate but also consistent recommendations. Low inter-model reliability highlights the importance of developing frameworks that validate and harmonize AI outputs before they can be integrated into large-scale retrofit programs.

What shapes AI reasoning in retrofit decisions?

The researchers also explored how LLMs arrive at their decisions. Sensitivity analysis showed that most models, like physics-based baselines, prioritized location and building geometry. Variables such as county, state, and floor space were consistently weighted as the most influential factors. However, the models paid less attention to occupant behaviors and technology choices, even though these can be critical in shaping real-world outcomes.

The reasoning patterns offered further insight. Among the tested systems, ChatGPT o3 and DeepSeek R1 provided the most structured, step-by-step explanations. Their workflows followed an engineering-like logic, beginning with baseline energy assumptions, adjusting for envelope improvements, calculating system efficiency, incorporating appliance impacts, and finally comparing outcomes. Yet, while the logic mirrored engineering principles, it was often simplified, overlooking nuanced contextual dependencies such as occupant usage levels or detailed climate variations.
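In very simplified form, and with made-up numbers, that engineering-style chain of adjustments might look like the sketch below; real physics-based tools model each of these steps in far more detail.

```python
# Simplified, hypothetical version of the step-by-step reasoning described:
# baseline -> envelope -> system efficiency -> appliances -> compare.
# All figures are illustrative.
def estimate_annual_kwh(baseline_kwh, envelope_reduction,
                        hvac_efficiency_gain, appliance_kwh):
    # 1. Start from a baseline heating/cooling estimate for the home.
    load = baseline_kwh
    # 2. Adjust for envelope improvements (insulation, air sealing).
    load *= (1 - envelope_reduction)
    # 3. Adjust for a more efficient heating/cooling system.
    load *= (1 - hvac_efficiency_gain)
    # 4. Add appliance and plug loads, which retrofits touch less.
    return load + appliance_kwh

before = estimate_annual_kwh(12000, 0.0, 0.0, 4000)
after = estimate_annual_kwh(12000, 0.20, 0.30, 4000)
saved_kwh = before - after
# 5. Compare outcomes, e.g. CO2 avoided at an assumed grid emission factor.
print(saved_kwh * 0.4, "kg CO2 avoided per year (assuming 0.4 kg/kWh)")
```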

The authors also noted that prompt design played a key role in outcomes. Slight adjustments in how questions were phrased could significantly shift model reasoning. For example, if not explicitly instructed to consider both upfront cost and energy savings, some models defaulted to choosing the lowest-cost option when evaluating payback. This sensitivity suggests that successful deployment of AI in retrofit contexts will depend heavily on careful prompt engineering and domain-specific adaptation.
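That sensitivity can be illustrated with two hypothetical prompt variants (the paper’s actual prompts are not reproduced here); the second spells out the cost-versus-savings trade-off instead of leaving it implicit in the word “payback.”

```python
home_profile = "1960s single-family home, 1,800 sq ft, gas furnace, ..."

# Variant 1: terse objective; some models may read this as "pick the
# cheapest measure" rather than weighing cost against savings.
prompt_terse = (
    f"Given this home: {home_profile}\n"
    "Recommend the retrofit measure with the shortest payback period."
)

# Variant 2: the trade-off spelled out explicitly.
prompt_explicit = (
    f"Given this home: {home_profile}\n"
    "Recommend the retrofit measure with the shortest simple payback, "
    "defined as upfront installed cost divided by expected annual energy "
    "cost savings. Consider both the upfront cost and the savings; do not "
    "simply choose the cheapest measure."
)
```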

A cautious but forward-looking conclusion

The evaluation highlights both the promise and the limitations of current LLMs in building energy retrofits. On one hand, the ability to achieve near 93 percent alignment with top retrofit measures in technical contexts shows significant potential for AI to streamline decision-making and improve energy efficiency strategies. On the other, weak performance in sociotechnical trade-offs, low inter-model consistency, and simplified reasoning demonstrate that these tools are not yet ready to replace domain expertise.

To sum up, the authors find that LLMs can complement, but not substitute for, traditional methods and expert judgment in retrofit planning. They recommend further development of domain-specific models, fine-tuning with validated datasets, and hybrid approaches that integrate AI with physics-based simulations to ensure accuracy and traceability.

For policymakers and practitioners, the study provides an important benchmark: AI can indeed assist in advancing retrofit strategies, especially for carbon reduction, but its current shortcomings demand careful oversight. As cities and communities push toward energy transition goals, ensuring that AI systems are transparent, consistent, and context-aware will be essential before they can be deployed at scale.


