AI Research

AI factories are the new power plants of intelligence

Artificial Intelligence (AI) is rapidly reshaping how we live, work, and learn. From voice assistants and recommendation engines to generative chatbots and self-driving vehicles, AI is now a core part of everyday life.

But behind every smart application lies something often unseen: infrastructure. That infrastructure is called an AI Factory.

AI Factories aren’t just clusters in the cloud. They are real, physical environments: purpose-built data centres designed from the ground up to support the world’s most demanding AI workloads.

Where traditional facilities host websites or store files, AI Factories train, run, and refine advanced models by converting massive volumes of raw data into real-time intelligence. They are the new production lines of the AI era: manufacturing insight instead of goods.

At their core, AI Factories deliver extreme power, precision cooling, and ultra-fast connectivity. These aren’t upgrades; they’re the foundational infrastructure of the AI economy.

If you’re in cloud, colocation, or enterprise IT, understanding what makes an AI Factory different is critical to staying competitive.

AI Factories: the power plants of intelligence

Just as cities rely on centralised power plants for energy, AI relies on centralised infrastructure to deliver intelligence.

AI Factories combine thousands of high-performance processors (GPUs), ultra-fast interconnects, and advanced cooling systems to run AI workloads at industrial scale.

They’re built to:

  • Train advanced models using vast datasets
  • Deliver real-time inference at scale
  • Support GPU clusters requiring intensive power, cooling, and performance

Bottom Line: AI Factories are not just next-gen data centres. They are the power plants of digital intelligence.

What makes an AI Factory different?

AI Factories mark a fundamental shift in infrastructure design. Unlike traditional data centres, they’re engineered for the scale, complexity, and performance AI demands.

Key differentiators include:

  1. Specialised AI Hardware
    • Thousands of GPUs and AI accelerators (e.g. NVIDIA Hopper, Blackwell)
    • AI-optimised CPUs like NVIDIA Grace
  2. AI-Centric Software & Orchestration
    • Full-stack platforms like NVIDIA AI Enterprise
    • Built-in scheduling, monitoring, and optimisation tools
  3. Extreme Power Density
    • 100kW to 600kW per rack
    • Electrical systems optimised for full-load performance
  4. Advanced Cooling Systems
    • Liquid cooling and immersion solutions
    • Energy-efficient thermal design
  5. High-Speed Interconnectivity
    • InfiniBand and NVLink fabrics for low-latency data flow
  6. Scalable, Sustainable Architecture
    • Modular design with support for sovereign and net-zero goals
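
The power-density figures above explain why liquid cooling appears on the list. A back-of-the-envelope sketch, using the rack-power figures from this article plus standard assumptions (the air heat-transport relation Q = ṁ·cp·ΔT and a 10 °C supply-to-return air temperature rise, neither of which is a NEXTDC specification), shows how much airflow would be needed to cool an AI rack with air alone:

```python
# Back-of-the-envelope heat-load comparison between a traditional rack and
# an AI Factory rack. The kW figures come from the article; the air
# properties and the 10 degC delta-T are textbook assumptions for
# illustration, not vendor specifications.

CP_AIR = 1.005      # kJ/(kg*K), specific heat of air
AIR_DENSITY = 1.2   # kg/m^3, air at roughly room conditions
DELTA_T = 10.0      # K, assumed supply-to-return air temperature rise

def airflow_m3_per_s(heat_kw: float, delta_t: float = DELTA_T) -> float:
    """Air volume flow needed to carry away heat_kw of IT load."""
    mass_flow = heat_kw / (CP_AIR * delta_t)  # kg/s, from Q = m_dot*cp*dT
    return mass_flow / AIR_DENSITY            # m^3/s

traditional_rack_kw = 10.0  # mid-range of the 5-15 kW figure
ai_rack_kw = 600.0          # upper figure quoted for AI Factory racks

print(f"Traditional rack: {airflow_m3_per_s(traditional_rack_kw):.2f} m^3/s of air")
print(f"AI Factory rack:  {airflow_m3_per_s(ai_rack_kw):.2f} m^3/s of air")
print(f"Density ratio:    {ai_rack_kw / traditional_rack_kw:.0f}x")
```

Under these assumptions a 600 kW rack would need on the order of 50 m³ of air per second, which is impractical in a rack footprint; moving the same heat in liquid, whose volumetric heat capacity is thousands of times higher, is why direct-to-chip and immersion cooling dominate AI Factory designs.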

Bottom Line: AI Factories are intelligence-first. If your infrastructure can’t support this shift, you risk falling behind.

Traditional data centre vs AI Factory

 Feature           | Traditional Data Centre  | AI Factory (NEXTDC-Ready)
 Primary Purpose   | Apps, storage, websites  | AI training, inference, machine learning
 Hardware Inside   | CPUs, some GPUs          | Thousands of AI-optimised GPUs and chips
 Power Per Rack    | 5–15kW                   | 30–600kW+ (supported by NEXTDC today)
 Cooling Method    | Air-based ventilation    | Liquid, direct-to-chip, immersion
 Network Speed     | Standard networking      | High-bandwidth, ultra-low-latency fabric
 Compute Scale     | General-purpose servers  | GPU clusters managed, monitored, and orchestrated at scale

Bottom Line: Traditional data centres offer versatility. AI Factories offer intelligence at industrial scale.

What workloads do AI Factories support?

AI Factories are built to handle the world’s most computationally demanding tasks.

1. Model Training

Teaching AI to understand patterns, predict outcomes, and reason at scale:

  • Language models (e.g. ChatGPT)
  • Medical image analysis
  • Autonomous driving systems

2. Inference at Scale

Deploying AI to make real-time decisions:

  • Product recommendations
  • Chatbots and assistants
  • Smart surveillance and object recognition

3. High-Performance Simulations

Powering AI-enhanced simulations:

  • Drug discovery and genomics
  • Financial modelling and risk analysis
  • Climate forecasting and energy grid management

Bottom Line: AI Factories are not general-purpose. They’re purpose-built for high-stakes, compute-intensive AI workloads.


The AI Factory era has arrived

AI Factories are already live and redefining infrastructure across industries.

They enable faster deployment, higher reliability, and scalable AI operations. As power densities, cooling requirements, and compute demands rise, traditional infrastructure is reaching its limits.

What’s Next: Why AI Factories Matter Now

  • 600kW+ rack power is becoming standard
  • AI-specific chip architectures are evolving fast
  • Cooling and interconnect innovation is a must

NEXTDC is building the infrastructure behind Australia’s AI future. With NVIDIA-certified facilities, sovereign-grade security, and national reach, NEXTDC supports everything from GPU-as-a-Service to sovereign AI deployment.

Whether you’re scaling Neo Cloud, building national capability, or launching new services, our infrastructure gives you the power to scale confidently.

Ready to build your AI Factory? Connect with NEXTDC’s infrastructure specialists and start powering what’s next.





AI Research

EU Publishes Final AI Code of Practice to Guide AI Companies

The European Commission said Thursday (July 10) that it published the final version of a voluntary framework designed to help artificial intelligence companies comply with the European Union’s AI Act.

The General-Purpose AI Code of Practice seeks to clarify legal obligations under the act for providers of general-purpose AI models such as ChatGPT, especially models that pose systemic risks, such as helping bad actors develop chemical or biological weapons.

The code’s publication “marks an important step in making the most advanced AI models available in Europe not only innovative but also safe and transparent,” Henna Virkkunen, executive vice president for tech sovereignty, security and democracy for the commission, which is the EU’s executive arm, said in a statement.

The code was developed by 13 independent experts after hearing from 1,000 stakeholders, which included AI developers, industry organizations, academics, civil society organizations and representatives of EU member states, according to a Thursday (July 10) press release. Observers from global public agencies also participated.

The EU AI Act, which was approved in 2024, is the first comprehensive legal framework governing AI. It aims to ensure that AI systems used in the EU are safe and transparent, as well as respectful of fundamental human rights.

The act classifies AI applications into risk categories — unacceptable, high, limited and minimal — and imposes obligations accordingly. Any AI company whose services are used by EU residents must comply with the act. Fines can go up to 7% of global annual revenue.

The code is voluntary, but AI model companies that sign on will benefit from lower administrative burdens and greater legal certainty, according to the commission. The next step is for the EU’s 27 member states and the commission to endorse it.

Read also: European Commission Says It Won’t Delay Implementation of AI Act

Inside the Code of Practice

The code is structured into three core chapters: Transparency; Copyright; and Safety and Security.

The Transparency chapter includes a model documentation form, described by the commission as a “user-friendly” tool to help companies demonstrate compliance with transparency requirements.

The Copyright chapter offers “practical solutions to meet the AI Act’s obligation to put in place a policy to comply with EU copyright law.”

The Safety and Security chapter, aimed at the most advanced systems with systemic risk, outlines “concrete state-of-the-art practices for managing systemic risks.”

The drafting process began with a plenary session in September 2024 and proceeded through multiple working group meetings, virtual drafting rounds and provider workshops.

The code takes effect Aug. 2, but the commission’s AI Office will enforce the rules on new AI models after one year and on existing models after two years.

A spokesperson for OpenAI told The Wall Street Journal that the company is reviewing the code to decide whether to sign it. A Google spokesperson said the company would also review the code.


AI Research

Every Blooming Thing – Technology and Artificial Intelligence in the garden – appeal-democrat.com


AI Research

Researchers develop AI model to generate global realistic rainfall maps


Working from low-resolution global precipitation data, the spateGAN-ERA5 AI model generates high-resolution fields for the analysis of heavy rainfall events. Credit: Christian Chwala, KIT

Severe weather events, such as heavy rainfall, are on the rise worldwide. Reliable assessments of these events can save lives and protect property. Researchers at the Karlsruhe Institute of Technology (KIT) have developed a new method that uses artificial intelligence (AI) to convert low-resolution global weather data into high-resolution precipitation maps. The method is fast, efficient, and independent of location. Their findings have been published in npj Climate and Atmospheric Science.

“Heavy rainfall and flooding are much more common in many regions of the world than they were just a few decades ago,” said Dr. Christian Chwala, an expert on hydrometeorology and machine learning at the Institute of Meteorology and Climate Research (IMK-IFU), KIT’s Campus Alpin in the German town of Garmisch-Partenkirchen. “But until now the data needed for reliable regional assessments of such extreme events was missing for many locations.”

His research team addresses this problem with a new AI that can generate precise global precipitation maps from low-resolution information. The result is a unique tool for the analysis and assessment of extreme weather, even for regions with poor data coverage, such as the Global South.

For their method, the researchers use ERA5 reanalysis data, which describe global precipitation at hourly intervals with a spatial resolution of about 24 kilometers. Not only was their generative AI model (spateGAN-ERA5) trained with this data, it also learned (from high-resolution weather radar measurements made in Germany) how precipitation patterns and extreme events correlate at different scales, from coarse to fine.

“Our AI model doesn’t merely create a more sharply focused version of the input data, it generates multiple physically plausible, high-resolution maps,” said Luca Glawion of IMK-IFU, who developed the model while working on his doctoral thesis in the SCENIC research project. “Details at a resolution of 2 kilometers and 10 minutes become visible. The model also provides information about the statistical uncertainty of the results, which is especially relevant when modeling regionalized events.”

He also noted that validation with weather radar data from the United States and Australia showed that the method can be applied to entirely different climatic conditions.
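
The generative model itself is far more sophisticated, but the resolution gap it bridges is easy to illustrate with a naive baseline: refining a ~24 km grid to a ~2 km grid is a 12× refinement in each dimension. The toy sketch below (array sizes and rain values are invented for illustration) shows what simple nearest-neighbour refinement does, and why it is insufficient:

```python
import numpy as np

# Naive downscaling baseline: nearest-neighbour refinement of a coarse
# precipitation field. spateGAN-ERA5 replaces this with a generative model
# that adds physically plausible fine-scale structure and an ensemble of
# outputs; this toy version only redistributes the coarse values.

SCALE = 12  # ~24 km -> ~2 km, the refinement factor described in the article

def downscale_nearest(coarse: np.ndarray, scale: int = SCALE) -> np.ndarray:
    """Refine each coarse cell into scale x scale fine cells (mm/h preserved)."""
    return np.kron(coarse, np.ones((scale, scale)))

coarse_field = np.array([[0.0, 2.5],
                         [5.0, 0.5]])  # mm/h on a toy 2x2 coarse grid

fine_field = downscale_nearest(coarse_field)
print(fine_field.shape)  # 12x finer in each dimension

# The area-mean rain rate is unchanged: interpolation adds no information
# below the coarse grid scale, which is exactly the gap a generative
# downscaling model fills with learned fine-scale structure.
print(np.isclose(fine_field.mean(), coarse_field.mean()))
```

Unlike this baseline, the KIT model produces multiple plausible high-resolution realisations per input, which is what makes the statistical uncertainty estimates mentioned above possible.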

Correctly assessing flood risks worldwide

With their method’s global applicability, the researchers offer new possibilities for better assessment of regional climate risks. “It’s the especially vulnerable regions that often lack the resources for detailed weather observations,” said Dr. Julius Polz of IMK-IFU, who was also involved in the model’s development.

“Our approach will enable us to make much more reliable assessments of where heavy rainfall and floods are likely to occur, even in such regions with poor data coverage.” Not only can the new AI method contribute to disaster control in emergencies, it can also help with the implementation of more effective long-term preventive measures such as flood control.

More information:
Luca Glawion et al, Global spatio-temporal ERA5 precipitation downscaling to km and sub-hourly scale using generative AI, npj Climate and Atmospheric Science (2025). DOI: 10.1038/s41612-025-01103-y

Citation:
Researchers develop AI model to generate global realistic rainfall maps (2025, July 10)
retrieved 10 July 2025
from https://phys.org/news/2025-07-ai-generate-global-realistic-rainfall.html
