In recent years, manufacturers have undergone massive digital modernization, integrating operational technology (OT) systems with information technology (IT) networks to support digital transformation, secure remote workers, and enable real-time data sharing. While this interconnectedness brings great efficiency and innovation, it has also introduced significant security risks.
This IT/OT convergence gives malicious actors easier access to previously air-gapped OT environments and exposes legacy systems to modern threats.
Attackers are increasingly leveraging IT/OT convergence to their advantage. For example, it has become common for attackers to use social engineering techniques to exploit weaknesses in a company’s own or third-party IT infrastructure and then pivot into operational technology systems.
These types of breaches can significantly disrupt a company’s physical production and distribution capabilities, often for weeks at a time.
As attackers refine their tactics and techniques, OT security is more crucial than ever. Fortinet’s 2025 State of Operational Technology and Cybersecurity Report highlights that OT security is increasingly being prioritized at the executive level, with CISOs now directly responsible for OT security in over half of surveyed organizations. However, securing OT comes with many challenges. OT environments often include legacy technology that is decades old and was deployed long before cybersecurity was a consideration.
Attackers are also increasingly leveraging artificial intelligence (AI) to automate and scale their campaigns targeting OT and critical infrastructure, which adds further complexity. The good news is that manufacturers can leverage the same technology to safeguard their operations and improve their security posture.
AI and machine learning technology can help organizations analyze vast volumes of data across their infrastructure in real time to uncover known and novel threats within their environments. As organizations continue to grapple with the ongoing cyber skills shortage, agentic AI security solutions enable faster, more efficient anomaly detection within the security operations center (SOC).
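The anomaly detection described above can be illustrated with a minimal sketch. This is not any vendor's implementation; it is a simple robust-statistics example, and the telemetry values and device names are hypothetical. A median-based (MAD) test is used because a plain mean/standard-deviation test can be skewed by the very outliers it is trying to find.

```python
import statistics

def flag_anomalies(readings, threshold=3.5):
    """Flag readings whose modified z-score (based on the median
    absolute deviation) exceeds `threshold`. The median-based test
    stays stable even when the data contains large outliers."""
    median = statistics.median(readings)
    mad = statistics.median(abs(r - median) for r in readings)
    if mad == 0:  # all readings (nearly) identical: nothing to flag
        return []
    return [i for i, r in enumerate(readings)
            if 0.6745 * abs(r - median) / mad > threshold]

# Hypothetical per-minute packet counts from an OT sensor gateway.
traffic = [120, 118, 125, 122, 119, 121, 950, 123, 117]
print(flag_anomalies(traffic))  # → [6]: the traffic spike
```

A production system would of course score many signals at once and feed alerts into the SOC workflow, but the core idea of baselining normal behavior and flagging deviations is the same.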
In addition to harnessing the power of AI for their security, there are several other mechanisms that manufacturers should also consider to amplify their security controls.
Establish Visibility and Controls. You cannot protect what you cannot see. Manufacturers must first gain full visibility into their OT assets. Once visibility is established, organizations need to protect critical and potentially vulnerable devices, which requires protective compensating controls designed for sensitive OT devices. Capabilities such as protocol-aware network policies, system-to-system interaction analysis, and endpoint monitoring can detect and prevent compromise of vulnerable assets.
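As a sketch of how passive visibility works in practice, the snippet below aggregates observed network flows into a per-device protocol inventory. The flow records, device names, and the "PLC speaking HTTP" scenario are all hypothetical; real tools parse mirrored traffic rather than tuples, but the aggregation idea is the same.

```python
from collections import defaultdict

def build_inventory(flow_records):
    """Aggregate passively observed (src, dst, protocol) flows into a
    per-device view of which protocols each asset speaks -- a starting
    point for spotting unexpected behavior on sensitive OT devices."""
    inventory = defaultdict(set)
    for src, dst, protocol in flow_records:
        inventory[src].add(protocol)
        inventory[dst].add(protocol)
    return dict(inventory)

# Hypothetical flow records captured from a mirrored switch port.
flows = [
    ("plc-01", "hmi-01", "modbus"),
    ("plc-01", "historian", "modbus"),
    ("eng-ws", "plc-01", "http"),   # unusual: web traffic to a PLC
]
inv = build_inventory(flows)
print(sorted(inv["plc-01"]))  # → ['http', 'modbus']
```

Once a baseline inventory like this exists, any new (device, protocol) pair is a candidate alert for the SOC.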
Deploy Network Segmentation. To effectively protect OT environments, manufacturers must implement strong network policy controls at all access points and create network zones or segments. Various standards such as ISA/IEC 62443 specifically call for segmentation to enforce controls between OT and IT networks and between OT systems.
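The zones-and-conduits idea behind that segmentation guidance can be sketched as a default-deny policy check. The zone names and protocols below are illustrative only, not taken from the ISA/IEC 62443 standard itself.

```python
# Traffic is denied unless the (source zone, destination zone,
# protocol) combination is explicitly allowed -- a default-deny
# allowlist in the spirit of zones and conduits.
ALLOWED_CONDUITS = {
    ("it", "dmz"): {"https"},
    ("dmz", "ot"): {"opc-ua"},
    ("ot", "ot"): {"modbus", "opc-ua"},
}

def is_allowed(src_zone, dst_zone, protocol):
    """Return True only if the conduit allowlist permits this flow."""
    return protocol in ALLOWED_CONDUITS.get((src_zone, dst_zone), set())

print(is_allowed("dmz", "ot", "opc-ua"))  # → True
print(is_allowed("it", "ot", "modbus"))   # → False: no direct IT-to-OT path
```

Note that the matrix contains no direct ("it", "ot") entry: IT traffic must traverse the DMZ, which is the point of enforcing controls between the networks.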
Integrate OT into SecOps and Incident Response. OT systems must be explicitly included in security operations and response plans. This means developing OT-specific playbooks, training cross-functional teams, and ensuring executive awareness of the unique risks and consequences of OT breaches.
Adopt a Platform-Based Security Architecture. Many manufacturers rely on a patchwork of security tools that create blind spots and inefficiencies. A unified platform approach can consolidate vendors, improve visibility, and enable faster, automated responses to threats across IT and OT.
Use OT-Specific Threat Intelligence. Generic threat feeds often miss the nuances of industrial environments. Manufacturers should invest in threat intelligence that includes OT-specific indicators and AI-powered analytics to detect emerging risks in real time.
The convergence of IT and OT is inevitable, and so are the corresponding risks. By embracing AI and OT-specific best practices, manufacturers can build a more secure, resilient future. The time to act is now, before the next breach disrupts production or critical infrastructure.
Palantir, a leading artificial intelligence (AI) software company, is emerging as a major U.S. stock favored by Koreans. [Photo source = Yonhap News]
According to the Korea Securities Depository on the 13th, the value of Palantir shares held in custody by domestic investors reached $5.85 billion (8.1329 trillion won) as of the 10th, making it the third-largest foreign stock holding after Tesla and Nvidia.
At the beginning of this year, Palantir ranked eighth by custody value, but it has jumped five spots in just nine months. The custody value has increased about 2.5-fold from $2.3 billion.
Palantir is a company that sells advanced AI services to militaries, governments, companies, and intelligence agencies. Its software uses AI to analyze vast and diverse data within an organization, find patterns, and support forecasting and decision-making.
In Korea, companies such as HD Hyundai Infracore and Samyang Foods use Palantir systems.
Palantir signed a contract with the U.S. Army last month worth up to $10 billion (W13.8 trillion) over the next decade, making it one of the largest Pentagon software contracts in U.S. history.
Palantir’s stock price more than doubled from $75.63 (105,000 won) at the end of last year to $164 (227,000 won) as of the 12th.
In the second quarter of this year, the company exceeded $1 billion in sales for the first time and posted a net profit of $0.16 per share.
Big tech is expected to spend nearly $500 billion on artificial intelligence infrastructure next year.
Over the past few years, many corporate budgets have been reoriented toward artificial intelligence (AI) investments. Nowhere is this shift more evident than among the cloud hyperscalers — Microsoft, Alphabet, and Amazon — as well as other tech titans like Meta Platforms and Oracle.
At the center of this unprecedented wave of AI infrastructure spending stands one clear beneficiary: Nvidia (NVDA). Whether directly or indirectly, the graphics processing unit (GPU) powerhouse is capturing a significant chunk of the money being spent on AI infrastructure.
Let’s explore how big tech is reshaping the AI landscape — and why these secular tailwinds point to further significant upside for Nvidia.
AI infrastructure spending has accelerated since the launch of ChatGPT
The launch of ChatGPT in November 2022 ignited an unprecedented AI arms race among the world’s largest companies. What’s important to recognize is that their capital expenditures in this battle are not plateauing — they’re accelerating.
Data from Goldman Sachs underscores just how dramatic these dynamics have become. In 2021, Alphabet, Meta, Amazon, and Microsoft collectively had capex of about $100 billion. By next year, Wall Street expects that figure to approach nearly $500 billion.
What does this mean for Nvidia?
Training and deploying large language models (LLMs) and building generative AI applications demands extraordinary amounts of computing power. GPUs are parallel processors, which makes them some of the best chips available to provide the type of computing power AI workloads require. Today, Nvidia commands a more than 90% share of the GPU market, giving it a dominant position within the AI supply chain.
A significant portion of the AI capex surge is flowing directly into GPUs and the supporting data center equipment necessary to maximize their performance.
This dynamic places Nvidia in a uniquely enviable position as the backbone of modern AI development — and it’s poised to capture incremental budget allocations as hyperscalers and other data center operators race to secure its next-generation chips the moment they become available.
Is Nvidia stock a buy?
The magnitude of hyperscaler infrastructure investment reflects more than the world’s apparently insatiable appetite for AI computing power. It underscores a deeper reality: AI is becoming the central growth engine for these companies, and securing access to the most advanced chips has shifted from being a matter of technological advantage to being a matter of competitive survival.
The accelerating pace of this spending suggests that corporations are still in the early stages of implementing their AI playbooks. Far from being a speculative bubble, this wave of infrastructure investment is the result of deliberate, long-term strategic planning by some of the world’s most influential companies as they pivot away from their traditional priorities and take on sophisticated projects in robotics, autonomous systems, cybersecurity, and more.
For Nvidia, this dynamic should translate into sustained pricing power for its wares, durable recurring demand, and a multiyear runway for rapid growth. Its GPUs and CUDA software platform have become the gold standard for enterprise AI tech stacks.
Taken together, these tailwinds suggest that Nvidia could experience meaningful valuation expansion from here. As the infrastructure chapter of the AI narrative continues to unfold, Nvidia appears well positioned to remain a core enabler of big tech’s transformation.
For these reasons, I see Nvidia stock as a no-brainer investment, and view it as one of the most compelling buy-and-hold opportunities in the market.
Adam Spatacco has positions in Alphabet, Amazon, Meta Platforms, Microsoft, and Nvidia. The Motley Fool has positions in and recommends Alphabet, Amazon, Goldman Sachs Group, Meta Platforms, Microsoft, Nvidia, and Oracle. The Motley Fool recommends Nebius Group and recommends the following options: long January 2026 $395 calls on Microsoft and short January 2026 $405 calls on Microsoft. The Motley Fool has a disclosure policy.
Since Applied Digital announced plans for a $3 billion artificial intelligence data center north of Fargo, readers have asked several questions about the facility.
The Forum spoke this week with Applied Digital Chairman and CEO Wes Cummins about the 280-megawatt facility planned for east of Interstate 29 between Harwood, North Dakota, and Fargo. The 160-acre center will sit on 925 acres near the Fargo Park District’s North Softball Complex.
The Harwood City Council voted unanimously on Wednesday, Sept. 10, to rezone the land for the center from agricultural to light industrial. With the vote also came final approval of the building permit for the center, meaning Applied Digital can break ground on the facility this month.
“We’re grateful for the City of Harwood’s support and look forward to continuing a strong partnership with the community as this project moves ahead,” Cummins said after the vote.
Applied Digital CEO and Chairman Wes Cummins talks about his company and its plans for Harwood, North Dakota, during a meeting on Tuesday, Sept. 2, 2025, at the Harwood Community Center.
Alyssa Goelzer / The Forum
Applied Digital plans to start construction this month and open partially by the end of 2026. The facility should be fully operational by early 2027, the company said.
The project should create 700 construction jobs while the facility is built, Applied Digital said. The center will need more than 200 full-time employees to operate, the company said. The facility is expected to generate tax revenue and economic growth for the area, but those estimates have not been disclosed.
Here are some questions readers had about the facility.
What will the AI data center be used for?
Applied Digital said it develops facilities that provide “high-performance data centers and colocation solutions for artificial intelligence, cloud, networking, and blockchain industries.” The center will house the computing that runs AI applications, Cummins said.
“ChatGPT runs in a facility like this,” he said. “There’s just enormous amounts of servers that can run GPUs (graphics processing units) inside of the facility and can either be doing training, which is making the product, or inference, which is what happens when people use the product.”
Applied Digital’s $3 billion data center will be constructed just southeast of the town of Harwood, North Dakota.
Map by The Forum
Applied Digital hasn’t announced what tenants would use Polaris Forge 2, the name for the Harwood facility. At a Harwood City Council meeting, Cummins said the company markets to companies in the U.S. like Google, Meta, Amazon and Microsoft.
“The demand for AI capacity continues to accelerate, and North Dakota continues to be one of the most strategic locations in the country to meet that need,” he said. “We have strong interest from multiple parties and are in advanced negotiations with a U.S.-based investment-grade hyperscaler for this campus, making it both timely and prudent to proceed with groundbreaking and site development.”
AI data centers need significant amounts of electricity to operate, Cummins said. Other centers have traditionally been built near heavily populated areas, but that isn’t necessary, he said.
North Dakota produces enough energy to export it out of state, Cummins said. The Fargo area also has the electrical grid in place to connect to that energy, he said.
“A lot of North Dakotans, especially the leaders of North Dakota, want to better utilize the energy produced by North Dakota for economic benefit inside of the state versus exporting it to neighboring states or to Canada,” he said.
North Dakota’s cold climate much of the year also will keep the center cooler than in states like Texas, meaning the facility will use significantly less power than in warmer states, Cummins said.
“We get much more efficiency out of the facility,” he said. “Those aspects make North Dakota, in my opinion, an ideal place for this type of AI infrastructure.”
The Harwood, North Dakota, elevator on Thursday, Aug. 28, 2025, looms behind the land designated for the construction of Applied Digital’s 280-megawatt data center.
David Samson / The Forum
How much water will the center use?
Cummins acknowledged that other AI data centers around the world use millions of gallons of water a day. Applied Digital, however, designed a closed-loop system so its North Dakota centers use as little water as possible, he said.
He compared the cooling system to a car radiator. A glycol coolant runs through the facilities and servers, Cummins said. After cooling the equipment, the liquid passes through chillers, much like a heat pump outside a house. Once cooled, the liquid recirculates in a continuous loop, he said.
People who operate the facility will use water for drinking and restrooms, much as they would at home, he said.
“The data center, even with the immense size, we expect it to use the same amount of water as roughly a single household,” he said. “The reason is the people inside.”
Duncan Alexander and dog Valka protest a proposed AI data center before a Planning and Zoning meeting on Tuesday, Sept. 2, 2025, in Harwood, North Dakota.
Alyssa Goelzer / The Forum
Will the AI center increase electricity rates?
Applied Digital claims that electricity rates will not go up for local residents because of the data center.
“Data centers pay a large share of fixed utility costs, which helps spread expenses across more users,” the company said.
Applied Digital’s center in Ellendale, North Dakota, much like the one to be built in Harwood, uses power produced in the state, Cummins said. The Ellendale center, which draws about 200 megawatts, saved ratepayers $5.3 million in 2023 and $5.7 million last year, he said.
“Utilizing the infrastructure more efficiently can actually drive rates down,” Cummins said, adding he expects rate savings for Harwood as well.
How much noise will the center make?
Applied Digital’s concrete walls should contain the noise from computers, Cummins said. What residents will hear is fan noise from heat pumps used to cool the facility, he said.
“It will sound like the one that runs outside of your house,” he said, describing the noise as minimal.
The loudest noise will be construction of the facility, Cummins said.
The facility will cover only 160 acres, but Applied Digital is buying 925 acres of land, with the rest of the space serving as a sound buffer, he said. People who live nearby may hear some sound, he acknowledged.
“If you’re a half mile or more from the facility, you will very unlikely hear anything,” he said.
About 300 people showed up to a town hall meeting on Monday, Aug. 25, 2025, at the Harwood Community Center to listen and to discuss a new AI data center that is planned to be built in Harwood, North Dakota.
Chris Flynn / The Forum
Has Applied Digital conducted an environmental study?
The facility won’t create emissions or other hazards that would require an environmental impact study, Cummins said.
Why move so fast to approve the facility?
Some have criticized Applied Digital and the Harwood City Council for pushing the approval process so quickly. Applied Digital announced the project in mid-August, and the city approved it in less than a month.
Cummins acknowledged that concern but noted the industry is moving fast. The U.S. is competing with China to create artificial intelligence, an industry that is not going away, Cummins said.
“I do believe we are in a race in the world for super intelligence,” he said. “It’s a race amongst companies in the U.S., but it’s also a race against other countries. … I do think it’s very important the U.S. win this AI race to super intelligence and then to artificial general intelligence.”
Applied Digital said it wanted to finish foundation and grading work on the project before winter sets in, meaning it needed an expedited approval timeline.
People in Harwood have shown overwhelming support, Cummins said, adding that protesters mostly came from other cities.
“I can’t think of a project that would spend this amount of money and have this kind of economic benefit for a community and a county and a state and have this low of a negative impact,” he said. “I think these types of projects are fantastic for these types of communities.”