AI Insights


Honeywell, Generac, and Other U.S. Manufacturers Seek a Breakthrough in the AI Infrastructure Market: “Over $400 Billion in Data Center Infrastructure Investment This Year”

A Microsoft data center [Photo: Microsoft]

As the artificial intelligence (AI) boom drives the expansion of data center investment, leading U.S. manufacturing companies are entering the market in search of new growth.

The Financial Times reported on the 6th (local time) that companies such as Generac, Gates Industrial, and Honeywell are targeting demand from hyperscalers with specialized equipment such as generators and cooling systems.

“Hyperscaler” is a term used mainly in the data center and cloud industries for a company that operates large-scale computing infrastructure designed to handle massive volumes of data quickly and efficiently. Representative examples include big tech companies such as Amazon, Microsoft (MS), Google, and Meta.

Generac, reportedly the largest producer of residential generators, has jumped into the generator market for large data centers in a bid to recover its stock price, which is down 75% from its 2021 high. It recently invested $130 million in large-generator production facilities and is also expanding into the electric vehicle charger and home battery markets.

Gates, which previously manufactured parts for heavy trucks and equipment, has also spent the past year developing new cooling pumps and pipes for data centers, since Nvidia’s latest AI chip, “Blackwell,” makes liquid cooling a prerequisite. Gates explained, “Most equipment can be adapted for data centers with a little customization.”

Honeywell, an industrial equipment giant, has begun targeting the market with its cooling system control solutions. On the back of these, sales of its hybrid cooling controllers have recorded double-digit growth over the past 18 months.

According to market research firm Gartner, more than $400 billion is expected to be invested in building data center infrastructure worldwide this year, with over 75% of that spending concentrated among hyperscalers such as Amazon, Microsoft, Meta, and Google.




AI Insights

Apple Supplier Lens Tech Said to Price $607 Million Hong Kong Listing at Top of Range





Apple Inc. supplier Lens Technology Co. has raised HK$4.8 billion ($607 million) after pricing its Hong Kong listing at the top of the marketed range, according to people familiar with the matter.




AI Insights

The Cognitive Cost Of AI-Assisted Learning – Analysis – Eurasia Review



A decade ago, if someone had claimed machines would soon draft essays, debug code, and explain complex theories in seconds, the idea might have sounded like science fiction. Today, artificial intelligence is doing all of this and more. Large Language Models (LLMs) like ChatGPT have transformed how information is consumed, processed, and reproduced. But as the world becomes more comfortable outsourcing intellectual labor, serious questions are emerging about what this means for human cognition.

It isn’t a doomsday scenario, at least not yet. But mounting research suggests there may be cognitive consequences to the growing dependence on AI tools, particularly in academic and intellectual spaces. The concern isn’t that these tools are inherently harmful, but rather that they change the mental labor required to learn, think, and remember. When answers are pre-packaged and polished, the effort that usually goes into connecting ideas, analyzing possibilities, or struggling through uncertainty may quietly fade away.

A recent study conducted by researchers at the MIT Media Lab helps illustrate this. Fifty-four college students were asked to write short essays under three conditions: using only their brains, using the internet without AI, or using ChatGPT freely. Participants wore EEG headsets to monitor brain activity. The results were striking. Those who relied on their own cognition or basic online searches showed higher brain connectivity in regions tied to attention, memory retrieval, and creativity. In contrast, those who used ChatGPT showed reduced neural activity. Even more concerning: these same students often struggled to recall what they had written.

This finding echoes a deeper pattern. In “The Shallows: What the Internet Is Doing to Our Brains,” Nicholas Carr argues that technologies designed to simplify access to information can also erode our ability to engage deeply with that information. Carr’s thesis, originally framed around search engines and social media, gains renewed relevance in an era where even thinking can be automated.

AI tools have democratized knowledge, no doubt. A student confused by a math problem or an executive drafting a report can now receive tailored, well-articulated responses in moments. But this ease may come at the cost of originality. According to the same MIT study, responses generated with the help of LLMs tended to converge around generic answers. When participants were asked subjective questions like “What does happiness look like?”, their essays often landed in a narrow band of bland, agreeable sentiment. It’s not hard to see why: LLMs are trained to produce outputs that reflect the statistical average of billions of human texts.

This trend toward homogenization poses philosophical as well as cognitive challenges. In “The Age of Surveillance Capitalism,” Shoshana Zuboff warns that as technology becomes more capable of predicting human behavior, it also exerts influence over it. If the answers generated by AI reflect the statistical mean, then users may increasingly absorb, adopt, and regurgitate those same answers, reinforcing the very patterns that machines predict.

The concern isn’t just about bland writing or mediocre ideas. It’s about losing the friction that makes learning meaningful. In “Make It Stick: The Science of Successful Learning,” Brown, Roediger, and McDaniel emphasize that learning happens most effectively when it involves effort, retrieval, and struggle. When a student bypasses the challenge and lets a machine produce the answer, the brain misses out on the very processes that cement understanding.

That doesn’t mean AI is always a cognitive dead-end. Used wisely, it can be a powerful amplifier. The same MIT study found that participants who first engaged with a prompt using their own thinking and later used AI to enhance their responses actually showed higher neural connectivity than those who only used AI. In short, starting with your brain and then inviting AI to the table might be a productive partnership. Starting with AI and skipping the thinking altogether is where the danger lies.

Historically, humans have always offloaded certain cognitive tasks to tools. In “Cognition in the Wild,” Edwin Hutchins shows how navigation in the Navy is a collective, tool-mediated process that extends individual cognition across people and systems. Writing, calculators, calendars, even GPS—these are all examples of external aids that relieve our mental burden. But LLMs are different in kind. They don’t just hold information or perform calculations; they construct thoughts, arguments, and narratives—the very outputs we once considered evidence of human intellect.

The worry becomes more acute in educational settings. A Harvard study published earlier this year found that while generative AI made workers feel more productive, it also left them less motivated. This emotional disengagement is subtle, but significant. If students begin to feel they no longer own their ideas or creations, motivation to learn may gradually erode. In “Deep Work,” Cal Newport discusses how focus and effort are central to intellectual development. Outsourcing too much of that effort risks undermining not just skills, but confidence and identity.

Cognitive offloading isn’t new, but the scale and intimacy of AI assistance is unprecedented. Carnegie Mellon researchers recently described how relying on AI tools for decision-making can leave minds “atrophied and unprepared.” Their concern wasn’t that these tools fail, but that they work too well. The smoother the experience, the fewer opportunities the brain has to engage. Over time, this could dull the mental sharpness that comes from grappling with ambiguity or constructing arguments from scratch.

Of course, there’s nuance. Not all AI use is equal, and not all users will be affected in the same way. A senior using a digital assistant to remember appointments is not the same as a student using ChatGPT to write a philosophy paper. As “Digital Minimalism” by Cal Newport suggests, it’s not the presence of technology, but the purpose and structure of its use that determines its impact.

Some might argue that concerns about brain rot echo earlier panics. People once feared that writing would erode memory, that newspapers would stunt critical thinking, or that television would replace reading altogether. And yet, society adapted. But the difference now lies in the depth of substitution. Where earlier technologies altered the way information was delivered, LLMs risk altering the way ideas are born.

The road forward is not to abandon AI, but to treat it with caution. Educators, researchers, and developers need to think seriously about how these tools are integrated into daily life, especially in formative contexts. Transparency, guided usage, and perhaps even deliberate “AI-free zones” in education could help preserve the mental muscles that matter.

In the end, the question is not whether AI will shape how people think. It already is. The better question is whether those changes will leave future generations sharper, or simply more efficient at being average.

References

  • Carr, N. (2010). The Shallows: What the Internet Is Doing to Our Brains. W.W. Norton.
  • Zuboff, S. (2019). The Age of Surveillance Capitalism. PublicAffairs.
  • Brown, P.C., Roediger, H.L., & McDaniel, M.A. (2014). Make It Stick: The Science of Successful Learning. Belknap Press.
  • Hutchins, E. (1995). Cognition in the Wild. MIT Press.
  • Newport, C. (2016). Deep Work: Rules for Focused Success in a Distracted World. Grand Central.
  • Newport, C. (2019). Digital Minimalism: Choosing a Focused Life in a Noisy World. Portfolio.
  • Daugherty, P. R., & Wilson, H. J. (2018). Human + Machine: Reimagining Work in the Age of AI. Harvard Business Review Press.




AI Insights

OpenAI says GPT-5 will unify breakthroughs from different models



OpenAI has again confirmed that it will unify multiple models into one and create GPT-5, which is expected to ship sometime in the summer.

ChatGPT currently offers many capable models for different tasks. While the models are powerful, choosing between them can be confusing because their names are so similar.

Another issue is that OpenAI maintains an “o” lineup for reasoning capabilities, while GPT-4o and other models provide multimodality.

With GPT-5, OpenAI plans to unify these breakthroughs across its lineup and deliver the best of both worlds.

“We’re truly excited to not just make a net new great frontier model, we’re also going to unify our two series,” says Romain Huet, OpenAI’s Head of Developer Experience.

“The breakthrough of reasoning in the O-series and the breakthroughs in multi-modality in the GPT-series will be unified, and that will be GPT-5. And I really hope I’ll come back soon to tell you more about it.”

OpenAI previously claimed that GPT-5 will also make the existing models significantly better at everything.

“GPT-5 is our next foundational model that is meant to just make everything our models can currently do better and with less model switching,” Jerry Tworek, who is a VP at OpenAI, wrote in a Reddit post.

Right now, we don’t know when GPT-5 will begin rolling out to everyone, but Sam Altman suggests it’s coming in the summer.



