Connect with us

AI Research

We can build safer tunnels with artificial intelligence

Published

on


The future of tunnel construction isn’t just about better explosives, steel, and machinery. It’s digital, data-driven, and smarter. And perhaps most importantly: safer, writes the author. Credit: Norwegian Geotechnical Institute

Every day, new tunnels are being built through rock across the country. The completed tunnels are safe, but the construction phase presents challenges.

For those working with blasting and drilling, the risk of rockfalls, water ingress, or unpredictable conditions is part of daily life. So how can we make this phase safer, more precise, and less costly?

My answer is: with the help of artificial intelligence.

“Rock” refers to the material we drill and blast through, while “mountain” describes the landform we see in nature. This article is about rock.

Too many subjective assessments

Over many years working on various tunnel and mining projects, I’ve seen that many decisions in tunnel construction are still based on experience and often subjective judgment.

In the planning phase, we use and to predict conditions. During excavation, the rock mass is assessed visually, and we analyze how the drilling machine behaves.

For example, rapid penetration into the rock may indicate weaker zones. But without the ability to see inside the rock, these assessments carry a degree of uncertainty. That’s where the risk lies.

Today, we have access to far more data than we actually use. A modern drilling machine collects thousands of measurements per minute while drilling. This is called MWD data—”Measure While Drilling.”

MWD data acts like a signature of the rock: We get information about the rock’s resistance, how much water flushing is needed, and how much pressure is required to drill forward. These data are often just stored and not actively used for decision-making.

Developing machine learning models to predict what lies ahead

In my Ph.D. research, I developed machine learning models that can use MWD data to predict what lies ahead of the tunnel face. What type of rock will we encounter? How weak is it? Should we reinforce the tunnel here? Can we reuse the blasted rock, or must it go to a landfill?

With such models, we can anticipate what’s coming and take action in time. Instead of waiting for a collapse, we can act before it happens.

Here’s how it works: We collect data from the drilling machine. These are transformed into a kind of digital fingerprint of the rock. The compares this with thousands of previous cases and suggests what type of rock we’re dealing with. It all happens in seconds.

And it’s not just the rock type we can predict. The models can also suggest what actions are needed: How far should we blast the next round? Should we reinforce the rock with extra bolts and concrete before continuing? The result is safer tunnels, less resource overuse, and lower costs.

Machine learning gives engineers a new tool to reduce costs and accidents

Society has much to gain. Tunnels provide shorter travel routes, better public transport, and lower climate emissions. But they must be built safely. Rockfall accidents can cost lives and large sums of money. By using machine learning, we give engineers a new tool. A tool that doesn’t replace them, but helps them make better decisions.

Additionally, more efficient and safer tunneling can make underground mining more attractive as an alternative to large open-pit mines. This means less impact on landscapes and ecosystems, and better coexistence between resource extraction and nature conservation.

When this tool is used on a large scale, we can also gather experience from tunnel construction across the country and use it to further improve the models. This creates a positive feedback loop: The more we build, the better we get. The technology can also be adapted for mining and international projects, giving Norwegian expertise a competitive edge.

The future of construction isn’t just about better explosives, steel, and machines. It’s digital, data-driven, and smarter. And perhaps most importantly: safer.

With artificial intelligence, we can “see through the rock”—before we hit it.

More information:
Tom Frode Hansen, Machine Learning for Rock Mass Assessment and Decision-Making in Underground Construction: Towards a reproducible and trustworthy ML-modelling process. www.researchgate.net/publicati … hD-Tom-F-Hansen-2025

Provided by
Norwegian Geotechnical Institute

Citation:
Researcher: We can build safer tunnels with artificial intelligence (2025, August 18)
retrieved 18 August 2025
from https://phys.org/news/2025-08-safer-tunnels-artificial-intelligence.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no
part may be reproduced without the written permission. The content is provided for information purposes only.





Source link

AI Research

UCLA Researchers Enable Paralyzed Patients to Control Robots with Thoughts Using AI – CHOSUNBIZ – Chosun Biz

Published

on



UCLA Researchers Enable Paralyzed Patients to Control Robots with Thoughts Using AI – CHOSUNBIZ  Chosun Biz



Source link

Continue Reading

AI Research

Hackers exploit hidden prompts in AI images, researchers warn

Published

on


Cybersecurity firm Trail of Bits has revealed a technique that embeds malicious prompts into images processed by large language models (LLMs). The method exploits how AI platforms compress and downscale images for efficiency. While the original files appear harmless, the resizing process introduces visual artifacts that expose concealed instructions, which the model interprets as legitimate user input.

In tests, the researchers demonstrated that such manipulated images could direct AI systems to perform unauthorized actions. One example showed Google Calendar data being siphoned to an external email address without the user’s knowledge. Platforms affected in the trials included Google’s Gemini CLI, Vertex AI Studio, Google Assistant on Android, and Gemini’s web interface.

Read More: Meta curbs AI flirty chats, self-harm talk with teens

The approach builds on earlier academic work from TU Braunschweig in Germany, which identified image scaling as a potential attack surface in machine learning. Trail of Bits expanded on this research, creating “Anamorpher,” an open-source tool that generates malicious images using interpolation techniques such as nearest neighbor, bilinear, and bicubic resampling.

From the user’s perspective, nothing unusual occurs when such an image is uploaded. Yet behind the scenes, the AI system executes hidden commands alongside normal prompts, raising serious concerns about data security and identity theft. Because multimodal models often integrate with calendars, messaging, and workflow tools, the risks extend into sensitive personal and professional domains.

Also Read: Nvidia CEO Jensen Huang says AI boom far from over

Traditional defenses such as firewalls cannot easily detect this type of manipulation. The researchers recommend a combination of layered security, previewing downscaled images, restricting input dimensions, and requiring explicit confirmation for sensitive operations.

“The strongest defense is to implement secure design patterns and systematic safeguards that limit prompt injection, including multimodal attacks,” the Trail of Bits team concluded.



Source link

Continue Reading

AI Research

When AI Freezes Over | Psychology Today

Published

on


A phrase I’ve often clung to regarding artificial intelligence is one that is also cloaked in a bit of techno-mystery. And I bet you’ve heard it as part of the lexicon of technology and imagination: “emergent abilities.” It’s common to hear that large language models (LLMs) have these curious “emergent” behaviors that are often coupled with linguistic partners like scaling and complexity. And yes, I’m guilty too.

In AI research, this phrase first took off after a 2022 paper that described how abilities seem to appear suddenly as models scale and tasks that a small model fails at completely, a larger model suddenly handles with ease. One day a model can’t solve math problems, the next day it can. It’s an irresistible story as machines have their own little Archimedean “eureka!” moments. It’s almost as if “intelligence” has suddenly switched on.

But I’m not buying into the sensation, at least not yet. A newer 2025 study suggests we should be more careful. Instead of magical leaps, what we’re seeing looks a lot more like the physics of phase changes.

Ice, Water, and Math

Think about water. At one temperature it’s liquid, at another it’s ice. The molecules don’t become something new—they’re always two hydrogens and an oxygen—but the way they organize shifts dramatically. At the freezing point, hydrogen bonds “loosely set” into a lattice, driven by those fleeting electrical charges on the hydrogen atoms. The result is ice, the same ingredients reorganized into a solid that’s curiously less dense than liquid water. And, yes, there’s even a touch of magic in the science as ice floats. But that magic melts when you learn about Van der Waals forces.

The same kind of shift shows up in LLMs and is often mislabeled as “emergence.” In small models, the easiest strategy is positional, where computation leans on word order and simple statistical shortcuts. It’s an easy trick that works just enough to reduce error. But scale things up by using more parameters and data, and the system reorganizes. The 2025 study by Cui shows that, at a critical threshold, the model shifts into semantic mode and relies on the geometry of meaning in its high-dimensional vector space. It isn’t magic, it’s optimization. Just as water molecules align into a lattice, the model settles into a more stable solution in its mathematical landscape.

The Mirage of “Emergence”

That 2022 paper called these shifts emergent abilities. And yes, tasks like arithmetic or multi-step reasoning can look as though they “switch on.” But the model hasn’t suddenly “understood” arithmetic. What’s happening is that semantic generalization finally outperforms positional shortcuts once scale crosses a threshold. Yes, it’s a mouthful. But happening here is the computational process that is shifting from a simple “word position” in a prompt (like, the cat in the _____) to a complex, hyperdimensional matrix where semantic associations across thousands of dimensions create amazing strength to the computation.

And those sudden jumps? They’re often illusions. On simple pass/fail tests, a model can look stuck at zero until it finally tips over the line and then it seems to leap forward. In reality, it was improving step by step all along. The so-called “light-bulb moment” is really just a quirk of how we measure progress. No emergence, just math.

Why “Emergence” Is So Seductive

Why does the language of “emergence” stick? Because it borrows from biology and philosophy. Life “emerges” from chemistry as consciousness “emerges” from neurons. It makes LLMs sound like they’re undergoing cognitive leaps. Some argue emergence is a hallmark of complex systems, and there’s truth to that. So, to a degree, it does capture the idea of surprising shifts.

But we need to be careful. What’s happening here is still math, not mind. Calling it emergence risks sliding into anthropomorphism, where sudden performance shifts are mistaken for genuine understanding. And it happens all the time.

A Useful Imitation

The 2022 paper gave us the language of “emergence.” The 2025 paper shows that what looks like emergence is really closer to a high-complexity phase change. It’s the same math and the same machinery. At small scales, positional tricks (word sequence) dominate. At large scales, semantic structures (multidimensional linguistic analysis) win out.

No insight, no spark of consciousness. It’s just a system reorganizing under new constraints. And this supports my larger thesis: What we’re witnessing isn’t intelligence at all, but anti-intelligence, a powerful, useful imitation that mimics the surface of cognition without the interior substance that only a human mind offers.

Artificial Intelligence Essential Reads

So the next time you hear about an LLM with “emergent ability,” don’t imagine Archimedes leaping from his bath. Picture water freezing. The same molecules, new structure. The same math, new mode. What looks like insight is just another phase of anti-intelligence that is complex, fascinating, even beautiful in its way, but not to be mistaken for a mind.



Source link

Continue Reading

Trending