AI Research

When AI Freezes Over | Psychology Today

Published 3 hours ago on September 1, 2025 | By John Nosta


A phrase I’ve often clung to when writing about artificial intelligence is one cloaked in a bit of techno-mystery. And I bet you’ve heard it as part of the lexicon of technology and imagination: “emergent abilities.” It’s common to hear that large language models (LLMs) exhibit these curious “emergent” behaviors, often coupled with linguistic partners like scaling and complexity. And yes, I’m guilty too.

In AI research, this phrase first took off after a 2022 paper that described how abilities seem to appear suddenly as models scale: tasks that a small model fails at completely, a larger model suddenly handles with ease. One day a model can’t solve math problems; the next day it can. It’s an irresistible story: machines having their own little Archimedean “eureka!” moments. It’s almost as if “intelligence” had suddenly switched on.

But I’m not buying into the sensation, at least not yet. A newer 2025 study suggests we should be more careful. Instead of magical leaps, what we’re seeing looks a lot more like the physics of phase changes.

Ice, Water, and Math

Think about water. At one temperature it’s liquid, at another it’s ice. The molecules don’t become something new—they’re always two hydrogens and an oxygen—but the way they organize shifts dramatically. At the freezing point, hydrogen bonds “loosely set” into a lattice, driven by those fleeting electrical charges on the hydrogen atoms. The result is ice: the same ingredients reorganized into a solid that’s curiously less dense than liquid water. And, yes, there’s even a touch of magic in the science, as ice floats. But that magic melts when you learn about van der Waals forces.

The same kind of shift shows up in LLMs and is often mislabeled as “emergence.” In small models, the easiest strategy is positional, where computation leans on word order and simple statistical shortcuts. It’s an easy trick that works just enough to reduce error. But scale things up by using more parameters and data, and the system reorganizes. The 2025 study by Cui shows that, at a critical threshold, the model shifts into semantic mode and relies on the geometry of meaning in its high-dimensional vector space. It isn’t magic; it’s optimization. Just as water molecules align into a lattice, the model settles into a more stable solution in its mathematical landscape.
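To see how a smooth change in scale can produce a discrete switch in strategy, here is a toy sketch. The loss curves and numbers below are invented for illustration and are not from Cui’s study: a cheap positional shortcut and a costlier semantic strategy each have a scale-dependent loss, and the system simply adopts whichever is lower.

```python
# Hypothetical toy model of a strategy "phase change." The loss
# curves are made up for illustration; real thresholds are model-
# and task-specific.

def positional_loss(scale):
    # Cheap shortcut: good early gains, but a floor it can't beat.
    return 0.40 + 0.5 / scale

def semantic_loss(scale):
    # Costly structure: poor at small scale, keeps improving.
    return 2.0 / scale

for scale in [1, 2, 4, 8, 16, 32]:
    mode = "positional" if positional_loss(scale) < semantic_loss(scale) else "semantic"
    print(f"scale={scale:2d}  positional={positional_loss(scale):.3f}  "
          f"semantic={semantic_loss(scale):.3f}  ->  {mode}")
```

Nothing here “decides” to become smarter; the optimizer just crosses the point where the semantic solution is cheaper, the way water crosses its freezing point.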

The Mirage of “Emergence”

That 2022 paper called these shifts emergent abilities. And yes, tasks like arithmetic or multi-step reasoning can look as though they “switch on.” But the model hasn’t suddenly “understood” arithmetic. What’s happening is that semantic generalization finally outperforms positional shortcuts once scale crosses a threshold. Yes, it’s a mouthful. But what’s happening here is that the computation shifts from simple “word position” in a prompt (like “the cat in the _____”) to a complex, hyperdimensional space where semantic associations across thousands of dimensions lend the computation its strength.

And those sudden jumps? They’re often illusions. On simple pass/fail tests, a model can look stuck at zero until it finally tips over the line, at which point it seems to leap forward. In reality, it was improving step by step all along. The so-called “light-bulb moment” is really just a quirk of how we measure progress. No emergence, just math.
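This measurement quirk is easy to reproduce. In the sketch below (an invented illustration, not data from either paper), per-digit accuracy on a ten-digit arithmetic task improves smoothly with scale, but an all-or-nothing exact-match score stays near zero and then appears to “switch on”:

```python
import math

DIGITS = 10  # an answer counts only if every digit is right

for scale in range(1, 11):
    per_digit = 1 - math.exp(-0.5 * scale)  # smooth, gradual improvement
    exact_match = per_digit ** DIGITS       # the all-or-nothing metric
    print(f"scale={scale:2d}  per-digit={per_digit:.3f}  exact-match={exact_match:.3f}")
```

The underlying skill climbs steadily; only the pass/fail metric makes it look like a leap.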

Why “Emergence” Is So Seductive

Why does the language of “emergence” stick? Because it borrows from biology and philosophy. Life “emerges” from chemistry; consciousness “emerges” from neurons. It makes LLMs sound like they’re undergoing cognitive leaps. Some argue emergence is a hallmark of complex systems, and there’s truth to that. So, to a degree, the word does capture the idea of surprising shifts.

But we need to be careful. What’s happening here is still math, not mind. Calling it emergence risks sliding into anthropomorphism, where sudden performance shifts are mistaken for genuine understanding. And it happens all the time.

A Useful Imitation

The 2022 paper gave us the language of “emergence.” The 2025 paper shows that what looks like emergence is really closer to a high-complexity phase change. It’s the same math and the same machinery. At small scales, positional tricks (word sequence) dominate. At large scales, semantic structures (multidimensional linguistic analysis) win out.

No insight, no spark of consciousness. It’s just a system reorganizing under new constraints. And this supports my larger thesis: What we’re witnessing isn’t intelligence at all, but anti-intelligence, a powerful, useful imitation that mimics the surface of cognition without the interior substance that only a human mind offers.

So the next time you hear about an LLM with “emergent ability,” don’t imagine Archimedes leaping from his bath. Picture water freezing. The same molecules, new structure. The same math, new mode. What looks like insight is just another phase of anti-intelligence: complex, fascinating, even beautiful in its way, but not to be mistaken for a mind.



Source link


AI Research

UCLA Researchers Enable Paralyzed Patients to Control Robots with Thoughts Using AI – Chosun Biz

Published 2 hours ago on September 1, 2025 | By The Editors



Source link


AI Research

Hackers exploit hidden prompts in AI images, researchers warn

Published 3 hours ago on September 1, 2025 | By News Desk


Cybersecurity firm Trail of Bits has revealed a technique that embeds malicious prompts into images processed by large language models (LLMs). The method exploits how AI platforms compress and downscale images for efficiency. While the original files appear harmless, the resizing process introduces visual artifacts that expose concealed instructions, which the model interprets as legitimate user input.
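A minimal sketch makes the mechanism concrete (my illustration of the general idea, not the Anamorpher tool itself): choose full-resolution pixel values so that each block of the image averages to a target value. A box-style downscale, which takes a per-block mean, then reconstructs the hidden pattern exactly, even though the full-size image looks like unstructured texture.

```python
import numpy as np

k = 8                                          # downscale factor
rng = np.random.default_rng(0)
hidden = rng.integers(60, 200, (32, 32))       # stand-in for hidden content

# Build a cover image whose k x k blocks look like noise but
# average exactly to the hidden values.
h, w = hidden.shape
cover = np.zeros((h * k, w * k))
for i in range(h):
    for j in range(w):
        block = rng.normal(0, 40, (k, k))      # noisy camouflage
        block += hidden[i, j] - block.mean()   # force the block mean
        cover[i*k:(i+1)*k, j*k:(j+1)*k] = block

# Box/area downscaling is a per-block mean, so the pattern survives.
downscaled = cover.reshape(h, k, w, k).mean(axis=(1, 3))
print(np.allclose(downscaled, hidden))         # True
```

A real attack must also keep pixel values in the valid 0-255 range and target the specific resampling filter the platform uses, which is why Anamorpher supports nearest-neighbor, bilinear, and bicubic variants.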

In tests, the researchers demonstrated that such manipulated images could direct AI systems to perform unauthorized actions. One example showed Google Calendar data being siphoned to an external email address without the user’s knowledge. Platforms affected in the trials included Google’s Gemini CLI, Vertex AI Studio, Google Assistant on Android, and Gemini’s web interface.


The approach builds on earlier academic work from TU Braunschweig in Germany, which identified image scaling as a potential attack surface in machine learning. Trail of Bits expanded on this research, creating “Anamorpher,” an open-source tool that generates malicious images using interpolation techniques such as nearest neighbor, bilinear, and bicubic resampling.

From the user’s perspective, nothing unusual occurs when such an image is uploaded. Yet behind the scenes, the AI system executes hidden commands alongside normal prompts, raising serious concerns about data security and identity theft. Because multimodal models often integrate with calendars, messaging, and workflow tools, the risks extend into sensitive personal and professional domains.


Traditional defenses such as firewalls cannot easily detect this type of manipulation. The researchers recommend layered defenses: previewing downscaled images, restricting input dimensions, and requiring explicit confirmation for sensitive operations.
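The “preview downscaled images” idea can be sketched in a few lines, assuming the defender can approximate the platform’s input size and resampling filter (both are assumptions here, and must be matched to the actual pipeline):

```python
from PIL import Image

MODEL_INPUT_SIZE = (512, 512)  # assumed; match the target platform

def preview_model_view(path: str) -> str:
    """Reproduce the platform's downscale so hidden artifacts
    become visible to the user before the image is uploaded."""
    img = Image.open(path)
    model_view = img.resize(MODEL_INPUT_SIZE, Image.BICUBIC)
    out = "model_view_preview.png"
    model_view.save(out)
    return out  # inspect this file: it is what the model will "see"
```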

“The strongest defense is to implement secure design patterns and systematic safeguards that limit prompt injection, including multimodal attacks,” the Trail of Bits team concluded.



Source link


AI Research

The future is already here: 5 tips for using AI

Published 3 hours ago on September 1, 2025 | By The Editors



Search, translation, health services, and smart assistants are here – but misuse of AI can risk privacy, safety, and reliability. Here are 5 tips for safe and smart use.

GPT Chat. The right prompt leads to the correct results (photo credit: SHUTTERSTOCK)

By LIOR NOVIK/MAARIV | SEPTEMBER 1, 2025 22:45






Source link
