AI Research
New AWS research shows one Australian business adopts AI every three minutes

The adoption of artificial intelligence (AI) is rapidly accelerating across Australia, with one business every three minutes adopting AI solutions between 2024 and 2025, according to the latest edition of Amazon Web Services’ (AWS) ‘Unlocking Australia’s AI Potential’ report. In total, 1.3 million or 50% of Australian businesses are now regularly using AI, showing a year-on-year growth rate of 16%.
These businesses demonstrate the productivity and economic potential of AI adoption: 95% report an average revenue increase of 34%, 86% have already experienced productivity gains, and 94% expect average cost savings of 38%.
The report, conducted by independent consultancy, Strand Partners, and commissioned by AWS, reveals that while AI adoption continues to accelerate in Australia, there is a growing gap between startups and large, more mature businesses in the depth of their AI adoption. This AI gap risks creating a two-tier economy in which tech-driven startups innovate more rapidly and outpace their established and less agile competitors.
A two-tier economy emerging
Australian startups, in particular, are enthusiastic and innovative in their use of AI, adopting AI’s most advanced uses far more rapidly than more established companies. 81% of startups in Australia are using AI in some way, of which 42% are building entirely new AI-driven products, leveraging the technology to its full potential.
In contrast, 61% of large enterprises are using AI, but only 18% are delivering new AI products or services, and only 22% have a comprehensive AI strategy.
This gap in innovation risks the emergence of a two-tier economy, where startups are surging ahead of large enterprises in AI integration and adoption. Without deeper integration, these businesses risk missing out on the full potential of AI, falling behind more agile competitors, and driving a two-tier economy that will shape Australia’s prosperity for decades.
Widespread but basic adoption of AI across Australian businesses
While AI adoption is increasing, most Australian businesses are not yet harnessing its most advanced uses, with 58% focused primarily on basic use cases, like driving efficiencies and streamlining processes through chatbots.
Just 17% of Australian businesses are at the intermediate stage of integrating AI across various business functions, and only 24% have reached the most transformative stage of AI integration, where AI is no longer just a tool but a core part of product development, decision-making, and business models that drive innovation.
“While it’s encouraging to see a growing number of businesses in Australia innovate with AI and realise revenue, productivity, and cost benefits, our research has uncovered that barriers such as lack of skills and regulatory uncertainty remain, impacting the ability for larger enterprises to deepen their use of AI,” said Michelle Hardie, Head of Professional Services, ANZ, AWS.
“To accelerate Australia’s competitive edge on the global AI stage, it is essential that governments and industry take steps to address these barriers to unlock Australia’s full AI potential. At AWS, we are supporting the broad adoption of AI through our new AI Spring Australia program that is focused on building AI capability and skills across different sectors and industries, including large enterprises, as well as through infrastructure investments and skills training initiatives, including our recent investment of AU$20 billion in Australia.”
Tackling barriers to deeper AI adoption
To accelerate Australia’s competitive edge on the global AI stage, governments and industry must take steps to address the barriers businesses face in unlocking their full AI potential.
A lack of skilled personnel is the leading reason (cited by 39%) Australian businesses give for not adopting or expanding their use of AI. Many report having the technology and the vision, but being unable to find the people to bring them to life.
This could impact Australia’s global competitiveness and limit economic growth, as 51% of businesses identified AI literacy as being important for future hiring, and only 37% of businesses feel prepared with their workforce’s current skillset. Funding is also a particularly important factor for startups in Australia, with 65% saying access to venture capital is crucial in creating an environment for growth.
A clear, streamlined regulatory landscape is also necessary to give businesses the confidence they need to adopt and invest in emerging technologies. The research found that only 24% of businesses are familiar with the Australian government’s consultation on AI regulation and could explain how the proposed legislation would operate.
Those surveyed also estimated they spent 30% of their IT budget on compliance-related costs, such as data privacy and protection compliance, legal consultations, and cybersecurity measures. 73% expect this figure to increase in the next three years.
The path forward for AI innovation
The report uncovered three priority actions to overcome these barriers and unlock the full potential of AI across startups and large enterprises to avoid the emergence of a ‘two-tier’ economy:
- Accelerate private sector digital adoption through skills efforts: A key barrier to AI adoption is not ambition, it’s capability. While 91% of businesses view AI-related skills as essential, only 37% feel their workforce is currently prepared. This gap highlights an urgent need for industry-specific digital skills programs, certifications, and practical training pathways to develop a digitally-skilled workforce that can drive AI-led innovation and growth.
- Create a clear picture for Australia’s pro-growth regulation: A regulatory environment that fosters experimentation and provides certainty will be key to enabling AI adoption across all sectors. Ensuring that AI regulation is predictable and innovation-friendly – and maintains a lower-cost compliance model – will be critical to maintaining and strengthening Australia’s position as a global leader in AI-driven growth.
- Modernise public sector technology: 86% of businesses say they are more likely to adopt AI if the government leads. By using public procurement and prioritising digital transformation in critical areas like healthcare and education, the government can demonstrate the real-world benefits of AI to citizens, build public trust, and stimulate broader demand for innovative solutions.
AWS’s commitment to unlocking Australia’s AI potential
This important research shows that Australia has the ambition, talent, and tools to lead in AI. But it will take bold, coordinated action across government, industry, and education to ensure every business – of all sizes – can benefit from AI’s transformative potential.
In Australia, AWS is committed to realising the ambitions of local customers, partners, and communities, and to creating economic opportunities in the country. We reinforced this in June by announcing plans to invest AU$20 billion from 2025 to 2029 to expand Australian data centre infrastructure and strengthen the nation’s AI future.
AWS has also trained more than 400,000 people in Australia in digital skills since 2017, and will continue to support Australia’s current and future workforce through generative AI programs like AWS AI Spring Australia, AWS Generative AI Accelerator, and AWS AI Launchpad. AWS’s various programs are designed to help both individuals – whether students, career changers, or those new to the cloud – and businesses build job-ready skills and pursue opportunities in the digital economy.
With growing concerns about the development of a ‘two-tier’ AI economy, more needs to be done to equip the workforce with the right skills at scale so organisations can innovate and grow in an AI-powered future.
Learn more about the Unlocking Australia’s AI Potential report.
Hackers exploit hidden prompts in AI images, researchers warn

Cybersecurity firm Trail of Bits has revealed a technique that embeds malicious prompts into images processed by large language models (LLMs). The method exploits how AI platforms compress and downscale images for efficiency. While the original files appear harmless, the resizing process introduces visual artifacts that expose concealed instructions, which the model interprets as legitimate user input.
In tests, the researchers demonstrated that such manipulated images could direct AI systems to perform unauthorized actions. One example showed Google Calendar data being siphoned to an external email address without the user’s knowledge. Platforms affected in the trials included Google’s Gemini CLI, Vertex AI Studio, Google Assistant on Android, and Gemini’s web interface.
The approach builds on earlier academic work from TU Braunschweig in Germany, which identified image scaling as a potential attack surface in machine learning. Trail of Bits expanded on this research, creating “Anamorpher,” an open-source tool that generates malicious images using interpolation techniques such as nearest neighbor, bilinear, and bicubic resampling.
From the user’s perspective, nothing unusual occurs when such an image is uploaded. Yet behind the scenes, the AI system executes hidden commands alongside normal prompts, raising serious concerns about data security and identity theft. Because multimodal models often integrate with calendars, messaging, and workflow tools, the risks extend into sensitive personal and professional domains.
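The core trick can be sketched in miniature with pure Python (no real platform or payload is involved; the 8×8 "image" and its pixel values are invented for illustration): a nearest-neighbour downscaler samples only a handful of source pixels, so an attacker can plant content at exactly those positions while the full-size image remains almost entirely blank.

```python
def downscale_nearest(img, src, dst):
    """Nearest-neighbour downscale: each output pixel copies the source
    pixel nearest to the centre of its sampling block."""
    scale = src / dst
    return [
        [img[int((y + 0.5) * scale)][int((x + 0.5) * scale)]
         for x in range(dst)]
        for y in range(dst)
    ]

SRC, DST = 8, 2
scale = SRC // DST

# A "benign" all-white image (255) with dark payload pixels (0) planted
# only at the positions the downscaler will sample.
attack = [[255] * SRC for _ in range(SRC)]
for by in range(DST):
    for bx in range(DST):
        attack[int((by + 0.5) * scale)][int((bx + 0.5) * scale)] = 0

small = downscale_nearest(attack, SRC, DST)
white_ratio = sum(v == 255 for row in attack for v in row) / SRC**2
print(f"full-size image is {white_ratio:.0%} white")  # 94% white
print(small)                                          # [[0, 0], [0, 0]]
```

Scaled up to real image sizes, the same principle lets text that is effectively invisible at full resolution dominate the downscaled copy the model actually processes; according to the researchers, Anamorpher automates this for bilinear and bicubic filters as well.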
Traditional defenses such as firewalls cannot easily detect this type of manipulation. The researchers recommend a combination of layered security, previewing downscaled images, restricting input dimensions, and requiring explicit confirmation for sensitive operations.
“The strongest defense is to implement secure design patterns and systematic safeguards that limit prompt injection, including multimodal attacks,” the Trail of Bits team concluded.
When AI Freezes Over | Psychology Today

A phrase I’ve often clung to regarding artificial intelligence is one that is also cloaked in a bit of techno-mystery. And I bet you’ve heard it as part of the lexicon of technology and imagination: “emergent abilities.” It’s common to hear that large language models (LLMs) have these curious “emergent” behaviors that are often coupled with linguistic partners like scaling and complexity. And yes, I’m guilty too.
In AI research, the phrase first took off after a 2022 paper described how abilities seem to appear suddenly as models scale: tasks that a small model fails at completely, a larger model suddenly handles with ease. One day a model can’t solve math problems, the next day it can. It’s an irresistible story: machines having their own little Archimedean “eureka!” moments. It’s almost as if “intelligence” has suddenly switched on.
But I’m not buying into the sensation, at least not yet. A newer 2025 study suggests we should be more careful. Instead of magical leaps, what we’re seeing looks a lot more like the physics of phase changes.
Ice, Water, and Math
Think about water. At one temperature it’s liquid, at another it’s ice. The molecules don’t become something new—they’re always two hydrogens and an oxygen—but the way they organize shifts dramatically. At the freezing point, hydrogen bonds “loosely set” into a lattice, driven by those fleeting electrical charges on the hydrogen atoms. The result is ice, the same ingredients reorganized into a solid that’s curiously less dense than liquid water. And, yes, there’s even a touch of magic in the science as ice floats. But that magic melts when you learn about Van der Waals forces.
The same kind of shift shows up in LLMs and is often mislabeled as “emergence.” In small models, the easiest strategy is positional, where computation leans on word order and simple statistical shortcuts. It’s an easy trick that works just enough to reduce error. But scale things up by using more parameters and data, and the system reorganizes. The 2025 study by Cui shows that, at a critical threshold, the model shifts into semantic mode and relies on the geometry of meaning in its high-dimensional vector space. It isn’t magic, it’s optimization. Just as water molecules align into a lattice, the model settles into a more stable solution in its mathematical landscape.
The Mirage of “Emergence”
That 2022 paper called these shifts emergent abilities. And yes, tasks like arithmetic or multi-step reasoning can look as though they “switch on.” But the model hasn’t suddenly “understood” arithmetic. What’s happening is that semantic generalization finally outperforms positional shortcuts once scale crosses a threshold. Yes, it’s a mouthful. But what’s happening is that the computation shifts from relying on simple word position in a prompt (like, “the cat in the _____”) to a complex, hyperdimensional space where semantic associations across thousands of dimensions lend remarkable power to the computation.
And those sudden jumps? They’re often illusions. On simple pass/fail tests, a model can look stuck at zero until it finally tips over the line and then it seems to leap forward. In reality, it was improving step by step all along. The so-called “light-bulb moment” is really just a quirk of how we measure progress. No emergence, just math.
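That measurement quirk is easy to reproduce. In this toy sketch (all numbers are invented), a per-step skill improves smoothly with scale, but an all-or-nothing metric that requires every step of a ten-step problem to succeed sits near zero before appearing to “switch on”:

```python
STEPS = 10  # the task counts as passed only if all 10 steps succeed

def per_step_skill(scale):
    # A made-up saturating curve: skill improves smoothly with scale.
    return scale / (scale + 64)

def task_score(scale):
    # Pass/fail metric: credit only when every sub-step is correct.
    return per_step_skill(scale) ** STEPS

for scale in (4, 16, 64, 256, 1024, 4096):
    print(f"scale {scale:4d}  per-step {per_step_skill(scale):.2f}  "
          f"task {task_score(scale):.3f}")
```

The per-step skill climbs steadily at every scale, yet the task score looks flat and then leaps – the “light-bulb moment” lives in the metric, not the model.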
Why “Emergence” Is So Seductive
Why does the language of “emergence” stick? Because it borrows from biology and philosophy. Life “emerges” from chemistry as consciousness “emerges” from neurons. It makes LLMs sound like they’re undergoing cognitive leaps. Some argue emergence is a hallmark of complex systems, and there’s truth to that. So, to a degree, it does capture the idea of surprising shifts.
But we need to be careful. What’s happening here is still math, not mind. Calling it emergence risks sliding into anthropomorphism, where sudden performance shifts are mistaken for genuine understanding. And it happens all the time.
A Useful Imitation
The 2022 paper gave us the language of “emergence.” The 2025 paper shows that what looks like emergence is really closer to a high-complexity phase change. It’s the same math and the same machinery. At small scales, positional tricks (word sequence) dominate. At large scales, semantic structures (multidimensional linguistic analysis) win out.
No insight, no spark of consciousness. It’s just a system reorganizing under new constraints. And this supports my larger thesis: What we’re witnessing isn’t intelligence at all, but anti-intelligence, a powerful, useful imitation that mimics the surface of cognition without the interior substance that only a human mind offers.
So the next time you hear about an LLM with “emergent ability,” don’t imagine Archimedes leaping from his bath. Picture water freezing. The same molecules, new structure. The same math, new mode. What looks like insight is just another phase of anti-intelligence that is complex, fascinating, even beautiful in its way, but not to be mistaken for a mind.
MIT Researchers Develop AI Tool to Improve Flu Vaccine Strain Selection

Insider Brief
- MIT researchers have developed VaxSeer, an AI system that predicts which influenza strains will dominate and which vaccines will offer the best protection, aiming to reduce guesswork in seasonal flu vaccine selection.
- Using deep learning on decades of viral sequences and lab data, VaxSeer outperformed the World Health Organization’s strain choices in 9 of 10 seasons for H3N2 and 6 of 10 for H1N1 in retrospective tests.
- Published in Nature Medicine, the study suggests VaxSeer could improve vaccine effectiveness and may eventually be applied to other rapidly evolving health threats such as antibiotic resistance or drug-resistant cancers.
MIT researchers have unveiled an artificial intelligence tool designed to improve how seasonal influenza vaccines are chosen, potentially reducing the guesswork that often leaves health officials a step behind the fast-mutating virus.
The study, published in Nature Medicine, was authored by lead researcher Wenxian Shi along with Regina Barzilay, Jeremy Wohlwend, and Menghua Wu. It was supported in part by the U.S. Defense Threat Reduction Agency and MIT’s Jameel Clinic.
According to MIT, the system, called VaxSeer, was developed by scientists at MIT’s Computer Science and Artificial Intelligence Laboratory and the MIT Jameel Clinic for Machine Learning in Health. It uses deep learning models trained on decades of viral sequences and lab results to forecast which flu strains are most likely to dominate and how well candidate vaccines will work against them. Unlike traditional approaches that evaluate single mutations in isolation, VaxSeer’s large protein language model can capture the combined effects of multiple mutations and model shifting viral dominance more accurately.
“VaxSeer adopts a large protein language model to learn the relationship between dominance and the combinatorial effects of mutations,” Shi noted. “Unlike existing protein language models that assume a static distribution of viral variants, we model dynamic dominance shifts, making it better suited for rapidly evolving viruses like influenza.”
In retrospective tests covering ten years of flu seasons, VaxSeer’s strain recommendations outperformed those of the World Health Organization in nine of ten cases for H3N2 influenza, and in six of ten cases for H1N1, researchers said. In one notable example, the system correctly identified a strain for 2016 that the WHO did not adopt until the following year. Its predictions also showed strong correlation with vaccine effectiveness estimates reported by U.S., Canadian, and European surveillance networks.
The tool works in two parts: one model predicts which viral strains are most likely to spread, while another evaluates how effectively antibodies from vaccines can neutralize them in common hemagglutination inhibition assays. These predictions are then combined into a coverage score, which estimates the likely effectiveness of a candidate vaccine months before flu season begins.
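As a rough illustration of that two-part structure (this is not VaxSeer’s actual code; the strain names and probabilities are hypothetical), a coverage score can be computed as dominance-weighted predicted neutralization:

```python
# Predicted share of circulation for each strain next season (sums to 1).
dominance = {"A": 0.6, "B": 0.3, "C": 0.1}

# neutralization[vaccine][strain]: predicted effectiveness (0-1) of the
# vaccine's antibodies against that strain, as in an HI-assay model.
neutralization = {
    "vax_A": {"A": 0.9, "B": 0.4, "C": 0.2},
    "vax_B": {"A": 0.5, "B": 0.8, "C": 0.3},
}

def coverage(vaccine):
    """Dominance-weighted expected protection across circulating strains."""
    return sum(dominance[s] * neutralization[vaccine][s] for s in dominance)

best = max(neutralization, key=coverage)
print(best, round(coverage(best), 2))  # vax_A 0.68
```

Under this toy scoring, the candidate that neutralizes the strain forecast to dominate wins even though its rival is better against a minor strain – the intuition behind scoring vaccines months before the season begins.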
“Given the speed of viral evolution, current therapeutic development often lags behind. VaxSeer is our attempt to catch up,” Barzilay noted.