AI hackbots are a future bulwark against AI-enabled hackers

In the two years since world leaders, tech bros, and Elon Musk met at Bletchley Park for the first-ever global AI summit, many of their unsettling predictions about the weaponisation of AI by cybercriminals have become reality. A recent assessment by the UK’s National Cyber Security Centre concluded that all types of cyber threat actors – state and non-state, skilled and less skilled – are now using AI.

While state-sponsored cyber threat actors were the first to leverage AI’s potential for hacking, the NCSC says AI has also made it easier for “novice cyber criminals, hackers-for-hire and hacktivists to carry out effective access and information gathering operations.”

With ‘ransomware-as-a-service’ used by the criminals behind several of the cyberattacks that hit leading British retailers in 2025, the NCSC’s warning of the “commoditisation of AI-enabled hacking capability” is chilling. Yet, despite its obvious potential for misuse, AI can also give a decisive advantage to those of us who are committed to keeping malicious hackers at bay. In fact, AI technology is perfectly suited to one of the central pillars of any organisation’s cyber defence – the ceaseless search for vulnerabilities in its IT infrastructure.

Traditionally, this function has been performed periodically by ethical hackers, skilled cybersecurity professionals who probe and test for weak spots in digital assets, such as webpages and web servers, that are collectively known as the ‘attack surface’. In a process called penetration testing, ethical hackers stage simulated attacks to identify chinks in the organisation’s defences. They then design and implement fixes to close any gaps before a bad actor can exploit them.

While manual ‘pentesting’ remains a vital component of effective cybersecurity, it can now be turbocharged by the addition of AI-powered ethical ‘hackbots’. These are automated systems that run 24/7 to identify and eliminate potential vulnerabilities in the attack surface. But unlike a conventional, algorithmic program, AI hackbots possess two huge advantages: they work autonomously, and they learn from the systems they interact with, adapting their behaviour according to the information they glean.

Underpinned by a Large Language Model, hackbots have an enormous knowledge base and can spot all common vulnerabilities while also detecting odd behaviour and malfunctions. Crucially, they’re also capable of scanning applications constantly, adapting and exploring potential lines of attack in the way a hostile hacker, whether human or AI, would.

Hackbots as heroes

At a basic level, AI hackbots automate many of the repetitive and time-consuming tasks that a skilled ethical hacker would carry out. But their value far exceeds that of a mere timesaver – their adaptive, autonomous research capability enables them to uncover previously unknown security flaws. Deployed correctly, hackbots will integrate seamlessly with the existing tools used by ethical hackers, serving as a powerful force multiplier rather than just a fillip to human productivity.

This, of course, raises the prospect of the perfect partnership between a highly trained ethical hacker, attuned to the traits, morals and motivations of malicious hackers, and an ethical hackbot able to continuously scan vast attack surfaces, learning and locating potential weaknesses.

But with the hackbot serving as a formidable research assistant, the flesh-and-blood cybersecurity professional will be able to focus more attention on the challenges that demand creativity, morality and human problem-solving skills. Over time, this partnership may evolve into a symbiosis in which the human supervises, validates and guides the operations of hackbots, and manages the important ethical implications of autonomous testing. Fighting fire with fire it may be. But responsibility will always rest with the human firefighter.

Andre Baptista is the co-founder of the ethical hacking platform Ethiack and a visiting professor at the University of Porto.




Committee Encourages Georgia Courts To Adopt, Govern AI


Georgia should begin pilot programs tailored to specific use cases of artificial intelligence across each class of court or jurisdiction, an ad hoc committee established by retired Chief Justice Michael P….





Australia is set to get more AI data centres. Local communities need to be more involved


Data centres are the engines of the internet. These large, high-security facilities host racks of servers that store and process our digital data, 24 hours a day, seven days a week.

There are already more than 250 data centres across Australia. But more are set to be built as the federal government’s plans for digital infrastructure expansion gain traction. Tech giant Amazon recently pledged to invest an additional A$20 billion in new data centres across Sydney and Melbourne, alongside the development of three solar farms in Victoria and Queensland to help power them.

The New South Wales government also recently launched a new authority to fast-track approvals for major infrastructure projects.

These developments will help cater to the surging demand for generative artificial intelligence (AI). They will also boost the national economy and increase Australia’s digital sovereignty – a global shift toward storing and managing data domestically under national laws.

But the everyday realities for communities living near these data centres aren’t as rosy. One key step toward mitigating the impacts is ensuring genuine community participation in shaping how Australia’s data-centre future is developed.

The sensory experience of data centres

Data centres are large, warehouse-like facilities. Their footprint typically ranges from 10,000 to 100,000 square metres. They are set on sites with backup generators and thousands of litres of stored diesel and enclosed by high-security fencing. Fluorescent lighting illuminates them every hour of the day.

Inside, a data centre can reach temperatures of 35°C to 45°C. To prevent the servers from overheating, air conditioners hum continuously. In water-cooled facilities, pipes transport gigalitres of cool water through the data centre each day to absorb the heat produced.

Data centres can place substantial strain on the local energy grid and water supply.

In some places where many data centres have been built, such as Northern Virginia in the United States and Dublin in Ireland, communities have reported rising energy and water prices. They have also reported water shortages and the degradation of valued natural and historical sites.

They have also experienced economic impacts. While data centre construction generates high levels of employment, these facilities tend to employ a relatively small number of staff when they are operating.

These impacts have prompted some communities to push back against new data centre developments. Some communities have even filed lawsuits to halt proposed projects due to concerns about water security, environmental harm and heavy reliance on fossil fuels.

A unique opportunity

To date, communities in Australia have been buffered from the impacts of data centres. This is largely because Australia has outsourced most of its digital storage and processing needs (and associated impacts) to data centres overseas.

But this is now changing. As Australia rapidly expands its digital infrastructure, the question of who gets to shape its future becomes increasingly important.

To avoid amplifying the social inequities and environmental challenges of data centres, the tech industry and governments across Australia need to include the communities who will live alongside these crucial pieces of digital infrastructure.

This presents Australia with a unique opportunity to set the standard for creating a sustainable and inclusive digital future.

A path to authentic community participation

Current planning protocols for data centres limit community input. But there are three key steps data centre developers and governments can take to ensure individual developments – and the broader data centre industry – reflect the values, priorities and aspirations of local communities.

1. Developing critical awareness about data centres

People want a greater understanding of what data centres are, and how they will affect their everyday lives.

For example, what will data centres look, sound and feel like to live alongside? How will they affect access to drinking water during the next drought? Or water and energy prices during the peak of summer or winter?

Genuinely engaging with these questions is a crucial step toward empowering communities to take part in informed conversations about data centre developments in their neighbourhoods.

2. Involving communities early in the planning process

Data centres are often designed using generic templates, with minimal adaptation to local conditions or concerns. Yet each development site has a unique social and ecological context.

By involving communities early in the planning process, developers can access invaluable local knowledge about culturally significant sites, biodiversity corridors, water-sensitive areas and existing sustainability strategies that may be overlooked in state-level planning frameworks.

This kind of local insight can help tailor developments to reduce harm, enhance benefits, and ensure local priorities are not just heard, but built into the infrastructure itself.

3. Creating more inclusive visions of Australia’s data centre industry

Communities understand the importance of digital infrastructure and are generally supportive of equitable digital access. But they want to see the data centre industry grow in ways that acknowledge their everyday lives, values and priorities.

To create a more inclusive future, governments and industry can work with communities to broaden their “clean” visions of digital innovation and economic prosperity to include the “messy” realities, uncertainties and everyday aspirations of those living alongside data centre developments.

This approach will foster greater community trust and is essential for building more complex, human-centred visions of the tech industry’s future.




Google Launches Lightweight Gemma 3n, Expanding Edge AI Efforts

Google DeepMind has officially launched Gemma 3n, the latest version of its lightweight generative AI model designed specifically for mobile and edge devices — a move that reinforces the company’s emphasis on on-device computing.

The new model builds on the momentum of the original Gemma family, which has seen more than 160 million cumulative downloads since its launch last year. Gemma 3n introduces expanded multimodal support, a more efficient architecture, and new tools for developers targeting low-latency applications across smartphones, wearables, and other embedded systems.

“This release unlocks the full power of a mobile-first architecture,” said Omar Sanseviero and Ian Ballantyne, Google developer relations engineers, in a recent blog post.

Multimodal and Memory-Efficient by Design

Gemma 3n is available in two model sizes, E2B (5 billion parameters) and E4B (8 billion), with effective memory footprints similar to much smaller models — 2GB and 3GB respectively. Both versions natively support text, image, audio, and video inputs, enabling complex inference tasks to run directly on hardware with limited memory resources.

A core innovation in Gemma 3n is its MatFormer (Matryoshka Transformer) architecture, which allows developers to extract smaller sub-models or dynamically adjust model size during inference. This modular approach, combined with Mix-n-Match configuration tools, gives users granular control over performance and memory usage.

Google also introduced Per-Layer Embeddings (PLE), a technique that offloads part of the model to CPUs, reducing reliance on high-speed accelerator memory. This enables improved model quality without increasing the VRAM requirements.

Competitive Benchmarks and Performance

Gemma 3n E4B achieved an LMArena score exceeding 1300, the first model under 10 billion parameters to do so. The company attributes this to architectural innovations and enhanced inference techniques, including KV Cache Sharing, which speeds up long-context processing by reusing attention layer data.

Benchmark tests show up to a twofold improvement in prefill latency over the previous Gemma 3 model.

In speech applications, the model supports on-device speech-to-text and speech translation via a Universal Speech Model-based encoder, while a new MobileNet-V5 vision module offers real-time video comprehension on hardware such as Google Pixel devices.

Broader Ecosystem Support and Developer Focus

Google emphasized the model’s compatibility with widely used developer tools and platforms, including Hugging Face Transformers, llama.cpp, Ollama, Docker, and Apple’s MLX framework. The company also launched a MatFormer Lab to help developers fine-tune sub-models using custom parameter configurations.

“From Hugging Face to MLX to NVIDIA NeMo, we’re focused on making Gemma accessible across the ecosystem,” the authors wrote.

As part of its community outreach, Google introduced the Gemma 3n Impact Challenge, a developer contest offering $150,000 in prizes for real-world applications built on the platform.

Industry Context

Gemma 3n reflects a broader trend in AI development: a shift from cloud-based inference to edge computing as hardware improves and developers seek greater control over performance, latency, and privacy. Major tech firms are increasingly competing not just on raw power, but on deployment flexibility.

Although models such as Meta’s LLaMA and Alibaba’s Qwen3 series have gained traction in the open source domain, Gemma 3n signals Google’s intent to dominate the mobile inference space by balancing performance with efficiency and integration depth.

Developers can access the models through Google AI Studio, Hugging Face, or Kaggle, and deploy them via Vertex AI, Cloud Run, and other infrastructure services.
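For developers taking the Hugging Face route, the sketch below shows one minimal way to load a Gemma 3n checkpoint with the Transformers pipeline API. The model ID google/gemma-3n-E2B-it and the chat-style prompt are assumptions rather than details confirmed in this article; check the Hugging Face model card for the exact identifier, licence terms and the minimum transformers version required.

```python
# Minimal sketch: running a Gemma 3n checkpoint via Hugging Face Transformers.
# The model ID "google/gemma-3n-E2B-it" is an assumption -- confirm it on the
# Hugging Face model card, along with the required transformers version.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="google/gemma-3n-E2B-it",  # assumed ID for the ~2GB-footprint E2B variant
    device_map="auto",               # place layers on GPU/CPU as available
)

# Recent Transformers releases accept chat-style message lists directly.
messages = [{"role": "user", "content": "Explain edge AI in one sentence."}]
result = generator(messages, max_new_tokens=64)
print(result[0]["generated_text"])
```

The pipeline API trades fine-grained control for brevity; developers who want the MatFormer sub-model extraction or per-layer offloading described above would work with the lower-level model classes and the MatFormer Lab tooling instead.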

For more information, visit the Google site.

About the Author



John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI and future tech. He’s been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he’s written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at [email protected].






