
Tools & Platforms

AI is forcing the data industry to consolidate — but that’s not the whole story

The data industry is on the verge of a drastic transformation.

The market is consolidating. And if the deal flow in the past two months is any indicator — with Databricks buying Neon for $1 billion and Salesforce snapping up cloud data management firm Informatica for $8 billion — momentum is building for more.

The acquired companies range in size, age, and focus area within the data stack, but they all have one thing in common: each is being bought in the hope that its technology will be the missing piece needed to get enterprises to adopt AI.

On the surface, this strategy makes sense.

The success of AI companies, and AI applications, is determined by access to quality underlying data. Without it, there simply isn’t value — a belief shared by enterprise VCs. In a TechCrunch survey conducted in December 2024, enterprise VCs said data quality was a key factor in helping AI startups stand out and succeed. And while some of the companies involved in these deals aren’t startups, the sentiment still stands.

Gaurav Dhillon, co-founder and former CEO of Informatica, and current chairman and CEO of data integration company SnapLogic, echoed this in a recent interview with TechCrunch.

“There is a complete reset in how data is managed and flows around the enterprise,” Dhillon said. “If people want to seize the AI imperative, they have to redo their data platforms in a very big way. And this is where I believe you’re seeing all these data acquisitions, because this is the foundation to have a sound AI strategy.”

But is this strategy of snapping up companies built in a pre-ChatGPT world the way to increase enterprise AI adoption in today’s rapidly innovating market? That’s unclear, and Dhillon has doubts too.

“Nobody was born in AI; that’s only three years old,” Dhillon said, referring to the current post-ChatGPT AI market. “For a larger company, to provide AI innovations to re-imagine the enterprise, the agentic enterprise in particular, it’s going to need a lot of retooling to make it happen.”

Fragmented data landscape

The data industry has grown into a sprawling and fragmented web over the past decade — which makes it ripe for consolidation. All it needed was a catalyst. From 2020 through 2024 alone, more than $300 billion was invested into data startups across more than 24,000 deals, according to PitchBook data.

The data industry wasn’t immune to the trends seen in other industries like SaaS, where the venture swell of the last decade funded numerous startups that targeted only one specific area or, in some cases, were built around a single feature.

The current industry standard of bundling together many different data management solutions, each with its own specific focus, doesn’t work when you want AI to crawl through your data to find answers or build applications.

It makes sense that larger companies are looking to snap up startups that can plug into and fill existing gaps in their data stack. A perfect example of this trend is Fivetran’s recent acquisition of Census in May — which yes, was done in the name of AI.

Fivetran helps companies move their data from a variety of sources into cloud databases. For the first 13 years of its business, it didn’t let customers move this data back out of those databases, which is exactly what Census offers. Prior to this acquisition, then, Fivetran customers needed to work with a second company to create an end-to-end solution.

To be clear, this isn’t meant to throw shade at Fivetran. At the time of the deal, George Fraser, the co-founder and CEO of Fivetran, told TechCrunch that while moving data in and out of these warehouses seems like two sides of the same coin, it’s not that simple; the company even tried and abandoned an in-house solution to this problem.

“Technically speaking, if you look at the code underneath [these] services, they’re actually pretty different,” Fraser said at the time. “You have to solve a pretty different set of problems in order to do this.”
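
To make that difference concrete, here is a minimal, hypothetical sketch (not Fivetran’s or Census’s actual code) of the two directions of data movement: bulk-loading records into a warehouse, where throughput and deduplication are the main concerns, versus pushing those same records back out to a SaaS API, where rate limits, retries and per-record failures dominate. All table names, endpoints and fields are invented for illustration.

    # Hypothetical sketch contrasting warehouse ingestion with "reverse ETL".
    # Table names, endpoints, and fields are invented for illustration.
    import sqlite3
    import time
    import requests

    def ingest_to_warehouse(rows: list[dict], conn: sqlite3.Connection) -> None:
        # Warehouse-bound direction: bulk-load rows and let the database
        # handle deduplication via the primary key. Throughput is the main concern.
        conn.executemany(
            "INSERT OR REPLACE INTO contacts (id, email, plan) VALUES (:id, :email, :plan)",
            rows,
        )
        conn.commit()

    def sync_to_saas(rows: list[dict], api_url: str, token: str) -> None:
        # Reverse direction: push records one by one into a SaaS API, where
        # rate limits, retries, and per-record failures become the problem.
        for row in rows:
            for attempt in range(3):
                resp = requests.patch(
                    f"{api_url}/contacts/{row['id']}",
                    json={"email": row["email"], "plan": row["plan"]},
                    headers={"Authorization": f"Bearer {token}"},
                    timeout=10,
                )
                if resp.status_code == 429:  # rate limited: back off and retry
                    time.sleep(2 ** attempt)
                    continue
                resp.raise_for_status()  # other errors surface per record
                break

Even in this toy form, the outbound path carries state and failure-handling concerns the ingestion path never sees, which is the gap Census filled for Fivetran customers.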

This situation helps illustrate how the data market has transformed in the last decade. For Sanjeev Mohan, a former Gartner analyst who now runs SanjMo, his own data trend advisory firm, these types of scenarios are a big driver of the current wave of consolidation.

“This consolidation is being driven by customers being fed up with a multitude of products that are incompatible,” Mohan said. “We live in a very interesting world where there are a lot of different data storage solutions, you can do open source, they can go to Kafka, but the one area where we have failed is metadata. Dozens of these products are capturing some metadata but to do their job, it’s an overlap.”

Good for startups

The broader market plays a role here too, Mohan said. Data startups are struggling to raise capital, and an exit is better than having to wind down or load up on debt. For the acquirers, adding features gives them better pricing leverage and an edge over their peers.

“If Salesforce or Google isn’t acquiring these companies, then their competitors likely are,” Derek Hernandez, a senior emerging tech analyst at PitchBook, told TechCrunch. “The best solutions are being acquired currently. Even if you have an award-winning solution, I don’t know that the outlook for staying private ultimately wins over going to a larger [acquirer].”

This trend brings big benefits to the startups getting acquired. The venture market is starving for exits, and the current quiet period for IPOs doesn’t leave startups many other options. Getting acquired not only provides that exit, but in many cases gives these founding teams room to keep building.

Mohan agreed, adding that many data startups are feeling the pain of the sluggish exit market and the slow recovery of venture funding.

“At this point in time, acquisition has been a much more favorable exit strategy for them,” Hernandez said. “So I think, kind of both sides are very incentivized to get to the finish line on these. And I think Informatica is a good example of that, where even with a bit of a haircut from where Salesforce was talking to them last year, it’s still, you know, was the best solution, according to their board.”

What happens next

But doubt remains about whether this acquisition strategy will achieve the buyers’ goals.

As Dhillon pointed out, the database companies being acquired weren’t necessarily built to work easily with the rapidly changing AI market. Plus, if the company with the best data wins the AI world, will it make sense for data and AI companies to be separate entities?

“I think a lot of the value is in merging the major AI players with the data management companies,” Hernandez said. “I don’t know that a standalone data management company is particularly incentivized to remain so and, kind of like, play a third party between enterprises and AI solutions.”




Tools & Platforms

Committee Encourages Georgia Courts To Adopt, Govern AI

Georgia should begin pilot programs tailored to specific use cases of artificial intelligence across each class of court or jurisdiction, an ad hoc committee established by retired Chief Justice Michael P….


Tools & Platforms

Australia is set to get more AI data centres. Local communities need to be more involved

Data centres are the engines of the internet. These large, high-security facilities host racks of servers that store and process our digital data, 24 hours a day, seven days a week.

There are already more than 250 data centres across Australia. But there are set to be more as the federal government’s plans for digital infrastructure expansion gain traction. Tech giant Amazon recently pledged to invest an additional A$20 billion in new data centres across Sydney and Melbourne, alongside the development of three solar farms in Victoria and Queensland to help power them.

The New South Wales government also recently launched a new authority to fast-track approvals for major infrastructure projects.

These developments will help cater to the surging demand for generative artificial intelligence (AI). They will also boost the national economy and increase Australia’s digital sovereignty, in line with a global shift toward storing and managing data domestically under national laws.

But the everyday realities for communities living near these data centres aren’t as rosy. One key step toward mitigating the impacts is ensuring genuine community participation in shaping how Australia’s data centre future is developed.

The sensory experience of data centres

Data centres are large, warehouse-like facilities. Their footprint typically ranges from 10,000 to 100,000 square metres. They are set on sites with backup generators and thousands of litres of stored diesel and enclosed by high-security fencing. Fluorescent lighting illuminates them every hour of the day.

The air around a data centre’s server racks can reach temperatures of 35°C to 45°C. To prevent the servers from overheating, air conditioners hum continuously. In water-cooled facilities, water pipes transport gigalitres of cool water through the data centre each day to absorb the heat produced.

Data centres can place substantial strain on the local energy grid and water supply.

In some places where many data centres have been built, such as Northern Virginia in the United States and Dublin in Ireland, communities have reported rising energy and water prices. They have also reported water shortages and the degradation of valued natural and historical sites.

They have also experienced economic impacts. While data centre construction generates high levels of employment, these facilities tend to employ a relatively small number of staff when they are operating.

These impacts have prompted some communities to push back against new data centre developments. Some communities have even filed lawsuits to halt proposed projects due to concerns about water security, environmental harm and heavy reliance on fossil fuels.

A unique opportunity

To date, communities in Australia have been buffered from the impacts of data centres. This is largely because Australia has outsourced most of its digital storage and processing needs (and associated impacts) to data centres overseas.

But this is now changing. As Australia rapidly expands its digital infrastructure, the question of who gets to shape its future becomes increasingly important.

To avoid amplifying the social inequities and environmental challenges of data centres, the tech industry and governments across Australia need to include the communities who will live alongside these crucial pieces of digital infrastructure.

This presents Australia with a unique opportunity to set the standard for creating a sustainable and inclusive digital future.

A path to authentic community participation

Current planning protocols for data centres limit community input. But there are three key steps data centre developers and governments can take to ensure individual developments – and the broader data centre industry – reflect the values, priorities and aspirations of local communities.

1. Developing critical awareness about data centres

People want a greater understanding of what data centres are, and how they will affect their everyday lives.

For example, what will data centres look, sound and feel like to live alongside? How will they affect access to drinking water during the next drought? Or water and energy prices during the peak of summer or winter?

Genuinely engaging with these questions is a crucial step toward empowering communities to take part in informed conversations about data centre developments in their neighbourhoods.

2. Involving communities early in the planning process

Data centres are often designed using generic templates, with minimal adaptation to local conditions or concerns. Yet each development site has a unique social and ecological context.

By involving communities early in the planning process, developers can access invaluable local knowledge about culturally significant sites, biodiversity corridors, water-sensitive areas and existing sustainability strategies that may be overlooked in state-level planning frameworks.

This kind of local insight can help tailor developments to reduce harm, enhance benefits, and ensure local priorities are not just heard, but built into the infrastructure itself.

3. Creating more inclusive visions of Australia’s data centre industry

Communities understand the importance of digital infrastructure and are generally supportive of equitable digital access. But they want to see the data centre industry grow in ways that acknowledge their everyday lives, values and priorities.

To create a more inclusive future, governments and industry can work with communities to broaden their “clean” visions of digital innovation and economic prosperity to include the “messy” realities, uncertainties and everyday aspirations of those living alongside data centre developments.

This approach will foster greater community trust and is essential for building more complex, human-centred visions of the tech industry’s future.




Tools & Platforms

Google Launches Lightweight Gemma 3n, Expanding Edge AI Efforts


Google DeepMind has officially launched Gemma 3n, the latest version of its lightweight generative AI model designed specifically for mobile and edge devices — a move that reinforces the company’s emphasis on on-device computing.

The new model builds on the momentum of the original Gemma family, which has seen more than 160 million cumulative downloads since its launch last year. Gemma 3n introduces expanded multimodal support, a more efficient architecture, and new tools for developers targeting low-latency applications across smartphones, wearables, and other embedded systems.

“This release unlocks the full power of a mobile-first architecture,” said Omar Sanseviero and Ian Ballantyne, Google developer relations engineers, in a recent blog post.

Multimodal and Memory-Efficient by Design

Gemma 3n is available in two model sizes, E2B (5 billion parameters) and E4B (8 billion), with effective memory footprints similar to much smaller models — 2GB and 3GB respectively. Both versions natively support text, image, audio, and video inputs, enabling complex inference tasks to run directly on hardware with limited memory resources.

A core innovation in Gemma 3n is its MatFormer (Matryoshka Transformer) architecture, which allows developers to extract smaller sub-models or dynamically adjust model size during inference. This modular approach, combined with Mix-n-Match configuration tools, gives users granular control over performance and memory usage.

Google also introduced Per-Layer Embeddings (PLE), a technique that offloads part of the model to CPUs, reducing reliance on high-speed accelerator memory. This enables improved model quality without increasing the VRAM requirements.

Competitive Benchmarks and Performance

Gemma 3n E4B achieved an LMArena score exceeding 1300, the first model under 10 billion parameters to do so. The company attributes this to architectural innovations and enhanced inference techniques, including KV Cache Sharing, which speeds up long-context processing by reusing attention layer data.

Benchmark tests show up to a twofold improvement in prefill latency over the previous Gemma 3 model.

In speech applications, the model supports on-device speech-to-text and speech translation via a Universal Speech Model-based encoder, while a new MobileNet-V5 vision module offers real-time video comprehension on hardware such as Google Pixel devices.

Broader Ecosystem Support and Developer Focus

Google emphasized the model’s compatibility with widely used developer tools and platforms, including Hugging Face Transformers, llama.cpp, Ollama, Docker, and Apple’s MLX framework. The company also launched a MatFormer Lab to help developers fine-tune sub-models using custom parameter configurations.
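
As a rough illustration of that ecosystem support, the snippet below sketches how a developer might load a Gemma 3n checkpoint through the Hugging Face Transformers pipeline API. The model identifier is an assumption based on the naming of earlier Gemma releases and should be confirmed on the Hugging Face hub (Gemma weights also require accepting Google’s license). Gemma 3n is multimodal, and image or audio inputs would go through a different pipeline task; only the text path is shown.

    # Hypothetical sketch: running a Gemma 3n variant locally with Hugging Face
    # Transformers. The model ID below is assumed, not confirmed.
    from transformers import pipeline

    generator = pipeline(
        "text-generation",
        model="google/gemma-3n-E2B-it",  # assumed ID for the instruction-tuned E2B variant
        device_map="auto",               # spread layers across GPU/CPU as memory allows
    )

    prompt = "In two sentences, explain why on-device inference matters for mobile apps."
    output = generator(prompt, max_new_tokens=128)
    print(output[0]["generated_text"])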

“From Hugging Face to MLX to NVIDIA NeMo, we’re focused on making Gemma accessible across the ecosystem,” the authors wrote.

As part of its community outreach, Google introduced the Gemma 3n Impact Challenge, a developer contest offering $150,000 in prizes for real-world applications built on the platform.

Industry Context

Gemma 3n reflects a broader trend in AI development: a shift from cloud-based inference to edge computing as hardware improves and developers seek greater control over performance, latency, and privacy. Major tech firms are increasingly competing not just on raw power, but on deployment flexibility.

Although models such as Meta’s LLaMA and Alibaba’s Qwen3 series have gained traction in the open source domain, Gemma 3n signals Google’s intent to dominate the mobile inference space by balancing performance with efficiency and integration depth.

Developers can access the models through Google AI Studio, Hugging Face, or Kaggle, and deploy them via Vertex AI, Cloud Run, and other infrastructure services.
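
For hosted access rather than local weights, a call through the Gemini API (the API behind Google AI Studio) might look like the sketch below, assuming the google-genai Python SDK and that the model is exposed under an identifier along the lines of gemma-3n-e4b-it; the exact model name should be checked in AI Studio.

    # Hypothetical sketch: calling a hosted Gemma 3n model through the Gemini API
    # using the google-genai SDK. The model name is an assumption.
    from google import genai

    client = genai.Client(api_key="YOUR_API_KEY")  # key created in Google AI Studio
    response = client.models.generate_content(
        model="gemma-3n-e4b-it",  # assumed identifier for the hosted E4B variant
        contents="List three edge-AI use cases for a wearable device.",
    )
    print(response.text)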

For more information, visit the Google site.

About the Author



John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI and future tech. He’s been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he’s written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at [email protected].






