Cast your mind back to the physical server era.
If you ran a mixed environment, and almost everyone did, you were running multiple backup tools. One agent for Windows servers. Something different for Linux. A separate product for your Oracle databases. Another for SQL Server. If you had Exchange, that was its own conversation entirely.
Each tool had its own console, its own agents, its own licensing model, its own support contract, and its own way of failing at 2am on a Sunday.
The backup administrator’s job was less about data protection and more about juggling.
Virtualisation Changed Everything. Briefly.
Then virtualisation arrived, and for a window of time, it felt like the problem was being solved.
VMware changed the architecture. Instead of backing up individual operating systems and applications, you could protect the workload at the hypervisor layer. Suddenly, one or two tools could cover most of what an organisation needed. Agentless backup. Image-level recovery. A single pane of glass that actually meant something.
For many IT teams, this was the first time data protection felt manageable.
It was also, in hindsight, a temporary window of calm.
The Explosion Nobody Planned For
Cloud changed the calculus. Slowly at first, then all at once.
Workloads that once lived on-premises started moving to AWS, Azure, and Google Cloud. SaaS platforms became core business infrastructure. Sales teams standardised on Salesforce. Finance moved to NetSuite. Collaboration shifted to Microsoft 365 and Google Workspace. Engineering teams built in GitHub, Jira, and Confluence.
Then came containers. Kubernetes. Modern managed databases. Edge computing. Each category represents real business data, and each one opens a gap in whatever backup strategy already existed.
And here is where the pattern repeats.
When a gap appears in data protection, organisations fill it with a point solution. A dedicated SaaS backup tool for Microsoft 365. Another one for Salesforce. A cloud-native backup service from the hyperscaler. A separate product for Kubernetes. Each decision made in isolation, each one entirely rational in the moment.
The result is a modern backup estate that, in terms of complexity, looks remarkably like the physical server era. Except now the tools are SaaS-based and the invoices are monthly.
Short-Term Relief. Long-Term Pain.
The problem with filling gaps with point solutions is that each one solves a narrow problem while contributing to a wider one.
Visibility becomes fragmented. Proving compliance across your entire data estate requires pulling reports from multiple consoles. Understanding your true recovery posture requires aggregating information from tools that were never designed to talk to each other.
Cost structures become unpredictable. Every point solution has its own pricing model, its own renewal cycle, and its own vendor relationship to manage. The cumulative cost of covering a modern hybrid estate with point solutions consistently exceeds what organisations budgeted for.
Recovery confidence suffers. When you need to recover quickly, the last thing you want is to be working across five different tools with five different recovery workflows. Speed and consistency are casualties of complexity.
And with each new workload category that emerges, the problem compounds.
More Tools. More Chaos. No Thread Connecting Any of It.
There is something that gets overlooked in the point solution conversation, and it is bigger than cost or vendor management.
Nothing connects these tools together.
Every point solution is an island. It has its own definition of a successful backup, its own alerting thresholds, its own recovery workflow, and its own way of reporting status. There is no consistency across them. There is no repeatability. When something goes wrong, and it will, there is no common playbook because there is no common platform.
Runbooks that work in one tool mean nothing in the next. An engineer who knows how to recover from one system has to context-switch completely to recover from another. In a real incident, under pressure, that friction costs time. Time costs data. Data costs the business.
And here is where the exponential problem becomes clear.
Each SaaS application you add to your estate does not add one problem. It adds a problem multiplied by every other tool already in the stack. More applications means more tools. More tools means more gaps between them. More gaps means more places where data falls through unprotected, unmonitored, and unrecoverable.
The complexity curve does not grow linearly. It compounds.
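To make that compounding concrete, here is a rough back-of-the-envelope model. It is an illustration only, not a formal metric or a measurement of any real estate: if every backup tool is an island, each new tool adds a potential seam against every tool already in the stack, so the number of tool-to-tool seams grows roughly as n(n-1)/2 rather than as n.

```python
# Illustrative only: a rough model of how tool-to-tool seams grow as point
# solutions are added. Hypothetical numbers, not measurements of any estate.

def seams(tool_count: int) -> int:
    """Pairwise gaps between tools that share no common platform: n*(n-1)/2."""
    return tool_count * (tool_count - 1) // 2

for n in (2, 5, 10, 15):
    print(f"{n:>2} point solutions -> {seams(n):>3} pairwise seams")

# Output: 1, 10, 45, 105 -- the curve compounds rather than growing linearly.
```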
Now factor in AI.
Gartner forecasts that 40% of enterprise applications will embed task-specific AI agents by 2026, up from less than 5% in 2025. That is not a gradual rollout. That is a step change happening inside two years, inside the same estate that already has fragmented data protection.
And these agents do not just read data. They act on it. They write to databases. They modify records. They automate decisions that previously required human sign-off. Deloitte notes that agentic AI usage is poised to rise sharply, but only one in five companies currently has a mature model for governance.
That gap matters enormously for data protection. An AI agent that corrupts a database or deletes a record set is not a theoretical risk. It is a new and largely unplanned-for recovery scenario that most organisations have no consistent answer for, because their data protection estate was not built with it in mind.
The tools protecting your Microsoft 365 environment were not designed with AI agent activity in mind. Neither was the separate tool covering your CRM, or the one protecting your cloud databases. Each will handle the problem differently, or not at all.
This is the shape of the problem organisations are walking into. Not just more tools. More tools, with no consistency between them, in an environment where the pace of change is accelerating and the sources of data risk are multiplying faster than point solutions can keep pace.
What a Modern Data Resilience Platform Actually Looks Like
The answer is not simply to find a single vendor and consolidate everything onto their platform. That approach trades one form of complexity for another, typically replacing tool sprawl with vendor lock-in.
What organisations should be evaluating is something more specific.
A data resilience platform built for the modern era needs to be genuinely flexible on three dimensions.
Flexible on what it protects. On-premises infrastructure, hyperscaler workloads, SaaS applications, modern databases, and whatever comes next. Coverage cannot require a new point solution every time the estate evolves.
Flexible on where it stores data. The platform should support cloud storage targets across providers, on-premises targets, and immutable options without dictating which vendor you use. Forcing data into a proprietary storage layer is lock-in dressed up as simplicity.
Flexible on recovery objectives. Different workloads carry different tolerances. A platform should let you set and enforce policies at a granular level without requiring you to manage each workload class in a separate tool.
And critically: no lock-in to any hardware vendor or any vendor-owned cloud. Organisations that are locked into proprietary appliances or proprietary cloud storage discover the cost of that decision slowly, usually at renewal time.
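As a sketch of what those three dimensions could look like in practice, here is a hypothetical policy definition expressed in one place rather than one console per workload class. The workload names, field names, storage targets, and values are illustrative assumptions, not any vendor's actual API or schema.

```python
# Hypothetical sketch: per-workload recovery objectives defined as policy in a
# single place. Field names and values are illustrative assumptions, not any
# vendor's actual configuration schema.

from dataclasses import dataclass

@dataclass
class ProtectionPolicy:
    workload: str           # what is being protected
    rpo_hours: float        # maximum tolerable data loss
    rto_hours: float        # maximum tolerable downtime
    storage_target: str     # any provider or on-premises target; not dictated
    immutable_copies: bool  # keep an immutable copy for ransomware recovery

policies = [
    ProtectionPolicy("microsoft-365",  rpo_hours=4,    rto_hours=8,  storage_target="s3://org-backups",      immutable_copies=True),
    ProtectionPolicy("salesforce",     rpo_hours=12,   rto_hours=24, storage_target="azure-blob://backups",  immutable_copies=True),
    ProtectionPolicy("postgres-prod",  rpo_hours=0.25, rto_hours=1,  storage_target="on-prem-object-store",  immutable_copies=True),
    ProtectionPolicy("dev-kubernetes", rpo_hours=24,   rto_hours=48, storage_target="gcs://org-backups",     immutable_copies=False),
]

# One review loop across every workload class -- the opposite of five consoles.
for p in sorted(policies, key=lambda p: p.rpo_hours):
    print(f"{p.workload:<16} RPO {p.rpo_hours:>5}h  RTO {p.rto_hours:>4}h  -> {p.storage_target}")
```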
The Pattern Is Clear. The Decision Is a Choice.
The backup tool sprawl of the physical server era was not the result of bad decisions. It was the result of rational decisions made in isolation, without a long-term platform strategy.
The same dynamic is playing out again today. Each point solution procurement is a rational response to an immediate gap. In aggregate, they create the same complexity organisations spent the virtualisation era trying to escape.
The difference now is that organisations can see the pattern coming. The question is whether they act on it before the sprawl compounds, or after.
The tools to do it differently exist. Building around a data resilience platform with genuine flexibility, rather than a patchwork of point solutions, is a long-term decision that pays off in a way individual tool procurement never does.
We have been here before. We know how this story ends.
It is part of why I joined HYCU. More on that in the next post.
