It is very common when we talk about technology to focus on the new and exciting stuff. Who doesn’t want to think about how they use cloud, analytics, Kubernetes and AI? That’s where the fun is, and there’s nothing wrong with that; if you’re not thinking about those things as part of a strategic look ahead, you’re probably doing yourself and your organisation a disservice.

But the danger of this is that the basics of IT can sometimes get missed, and that can have a real impact. One such area is the often discussed 3-2-1 backup rule. A British reader of a certain age may immediately think of a certain dustbin when we mention that rule (sorry everyone else!), but this has nothing to do with Saturday night 70s TV; it is an established data protection process that ensures your valuable data is properly protected.
The 3-2-1 rule
The rule is made up of four core data protection considerations.
- Maintain at least three copies of your data and applications.
- Store your backups on at least two different types of media.
- Keep one of the backups in a different location.
- Verify your recovery plan has zero errors.
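To make the rule concrete, here is a minimal, purely illustrative sketch in Python of what a 3-2-1 layout could look like for a single dataset: the production copy, a second copy on a different media type (an external drive in this example), and a third copy in another location. The paths are hypothetical placeholders; in practice your backup tooling of choice would do this work, but the shape of the rule is the same.

```python
import hashlib
import shutil
from pathlib import Path

# Hypothetical locations - substitute your own production, secondary-media
# and offsite paths (the offsite copy would normally be a cloud bucket,
# tape library or remote datacentre, not just another folder).
PRODUCTION = Path("/data/production/customers.db")
SECOND_MEDIA = Path("/mnt/external_drive/backups/customers.db")   # different media type
OFFSITE = Path("/mnt/offsite_replica/backups/customers.db")       # different location

def sha256(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

def take_backups() -> None:
    """Copy production data to two further locations and verify each copy."""
    source_hash = sha256(PRODUCTION)
    for target in (SECOND_MEDIA, OFFSITE):
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(PRODUCTION, target)
        # Verify the copy immediately - a copy you cannot restore from is not a backup.
        if sha256(target) != source_hash:
            raise RuntimeError(f"Backup verification failed for {target}")

if __name__ == "__main__":
    take_backups()
    print("3 copies exist, on 2 media types, with 1 offsite - and they verify.")
```

The point of the sketch is the immediate verification step: each of the four considerations above is only worth anything if you can prove the copies are good.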
As sensible as each of these elements sounds, there is interestingly some debate about the relevance of the 3-2-1 rule, with the world of data and its protection changing so rapidly. While I understand the debate, let me share why I think this rule is still hugely relevant to today’s enterprise.
Maintain at least three copies
Why three? If your data is crucial, and whose isn’t, then we can’t take chances; a minimum of three copies makes sense: our production data plus two further copies. It’s also important to consider their placement. For example, if we are taking storage snapshots, do we want our production data and both copies all in the same place? What if we lose that production storage repository with our data and backups on it?
Keep one backup in a different location
Alongside keeping at least two backup copies of your production data, it’s important to mitigate the risk of losing the location in which that data is held by keeping at least one copy in an alternate location. A “DR” datacentre, a USB drive, tape and increasingly, of course, the public cloud are all suitable offsite locations. Make sure you consider carefully where you hold backup copies; for obvious reasons, keeping your backups and live data in the same location is not the most robust of plans.
Two different media types
When there is discussion around the 3-2-1 rule, it is the idea of two media types that creates the most debate. Why is this necessary? If I have a storage array, for example, that can take snapshots and replicate them to multiple locations, then why would I need to also have an alternate media type? Surely multiple copies in multiple locations are enough for anyone’s data protection?
Maybe it is. But what if you have such a solution and then discover you’ve had corruption that went unseen and unreported? A silent corruption at the array level can make your snapshots unusable, and not only in production: we have also replicated those same corrupted datasets to our multiple locations. All of this remains unknown and unreported until the day you find yourself without access to key production data, and when you try to recover it from an alternate location you find the same issue and the data is no longer available there either.
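The only real defence against that kind of silent failure is verification that is independent of the system doing the replication. As a purely illustrative sketch (the paths and manifest format here are hypothetical, not taken from any particular product), a periodic checksum comparison against a manifest recorded at backup time is one simple way to spot corruption before it quietly propagates to every copy:

```python
import hashlib
import json
from pathlib import Path

# Hypothetical manifest of known-good checksums, recorded when the data
# was last validated (for example at backup time). Paths are placeholders.
MANIFEST = Path("/backups/manifest.json")
BACKUP_ROOT = Path("/backups/latest")

def sha256(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_backup() -> list[str]:
    """Compare every file in the backup against its recorded checksum."""
    expected = json.loads(MANIFEST.read_text())
    corrupted = []
    for relative_path, known_hash in expected.items():
        candidate = BACKUP_ROOT / relative_path
        if not candidate.exists() or sha256(candidate) != known_hash:
            corrupted.append(relative_path)
    return corrupted

if __name__ == "__main__":
    bad = verify_backup()
    if bad:
        print(f"Possible silent corruption or missing files: {bad}")
    else:
        print("All backup files match their recorded checksums.")
```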
This is where our alternate media comes in. In fact, I’d go further than that: not only alternate media, but an alternate backup methodology that takes a copy of your data and holds it in a different format with an independent means of access. For example, perhaps you’re running NetApp in all of your locations, using their snapshots and replicating to multiple locations across your data fabric. Enhance this by using Veeam to take a copy of that data in one of those locations, so that you are independent of NetApp; if you ever have a system-level issue, you have a way of recovering that data completely independently and, if needed, to alternate systems.
Does it matter? Well, it does when it does! The reason for writing this post and sharing this rather unexciting and “traditional” part of IT design is that I have recently seen the impact of a scenario very similar to the one described above, where a business, through no fault of their own, found themselves in exactly that position. As we dealt with the fallout we discussed the 3-2-1 rule, which surprisingly was new to them.
The impact on them has been major, and a lot of time, money and disruption to their business could have been avoided by following this simple rule.
Is the 3-2-1 rule for everyone? Is it still relevant? In most cases, the answer is an emphatic yes. But you can only answer that for your own organisation, so at least ask the question, understand the best way to protect YOUR data, and ensure your protection plans meet the needs of your enterprise. It’s better to have the debate and make an informed choice than not to and be hit by the consequences.