The term “always-on business” may be marketing-speak, but it is grounded in truth. Few businesses these days can afford to slow or stop applications during a long backup process, or to merely hope that a restore will finish on time. Mission-critical and business-critical applications need to maintain performance and availability 24×7, or nearly so.
The complications include aging legacy backup products, fast-growing data sets, global availability requirements, and the spread of virtualization. These sea changes have created data centers where high availability and data protection clash instead of working together.
You certainly have a choice of newer technologies, some from the backup old guard and many from innovative startups. Assuming you have the budget you need, the real challenge is not finding acceptable data protection, but choosing among so many options.
Physical, Virtual and Cloud
Some data centers benefit most from backup appliances, physical, virtual, or ideally a combination of both. Backup software remains a critical piece of data protection, especially newer backup products with deduplication and compression, parallel performance, incremental and synthetic full backups, snapshots, and replication. And if I may mix a metaphor, the cloud also muddies the waters of data protection.
The cloud is an important part of the data center, not only for storage but for failover. You need to make certain that the cloud's owner properly protects all of your stored data. With public or hybrid clouds that will mean the service provider; in a private cloud it may well mean you.
The goal then is twofold: 1) to guarantee data protection to every application based on its value and priority, and 2) to maintain application high availability any time, any place, even during data protection periods. Let’s look at the elements of such a data protection plan.
- Validate backup. Backup validation falls into two camps: 1) Did the backup complete? And 2) Did it complete correctly? The answer to the first is yea or nay; you do or do not have a completed backup. If you have ever thought a regular backup was going along swimmingly until you went to restore it… right. Validate your backups. To verify integrity, many backup programs compute checksums or hashes of the copied data; IT can then confirm data integrity without re-reading the original file by comparing checksums, which should be identical. You should be able to run reports that prove both completion and accuracy, for internal quality control and for compliance.
- Rapid restore. Availability means minimal downtime. It also requires that if something does happen, you can restore the application and data before reaching the limits of downtime tolerance. Note the phrase “restoring the application” as well as the data. Protecting your data with backup, snapshots, replication, or whatever you choose will allow you to restore the data to your servers, but you must also be able to restore your application. This is why it is important to be able to do a bare metal recovery on physical application servers, and to restore server images on both physical and virtual ones. Recovery point objectives (RPO) and recovery time objectives (RTO) are also critical to acceptable restore times. The restore technology is not solely responsible for meeting the service-level agreements: you must also consider your available bandwidth, especially if you are restoring from the cloud or a remote site; your storage hardware performance; and how quickly and competently your data center administrators respond to downtime threats.
- Test disaster recovery. Validation technologies work on specific backup functions, but the data center administrator needs to know how the entire data center is functioning. You do that by consistently monitoring application and data health throughout the data center. Arguably this is easier with a comprehensive single-vendor platform, but a single product is not always possible or warranted given your application mix. Even if you are using multiple data protection programs, ascertain that all of them monitor and report on their data protection processes.
- Protect performance. Protecting application availability during data protection operations requires a high-performance, unobtrusive process. Not only does this protect applications against stuttering and downtime, it also lets you back up data more often without impacting application performance. You can draw closer to the ideal of Tier-1 continuous or near-continuous protection for the highest possible level of application and data availability.
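The checksum comparison described in the first bullet is simple to illustrate. Here is a minimal sketch in Python; the function names are my own, and real backup products do this internally at much larger scale:

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Hash a file in chunks so large backups don't exhaust memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def validate_backup(source_path, backup_path):
    """The copy is valid only if its checksum matches the source's."""
    return sha256_of(source_path) == sha256_of(backup_path)
```

Because the checksum of the source can be recorded at backup time, later verification runs need only re-hash the copy and compare, without touching the original file.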
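RPO and RTO, from the rapid-restore bullet, reduce to two simple time windows. This sketch (my own illustration, not any product's API) checks a restore event against both objectives:

```python
from datetime import datetime, timedelta

def meets_slas(last_good_backup, outage_start, restore_complete, rpo, rto):
    """Check one restore event against its service levels.

    RPO bounds acceptable data loss: the gap between the last good
    backup and the start of the outage. RTO bounds acceptable
    downtime: the gap between the outage and the completed restore.
    """
    data_loss = outage_start - last_good_backup
    downtime = restore_complete - outage_start
    return data_loss <= rpo and downtime <= rto
```

For example, hourly backups with a 30-minute restore budget would be expressed as `rpo=timedelta(hours=1), rto=timedelta(minutes=30)`. Note that the downtime window includes the human response time mentioned above, not just the restore technology itself.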
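When multiple data protection programs each report separately, the monitoring goal above amounts to rolling their job results into one view. A minimal sketch, assuming each product can export job records in a simple dict shape of my own invention:

```python
def summarize_protection(jobs):
    """Roll job records from multiple backup products into one view.

    `jobs` is a list of dicts, each an assumed record shape:
    {"app": name, "completed": bool, "verified": bool}.
    An app is at risk if its backup didn't finish or didn't verify.
    """
    at_risk = sorted(j["app"] for j in jobs
                     if not (j["completed"] and j["verified"]))
    return {"jobs": len(jobs), "at_risk": at_risk, "healthy": not at_risk}
```

A real environment would feed this from each product's own reporting interface; the point is that every protection tool in the mix must expose enough status to be aggregated this way.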
Even in an improving economy, many companies attempt to save money by keeping IT staff small and technology purchases low. Yet it is vital to make a business case to senior management for a data protection platform that maintains application availability, meets prioritized service levels, and enables admins to monitor backup and recovery times across an entire environment.