Chapter 1: Introduction: Why the Backup 1.0 Mentality Is Killing You
Chapter 2: 12 Horror Stories—We Thought We Had a Backup!
Chapter 3: Whole-Server Backups
Chapter 4: Rethinking Exchange Server Backups
Chapter 5: SQL Server Backups
Chapter 6: SharePoint Server Backups
Chapter 7: Rethinking Virtualization Server Backups
Chapter 8: Getting More from Backups: Other Concerns and Capabilities
Chapter 9: Keeping Your Backups: Storage Architecture
Chapter 10: What’s Your Disaster Recovery Plan?
Chapter 11: Upgrading Your Backup Mentality: Is It Really Worth It?
Chapter 12: Tales from the Trenches: My Life with Backup 2.0
Are you still making backups the old-fashioned way? Whether you back up Windows, Exchange Server, SharePoint, SQL Server, or virtualization servers, the old-school backup mentality may not be serving your actual needs. In The Definitive Guide to Windows Application and Server Backup 2.0, IT author and Microsoft MVP Don Jones explores the "Backup 1.0" mentality and its shortcomings, and proposes a new "Backup 2.0" way of thinking. He examines the true business needs behind backup (namely, quick recovery and as little lost work as possible), and proposes new techniques, using leading-edge technologies that are available today, that do a better job of meeting today's business and technology needs. With special chapters devoted to Exchange Server, SQL Server, virtualization, and SharePoint, you'll learn about new techniques and technologies designed to take backups out of the 1960s and into the 21st century.
Throughout computing history, backups have been practical, simple procedures: Copy a bunch of data from one place to another. Complexities arise with "always-on" data like the databases used by Exchange Server and SQL Server, and various techniques have been developed to access that form of in-use data; however, backups have ultimately always been about a fairly simple, straightforward copy. Even magnetic tape—much more advanced than in the 1960s, of course—is still a primary form of storage for many organizations’ backups.
I call it "Backup 1.0"—essentially the same way we’ve all been making backups since the beginning of time, with the only major changes being the storage medium we use. Although many bright engineers have come up with clever variations on the Backup 1.0 theme, it’s still basically the same. And I say it’s no longer enough. We need to re-think why we do backups, and invent Backup 2.0—a new way to back up our data that meets today’s business needs. Surprisingly, many of the techniques and technologies that support Backup 2.0 already exist—we just need to identify them, bring them together, and start using them.
In the first chapter, we will take a look at the pitfalls of Backup 1.0, and then go on to outline the basic questions that you must ask of any backup program. We will conclude with a few different backup approaches as well as an overview of what lies ahead with Backup 2.0!
Horror stories. Tales from the trenches. Case studies. Call them what you will, I love reading them. They’re a look into our colleagues’ real‐world lives and troubles, and an opportunity for us to learn something from mistakes—without having to make the actual mistakes ourselves. In this chapter, I’m going to share stories about backups to highlight problems that you yourself may have encountered. For each, I’ll look at some of the root causes for those problems, and suggest ways that a modernized “Backup 2.0” approach might help solve the problem. Some of these stories are culled from online blog postings (and I’ve provided the original URL when that is the case), while others are from my own personal correspondence with hundreds of administrators over the years. One or two are even from my own experiences in data centers. Names, of course, have been changed to protect the innocent—and those guilty of relying on the decades‐old Backup 1.0 mentality.
The important takeaway is that each of these stories offers a valuable lesson. Can you see yourself and your own experiences in these short tales? See if you can take away some valuable advice for avoiding these scenarios in the future.
Recovered from the horror stories of the previous chapter? Ready to start ensuring solid backups in your environment, the Backup 2.0 way? That’s what this chapter is all about, and what I call “whole server backups” is definitely the right place to begin. This is where I’ll address the most common kinds of servers: file servers, print servers, directory servers, and even Web servers—the workhorses of the enterprise. I’ll show you what some of the native solutions look like, discuss some of the related Backup 1.0‐style techniques and scenarios, and detail why they just don’t cut it for today’s businesses. Then I’ll assemble a sort of Backup 2.0 wish list: All the things you want in your environment for backup and recovery. I’ll outline which of those things are available today, and wrap up by applying those things to some real‐world server roles to show how those new techniques and technologies impact real‐world scenarios.
Ask anyone in the organization what their most mission-critical piece of infrastructure is, and you'll probably hear "email" as a common answer. Or you might not: Many folks take email for granted, although they expect it to be as available and reliable as a telephone dial tone. Users who have never suffered an email outage almost can't imagine doing so; once they do experience an outage, they make sure everyone knows how much they're suffering. As one of the most popular solutions for corporate email, Exchange Server occupies a special place in your infrastructure. It's expected to be "always on," always available, and always reliable. Disasters simply can't be tolerated. What's more, users' own mistakes and negligence become very much your problem, meaning you have to offer recovery services that are quick and effective, even when you're recovering something that a user mistakenly deleted on their own.
In this chapter, you will learn about Exchange Server's native backup and restore capabilities as well as the challenges that they present. By examining the old-style Backup 1.0 solutions for Exchange recovery, you will understand what works and what doesn't in these traditional solutions. This chapter goes on to detail how Backup 2.0 will improve restore scenarios and disaster recovery for Exchange Server, and closes with a list of some Exchange-specific concerns such as de-duplication, data corruption, and search and e-Discovery.
More and more companies are using Microsoft SQL Server these days—and in many cases, they don't even realize it. While plenty of organizations deliberately install SQL Server, many businesses find themselves using SQL Server as a side effect, because SQL Server is the data store for some line-of-business application, technology solution, and so on. In fact, “SQL sprawl” makes SQL Server one of the most challenging server products from a backup perspective: Not only is SQL Server challenging in and of itself, but you wind up with tons of instances!
Here's what I see happening in many organizations: The company has one or more “official” SQL Server installations, and the IT team is aware of the need to back up these instances on a regular basis. But there are also numerous “stealth” installations of SQL Server, often running on the “Express” edition of SQL Server, that the IT team is unaware of. The data stored in these “stealth” installations is no less mission-critical than the data in the “official” installations, but in many cases, that data isn't being protected properly. Dealing with this “sprawl” is just one of the unique challenges that Backup 2.0 faces in SQL Server.
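As a concrete (and entirely illustrative) example of hunting down that sprawl: the SQL Server Browser service answers queries on UDP port 1434 with a semicolon-delimited description of each instance on a machine. Here is a minimal sketch of parsing such a response; the sample payload and the `parse_ssrp` helper name are my own invention for illustration, not from the book:

```python
def parse_ssrp(payload):
    """Parse the semicolon-delimited instance list returned by the
    SQL Server Browser service: each instance is a run of key;value
    pairs, and instances are separated by double semicolons."""
    instances = []
    for chunk in payload.strip(";").split(";;"):
        if not chunk:
            continue
        parts = chunk.split(";")
        # Pair up alternating keys and values into a dict per instance.
        instances.append(dict(zip(parts[0::2], parts[1::2])))
    return instances

# Hypothetical response from one machine running a stealth Express instance.
sample = ("ServerName;APPSRV01;InstanceName;SQLEXPRESS;IsClustered;No;"
          "Version;10.50.2500.0;tcp;1433;;")
for inst in parse_ssrp(sample):
    print(inst["ServerName"], inst["InstanceName"], inst.get("Version"))
```

In practice, a discovery script would broadcast the query on the subnet and feed each machine's reply through a parser like this; any instance that turns up but isn't on the backup schedule is a candidate "stealth" installation.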
Microsoft's SharePoint Server has probably had the most variety in its backup and restore solutions. The first version of the product was essentially a modified version of Exchange Server, and used the same database engine that Exchange did at the time. Today, SharePoint Server uses multiple databases to store its content, configuration, search catalogs, and more—and even stores some critical files as simple disk files. All that data stored in different places helps make SharePoint Server one of the most difficult Microsoft server products to work with in terms of business continuity and disaster recovery. It becomes even more complex when you start dealing with SharePoint Server farms—collections of servers designed to serve up the same content for load-balancing purposes. Is it even possible to move beyond the Backup 1.0 mindset and start using Backup 2.0 when it comes to SharePoint?
Microsoft defines three levels of data recovery for SharePoint Server:
Virtualization is the hot buzzword for today's businesses, and it's changing everything we thought we knew about backup and recovery. I actually consider that to be a Really Good Thing, because it means that, at least with regard to virtualization, we don't have to un-learn as many Backup 1.0 habits in order to see how a Backup 2.0 technique might be more effective.
Oddly, it's almost like virtualization vendors know that backups are complicated and that native solutions are usually deficient in key capabilities, because most virtualization vendors (we're talking Citrix, VMware, and Microsoft here) don't really provide any native backup capabilities at all. Sure, most of them have backup solutions they'll sell you, but most of those are still good old Backup 1.0-style, "get it done during the evening backup window" solutions.
But let's take a step back and look at why virtualization backups have the potential to be more challenging than a physical server backup.
Problems and Challenges
As illustrated in Figure 7.1, a virtualized server runs on a virtualization host, such as VMware vSphere or Microsoft Hyper-V. The host controls the hardware of the physical machine, while the virtualized server has its own, virtualized hardware environment. The virtual server's hard disks are generally just files sitting on the physical host. I realize this is probably nothing new to you; I'm just setting some context.
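Because a guest's disks are just big files on the host, it's tempting to "back up the VM" by simply copying those files. Here is a hedged sketch of what that host-side view looks like; the directory layout and the exact set of extensions are illustrative assumptions, not prescriptions from the book:

```python
import os

# Common virtual-disk file extensions (VMware .vmdk, Hyper-V .vhd/.vhdx,
# qemu/Xen .qcow2); adjust for your hypervisor.
DISK_EXTS = {".vmdk", ".vhd", ".vhdx", ".qcow2"}

def find_virtual_disks(root):
    """Walk a host directory and return (path, size_in_bytes) for each
    virtual-disk file found. Note: copying one of these while the guest
    is running risks capturing an inconsistent image, which is exactly
    why host-level backups need snapshot or quiescing support."""
    disks = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if os.path.splitext(name)[1].lower() in DISK_EXTS:
                full = os.path.join(dirpath, name)
                disks.append((full, os.path.getsize(full)))
    return disks
```

A quick walk like this makes the backup problem visible: a handful of very large, constantly changing files, rather than the thousands of small files a traditional file-level backup agent expects.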
Figure 7.1: Virtual server running on a virtual host.
There's more to a backup strategy than just grabbing the right files and making sure you can restore them in a pinch, although that's obviously a big part of it. A solid backup strategy also concerns itself with disaster recovery in a variety of scenarios. You need to make sure your backup system itself has some redundancy—nothing's worse than being without a backup system! Because backups inherently involve data retention, in this day and age, you also have to concern yourself with the safety and security of that data as well as any legal concerns about its retention. That's what this chapter is all about: Dealing with the "extras" that surround a backup strategy. I'll look at how traditional Backup 1.0 techniques addressed these extras, and suggest ways in which we might rethink them for a Backup 2.0 world.
Disaster Recovery and Bare-Metal Recovery
I've already written quite a bit about disaster recovery, or bare-metal recovery, which is what you do when an entire server dies and you need to restore it. In the bad, bad, bad old days, disaster recovery always started with re-installing the server's operating system (OS) from scratch, then installing some kind of backup solution, then restoring everything else from tape—a time-consuming process, because spinning data off of tape isn't exactly the fastest activity in the world.
Even the Backup 1.0 mentality got sick of that process, though. Today, most third-party backup solutions provide some kind of "restore CD," which can be used to boot a failed server. The stripped-down OS on the CD, often based on DOS, WinPE, or a proprietary OS, is smart enough to find the backup server and receive data streamed across the network; depending on the solution, it might also be smart enough to read data directly from an attached tape drive. Figure 8.1 shows an example of one of these recovery disks in action.
Storage has long been a difficult companion for backups. The first backups were stacks of punched cards, although magnetic tape quickly came onto the scene to store more data and permit somewhat faster recovery. Ever since, we've struggled with where to put our backups. Buying a new terabyte file server inevitably meant buying another terabyte of backup capacity; some companies used—and still use—file filtering technologies to reduce the amount of data on their file servers, primarily to help control the amount of data that has to be backed up.
That's the ugly thing that happens when technology—and its limitations—start to drive the business rather than the business driving the technology. Sure, keeping errant MP3 files off your file servers might be a good idea for any number of reasons, but in general shouldn't users be able to put any business-related data onto a file server without worrying that it might not be backed up? Isn't all our business data worth backing up?
This is where storage comes into the Backup 2.0 picture. For all the great things that Backup 2.0 can do in terms of backing up our data and allowing fast and flexible restore operations, it's useless if it needs more space than we can give it.
Much of this book has focused on backup and restore rather than disaster recovery. The difference? I regard "restoring" as something you do with a single file, or a group of files, or a single email message, or an entire mailbox—something less than an entire server. It might be a "disaster" that a file was accidentally deleted, but it's typically a disaster for one or two people—not the entire business. A true disaster, in my view, is when an entire server goes down—or worse, when an entire data center is affected.
The reason much of this book has focused on restores is that, frankly, it's what we spend more time doing. It's not all that common for an entire server to fail, or for an entire data center to encounter a disaster. It definitely happens, but what happens a lot more is someone needing you to pull a single file or mailbox from backups.
In this chapter, however, I'm going to focus entirely on disaster recovery. Disasters do happen—floods, hurricanes, power surges, and so forth can take out entire servers or even entire data centers. One time, I had to deal with a complete week-long power outage when someone ran their pickup truck into the transformer on the corner of our office's property—talk about a disaster. In fact, I'll use that story as a kind of running example of where Backup 1.0 really let me down.
We're going to stick with our Backup 2.0 manifesto because it's just as applicable to disaster recovery as it is to a single-file recovery:
Backups should prevent us from losing any data or losing any work, and ensure that we always have access to our data with as little downtime as possible.
What's it going to cost you to implement Backup 2.0, and is it worth the time and money? Obviously, I can't give you specific numbers because those numbers will depend on exactly what you're backing up, how distributed your network is, and a number of other factors. But what I can do is show you how to calculate that cost, and to calculate the cost of just staying with your existing backup infrastructure. You can do the math and figure out whether a 2.0‐inspired redesign is going to be beneficial for you.
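To make that kind of calculation concrete, here is the back-of-the-envelope math in a few lines of code. Every number below is a made-up placeholder (as is the function name), so plug in your own incident rates, labor costs, and infrastructure figures:

```python
def annual_backup_cost(hourly_downtime_cost, incidents_per_year,
                       hours_to_recover, hours_of_lost_work,
                       hourly_labor_rate, infrastructure_cost):
    """Rough annual cost of a backup approach: downtime during recovery,
    plus re-creating lost work, plus the backup system itself."""
    downtime = incidents_per_year * hours_to_recover * hourly_downtime_cost
    lost_work = incidents_per_year * hours_of_lost_work * hourly_labor_rate
    return downtime + lost_work + infrastructure_cost

# Hypothetical comparison: slow tape restores vs. near-continuous protection.
backup_1_0 = annual_backup_cost(5000, 4, 8, 12, 50, 20000)
backup_2_0 = annual_backup_cost(5000, 4, 1, 0.25, 50, 45000)
print(backup_1_0, backup_2_0)
```

With these invented figures, the 2.0-style system costs more to buy but far less to live with; the point of the exercise is that the comparison only works once you price downtime and lost work, not just hardware and licenses.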
Let's Review: What is Backup 2.0?
Before we do that, let's quickly review what Backup 2.0 is all about. As always, we're focused on a business goal here, and we're not concerning ourselves with past techniques or technologies. Here's what we want:
Backups should prevent us from losing any data or losing any work, and ensure that we always have access to our data with as little downtime as possible.
The problem with old-school backup techniques is that they rely primarily on point-in-time snapshots and on relatively slow tape-based media for primary backup storage. That means recovery is always slower than it should be, and we always have a lot of data at risk.
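A quick illustration of that at-risk window (my arithmetic, not numbers from the book): with point-in-time snapshots, the worst case is losing everything since the last snapshot, while continuous block-level protection shrinks the window to the replication lag:

```python
def worst_case_loss_hours(hours_between_backups):
    # Point-in-time snapshots: all work since the last snapshot is at risk.
    return hours_between_backups

def worst_case_loss_hours_continuous(replication_lag_seconds):
    # Continuous block-level protection: only the replication lag is at risk.
    return replication_lag_seconds / 3600

print(worst_case_loss_hours(24))   # nightly backup: up to a full day of work
print(round(worst_case_loss_hours_continuous(30), 4))  # ~30 s lag
```

Trivial math, but it frames the Backup 2.0 argument: shrinking the interval between protected states from hours to seconds changes what "lost data" even means.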
In the second chapter of this book, I shared with you some of the horror stories of Backup 1.0. I did so primarily as a way of highlighting how poorly our traditional backup techniques really meet our business needs. In this chapter, I want to do the opposite: share with you some stories of Backup 2.0, both from my own experience and from stories you readers have shared over the year‐long production of this book. Names have been changed to protect the innocent, of course, but I think you'll find these to be compelling examples of how Backup 2.0 has been applied. Where possible, I'll share information about the infrastructure that goes with these stories so that you can see some of the creative and innovative ways Backup 2.0 is being used in organizations like your own.
Backup 1.0 Fired Our Administrator…and Backup 2.0 Promoted Me
I'm going to start with what I think might be my favorite story. This was shared by a reader, although as I mentioned, I'm making up new names for everyone involved...