But no amount of practice or preparation can ready you for the real thing.
So what is a REAL catastrophic disaster like? We decided to ask some companies that had actually recovered from a serious disaster.
The irony is that Quest is a technology management company that provides disaster recovery and business continuity services to SMBs and enterprises, so it was a true test to see what happens when disaster strikes the DR guys.
We were hit with severe high winds and a week of heavy rain that ultimately caused eight utility poles to fall outside of our building.
The power went out, the road was blocked by hot wires and transformers, and everyone who made it into work that morning was trapped in the building.
Initially, battery and generator backup provided phone and Internet capability. And by utilizing resources at several other locations, the company was able to continue to function until we got the all-clear to evacuate – that’s when DR efforts began in full. We executed on our own DR plan – and by 3pm were operating completely remotely, with some of our employees at our Business Resumption Center and others working from home. Customer service calls, billing, email, phones – everything we needed to keep functioning was operational.
Lessons Learned: Conducting DR drills and testing our DR plan quarterly was and is fundamental, but even so we still had to deal with keeping our 100-person staff up to date on what was happening, 36 hours without power, spoiled refrigerated food, and fish that no one fed. Even little disasters can have a huge impact. You need to be as prepared for a mundane disruption as for a catastrophic one.
Tim Burke, CEO of Quest
One of our clients – Whiteflash.com – immediately comes to mind. Whiteflash.com is an upscale diamond e-retailer based in Houston. When Hurricane Rita threatened the Gulf Coast, they couldn’t afford to close the business. As an e-retailer, their business isn’t confined to the Gulf Coast. They process orders and inquiries coming in from every corner of the globe, 24 hours a day.
So management dispatched all personnel to safer locations and maintained normal business activities remotely. As a cloud computing customer, Whiteflash.com kept interoffice communication and collaboration between sales and management going as smoothly as when all departments sat under one roof.
In their highly competitive sector, if you're not available to process an order or respond to an inquiry, someone else will be. As an e-retailer, they can't wait weeks or months to retrieve hard data if something happens to their systems.
Loss of data, or a prolonged inability to access it, could have put them out of business.
Yehuda Cagen from Xvand Technology Corporation
As the IT Manager at Breazeale Sachse & Wilson LLP, a law firm in Baton Rouge, Louisiana with 160 users, I have to make sure email is up and running 24/7. Email touches every aspect of our business, and we can’t afford any loss of information or downtime—in a law firm, time is literally money, as we work by billable hours.
In the past, we had issues with our email appliances delaying mail, which led us to seek a system that didn't require a person to monitor a physical device.
With our location on the Gulf Coast and an office in New Orleans, our business is in an area prone to hurricanes and other natural disasters. When Hurricane Katrina struck in 2005, we had to evacuate and our servers had to be shut down, putting critical client information at risk.
We had to go into New Orleans under armed guard to regain access to documents and email that had not yet been captured by the tape backup system prior to Katrina’s landfall. After this devastating experience, we began working with Mimecast.
If we ever face another natural disaster, our uptime won’t be at the mercy of our physical location. Mimecast allows us to sleep soundly knowing that our clients can send an email and get in touch with us no matter where we are, and their information is always protected.
Luke Corley, the IT Manager for Breazeale Sachse & Wilson LLP
When I came to TFI in the fall of 2006, they had no DR plan on paper: just a few laptops and a tower set aside for deployment. Disaster recovery was not on my resume, but with this position it was a new project to be explored. By the time Hurricane Ike rolled around in 2008, we had a plan-of-action procedure in place and departmental agents assigned for delegation. TFI had leased a small office in Austin to deploy to. We had executed our office closure preparedness plan with Hurricane Edouard the month before, so we thought we were ready.
The National Weather Service is the home page on my Internet browser from June 1st through November 1st. I had been watching Hurricane Ike since its reported inception. On Wednesday, September 3rd, 2008, it was apparent from tracking models that the Gulf Coast at Galveston/Houston was going to take a direct hit between Friday evening and Saturday morning. Mandatory evacuations were being announced, and that afternoon we made the decision to close the office on Friday so we could prepare. On Thursday we notified our employees, executed the office closure preparedness plan, and prepared the physical office for the hurricane as required by building management.
My car was loaded with equipment, the network was shut down in the business office, and I had taken extra supplies and backup tapes to the colo for safekeeping. Ike hit early Saturday morning. The TFI disaster recovery team executed the call tree, took stock of those who did not have property damage, and deployed the team to Austin on Sunday night. The hotels in Austin were packed: people had brought their dogs and cats, and those who had not made a reservation were waiting in line to get a room. I went to the office to set up. We had a shared Internet connection with building services, a communication cabinet that provided a VPN connection to the colocation facility, a server tower, six laptops, and two printers. In the 10 x 10 leased office I set up two reference tables, a server table, and six chairs.
The team of 10 people arrived the next morning. We were able to connect to databases and files at the colo, but we had no email: our email replication solution had failed. Plan B was our website, TFIEmergency.com, where we posted updates for mass communication; the rest of our communications went through our colo fax server and a makeshift Hotmail account. We were receiving everything we needed to perform our tasks, but it was tight and tense, and the hours were long. We had 10 people working 16-hour days for 5 days in a 10 x 10 room.
This was an invaluable experience. Although we had our moments, the team bonded, and those of us who deployed for Ike have a special respect for each other. We learned a lot. The first lesson was to lease a bigger space. TFI now has two colo facilities, one in Austin and one in Houston. The first IT project was to replace our Exchange replication solution with CA ARCserve High Availability. Later we put our primary payroll service for our customers in the cloud and migrated that process to a SaaS vendor. We drill at least twice with staff before June 1st at both disaster recovery venues. Whenever we make a significant change to our IT applications or infrastructure, we test that modification's effectiveness and availability at the colo venues as part of implementation.
Melinda Martin, Information Technology Manager – TFI Resources, Inc.