
Planning for Data Center Move Is Critical

Clearly defining the steps and outcome before a move, upgrade or build helps ensure a smooth transition to success

9/3/2014 12:33:45 AM | This is the first of a two-part series on moving a data center. Part two talks about having a backup plan and the need for defined roles during a data center move.

Two moves equal one fire, Mark Twain supposedly said, referring to their relative disruptions. But while fires are covered by insurance, data center moves—whether for a tasty upgrade or a nasty-problem remediation—bring to mind Winston Churchill's quote about blood, toil, tears and sweat.

It's rare—though pleasant and gratifying—to develop a something-from-nothing greenfield data center: that is, as Wikipedia defines it, planning and implementing a project free of constraints imposed by prior work, analogous to construction on greenfield land unencumbered by existing buildings. More frequently, new space is someone else's old space, with vestigial infrastructure to deal with and power and cooling to upgrade.

There are more similarities than differences among build, move and upgrade efforts. Tom D'Auria, chairman and CEO of New York-based Information Methods Inc., notes that "there are always factors of a data center build and initial installation in a move." He says that an upgrade is the least disruptive alternative, usually done in place with possible additional power requirements.

When moving or building, instead of the usual real estate “location, location, location” quest, data center siting depends on space, structure, power, cooling, staffing and seismic criteria, to name a few. Things won't go well if environmentals and staff aren't in place at project start; projects have reported such unpleasant episodes as plastic sheets draped over disk drives while rainwater dripped from the ceiling and backed-up sewer pipes flooded the space beneath the raised floor.

Myriad reasons motivate these projects: cramped space, outmoded infrastructure, power/cooling limitations, inadequate or unreliable connectivity, unsuitable location, and seismic risk or damage. And, of course, corporate changes have technology consequences; mergers, acquisitions, divestitures and even reorganizations can be reflected in years of effort.

Projects are sometimes motivated by blended or hybrid reasons. For example, when an airline established its own data center, taking over reservation processing from another company, it moved functional processing and simultaneously built and staffed a new data center.

Plan, Plan, Plan First

No matter the project's reason—or how well it's supported up the ranks—money talks and maybe rules. Stan King, CEO of Information Technology Co. in Falls Church, Virginia, says that all the planning in the world will not win out over deep pockets. "Money fixes a host of sins and failures," he says. "Always increase your budget by 40 percent; you'll need it."

Data center projects present limited (or one-time) opportunities for change, so think beyond the present and anticipate changing technology, business needs and in-place vendors. For example, IBM's zEnterprise EC12 system was the first full-size mainframe able to sit on concrete with no raised floor. Similarly, new data centers can provide green or greenish environmental attributes with significant savings, such as evaporative coolers and dehumidifiers instead of refrigeration. And they can follow newer standards, such as seismic protection, for greater reliability.

But resist change for change's sake, and don't change too much. You needn't follow the general advice to change only one thing at a time, but do analyze the opportunity/risk balance of each change.

Attend to basics: first things first. In one data center upgrade, the contractor failed to file for the proper construction permits. The building inspector essentially quarantined the area, prohibiting access until someone called in a personal favor from the authorities.

Decide whether to move equipment or install new. If it's a reasonable time for upleveling capabilities, installing new makes sense: it can reduce or eliminate a cutover outage, eliminates move expense and avoids the risk of damaging in-place equipment during the move. Upgrades, though, may increase software costs. While new equipment—especially when mated with some current gear—requires configuration and testing, that can be done in parallel with current production work.

A full-scale disaster recovery (DR) exercise is essentially a data center move. Being current on DR documentation, practice and results is great preparation for such a project. You can plan a new site as DR host, flip production there as you usually do and retain the old site for recovery.

If you move, choose a company experienced and insured in handling computer equipment rather than a generic office-move operation, which may not understand, for example, how heavy mainframe components are and that they can damage carpeted areas. An incremental/segmented move can be less disruptive than a big-bang cutover but may not be possible given operational needs, and may simply create problems felt as death by a thousand cuts. A cutover, of course, may fail or take longer than anticipated, and it reduces fallback capability.

It may be necessary to use intermediate staging, either physical staging for hardware or temporary storage for migrating data. This adds complexity and potential costs, and risks a period in which neither the old nor the new facility has control of or responsibility for what's in transition.

The detailed project plan should be a living document: that is, not written, approved and filed, but referenced and updated as needed or as progress is made, and shared with all stakeholders. IBM Systems Assurance documents provide a good model for an installation checklist, identifying major, minor and contingency items for verification. To avoid complacency and items falling into it's-someone-else's-job black holes, all sections should be shared with everyone to identify dependencies and ensure that everyone understands their responsibilities.
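As a purely illustrative sketch (not the IBM Systems Assurance format), such a checklist can be kept as structured data so that owners, severities and dependencies stay visible to every stakeholder; the item names, fields and values below are hypothetical:

# Hypothetical sketch of a shared installation checklist; the fields and
# items are illustrative, not drawn from IBM Systems Assurance documents.
from dataclasses import dataclass, field

@dataclass
class ChecklistItem:
    description: str
    severity: str                 # "major", "minor" or "contingency"
    owner: str                    # a named owner avoids "someone else's job" gaps
    depends_on: list = field(default_factory=list)
    verified: bool = False

checklist = [
    ChecklistItem("UPS and PDU capacity verified", "major", "Facilities"),
    ChecklistItem("Raised-floor tile layout confirmed", "minor", "Facilities"),
    ChecklistItem("Fallback to old site rehearsed", "contingency", "Operations",
                  depends_on=["Old-site power retained"]),
]

# Any major item still unverified is a blocker for the move date.
for item in checklist:
    if item.severity == "major" and not item.verified:
        print(f"OPEN: {item.description} (owner: {item.owner})")

Keeping even a simple structure like this under version control and circulating it with each update is one way to make the plan a living document rather than a filed one.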

Schedules/milestones/deadlines are critical; indicate whether dates are hard (unchangeable) or soft (flexible). These govern physical plant changes; software changes/upgrades, if any; and hardware changes/upgrades (prior to or after the move), if any. Distribution lists should include, for example, customer staff, consulting support staff, building management, the electrical contractor and any unions involved.

Let Everyone Involved Know Their Role

Data center moves/builds/upgrades are complex projects. Depending on company environment, they may be managed by a project office or by a single trail boss with authority to direct people and allocate resources. Whichever is chosen, authority must be clear; this can't be done by committee.

Unless a company is overstaffed, people are usually occupied keeping production work running and don't have time to undertake complex major changes. And unless staff has previously managed infrastructure projects, doing one can involve painful and expensive on-the-job learning. An experienced consultant/contractor will apply processes and checklists aiming for smooth execution and will provide flexible resources to handle unavoidable disruptions.

IBM has System z platform moves down to a science, from disassembly to reassembly to multi-level testing before returning systems to customer use. Ensure that other vendors are alerted to your project and are on-site or available with spare equipment. Something being damaged in transit can turn a fully checklisted project into an unplanned DR scenario while your data center and staff are fragmented. The project checklist should include normal and backup/emergency contact information for all hardware/software vendors, and vendors should be walked through the plan to ensure that such matters as power, coolant, floor access tiles, lighting and cables are in place.

As much as possible, cross-train staff across normal technology boundaries so people can pitch in as needed. Similarly, emphasize teamwork/coordination to avoid company politics; project incentives can orient everyone to common goals. Ensure that hardware/software documentation is available wherever it will be needed, in forms not requiring use of systems being changed or moved.

Communicate requirements clearly and verify understanding. Avoid disconnects, like when a customer specified a six-inch raised computer room floor, meaning a paneled raised floor under which to run mainframe bus/tag channel cables, but the contractor poured a six-inch thick concrete slab. Fortunately, it was a ground-floor computer room but, with the raised floor on top of that, equipment was very near the ceiling.

Software—operating systems, vendor products, local applications and procedures—needs elaborate and comprehensive checklists for handling versions, upgrades, fixes, contracts, license keys and costs. Vendors differ in how they'll handle data center changes, so contact them early to avoid crisis orientation, and work to minimize the costs of any parallel operation. If you're upgrading or acquiring products, apply leverage to run an evaluation period during the project.
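As a hedged illustration (the products, versions and license actions below are made up, not any vendor's actual terms), such a software checklist can also be kept as structured data so that pending vendor contacts and license-key actions are visible at a glance:

# Hypothetical software-change checklist; entries are illustrative only.
software_checklist = [
    {"product": "Operating system", "current": "release n", "target": "release n+1",
     "license_action": "obtain key for the new machine serial",
     "vendor_contacted": True},
    {"product": "Database product", "current": "v1", "target": "v1",
     "license_action": "transfer existing license",
     "vendor_contacted": False},
]

# Products whose vendors haven't been contacted yet are the "crisis
# orientation" risks described above.
pending = [s["product"] for s in software_checklist if not s["vendor_contacted"]]
print("Contact vendors for:", ", ".join(pending))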

Order external network connections early so there's time for testing and benchmarking; emphasizing potential growth may motivate your ISP to over-provision your facility. Ensure that local cables are specified and ordered well in advance.

Part two will address plan execution issues and managing the many moving parts of a major data center project.

Gabe Goldberg has developed, worked with, and written about technology for decades. Email him at destination.z@gabegold.com.
