
Migration Misconceptions

What do you lose by moving off the mainframe?

9/11/2013 | Editor’s note: This article is based on content initially published in a series of SHARE President’s Corner blog posts.

Moving off the mainframe is not an easy task and shouldn’t be taken lightly. It’s frequently a multi-year, multi-million-dollar effort fraught with risk.

You’ve probably seen headlines where a CIO has announced plans to move off the mainframe. You’re less likely to see a headline where these CIOs admit they made a mistake and are cancelling that project.

In one example, a company budgeted $10 million for a one-year migration from a mainframe to a distributed environment. Eighteen months into the project, the company had spent $25 million and only managed to offload 10 percent of the workload. In addition, it had to:

• Increase staff to cover the over-run
• Implement steps to replace mainframe automation
• Acquire more distributed capacity than initially predicted, even though only 10 percent of the workload had moved so far, and
• Extend the dual-running period at even more cost due to the schedule overrun

Not surprisingly, the executive sponsor is no longer with the company.
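
To make the scale of that overrun concrete, here is a simple linear extrapolation using only the figures from the example above. The budget, spend and progress numbers come from the anecdote; the projection itself is just an illustration, not a forecast of any real project.

```python
# Linear extrapolation of the migration example above.
# Budget, spend and progress figures are from the anecdote;
# the projection is illustrative only.

budget = 10_000_000        # original one-year budget ($)
spent = 25_000_000         # actual spend at 18 months ($)
fraction_moved = 0.10      # share of the workload actually migrated

cost_per_percent = spent / (fraction_moved * 100)  # $ per 1% of workload
projected_total = spent / fraction_moved           # naive full-migration cost
overrun_ratio = projected_total / budget

print(f"Cost per 1% of workload moved: ${cost_per_percent:,.0f}")
print(f"Naive projected total cost:    ${projected_total:,.0f}")
print(f"That is {overrun_ratio:.0f}x the original budget.")
```

At that burn rate, finishing the migration would cost on the order of $250 million, 25 times the original budget. Real projects rarely stay linear, but the direction of the estimate is the point.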

What’s the business purpose for moving? Herd mentality? Everyone else is doing it? We’ve already seen that many companies still use mainframes, and are continuing to invest in them.

Still, many point to lower costs as a reason to move to distributed servers. However, we’ve seen that when you look at total cost of ownership (TCO), including ongoing software licenses, maintenance, energy and air-conditioning costs, and labor, the mainframe is often less expensive. IBM has conducted nearly 100 studies comparing costs for companies that have considered re-hosting applications on distributed servers. On average, the distributed alternative costs 2.2 times as much as the mainframe; only four cases showed lower costs for distributed servers.
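
As a minimal sketch of what such a comparison looks like, consider a five-year TCO model with entirely hypothetical figures. The actual studies model far more line items; this only shows why a lower up-front hardware price can still lose on total cost.

```python
# A minimal five-year TCO sketch. All dollar figures are hypothetical,
# chosen only to illustrate the up-front vs. recurring trade-off.

def five_year_tco(hardware, software, maintenance, energy, labor, years=5):
    """Up-front hardware cost plus recurring annual costs over `years`."""
    return hardware + (software + maintenance + energy + labor) * years

# Consolidated mainframe: higher hardware price, lower recurring costs
mainframe = five_year_tco(hardware=2_000_000, software=600_000,
                          maintenance=200_000, energy=100_000,
                          labor=400_000)

# Distributed farm: cheaper hardware, higher licenses, energy and labor
distributed = five_year_tco(hardware=1_200_000, software=900_000,
                            maintenance=400_000, energy=350_000,
                            labor=1_200_000)

print(f"Mainframe 5-year TCO:   ${mainframe:,}")
print(f"Distributed 5-year TCO: ${distributed:,}")
print(f"Ratio: {distributed / mainframe:.1f}x")
```

With these made-up numbers the distributed farm comes in around 1.8 times the mainframe total: the hardware line favors distributed, while the recurring software, energy and labor lines drive the five-year gap.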

In addition, few discuss the cost of switching. New hardware, software, processes, testing, training and so on are not cheap. And often you simply can’t replicate the capabilities of the mainframe versions of applications on the distributed platform: re-coded applications might not work as well, unanticipated problems arise, and organizations give up many of the mainframe attributes they rely upon. Let’s look more closely at what you lose when you move off the mainframe:

Efficiency
Typically, distributed servers support a single application workload. How many business applications does a typical company run? Inventory control, order entry, accounts payable, accounts receivable, HR, shipping, business intelligence, research and development (multiple projects). How many servers would be necessary to support these applications?

Distributed systems are best at handling known and expected types of work when serving a particular business application. The mainframe is better at handling different and unexpected types of work when serving various applications for a large number of users.

Distributed servers are sized for peak demand, and additional servers are deployed to handle failover, development work, testing and so forth, generally resulting in much unused capacity. As a result, the average utilization of a distributed server farm is very low, usually in the 5 percent to 25 percent range.

Paying for that much unused or underused capacity is not cost-effective. Managing large numbers of servers, each with low utilization, is a situation most IT executives want to avoid.
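
A back-of-the-envelope sketch shows why farm-wide utilization lands in that range. The server counts and ratios below are hypothetical; peak-to-average ratios vary widely by shop.

```python
# Why distributed farm utilization is so low: each production server is
# sized for its own peak, and failover/dev/test servers sit mostly idle.
# All numbers are hypothetical.

apps = 8                     # business applications, one server each
peak_to_average = 4          # peak demand is 4x the average load
prod_util = 1 / peak_to_average   # ~25% if sized exactly for peak

extra_per_app = 2            # e.g., one failover + one dev/test server
extra_util = 0.05            # those servers stay ~5% busy

total_servers = apps * (1 + extra_per_app)
farm_util = (apps * prod_util +
             apps * extra_per_app * extra_util) / total_servers

print(f"Servers in the farm:      {total_servers}")
print(f"Average farm utilization: {farm_util:.0%}")
```

Even with generous assumptions, this hypothetical farm of 24 servers averages about 12 percent utilization, squarely in the range cited above.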

The recent trend to virtualize servers, and even the cloud movement, can be seen as an attempt to consolidate workloads, thus better utilizing capacity and reducing the number of servers. Essentially, it’s an attempt to take a multitude of individual servers and create one giant “super-server” with sharable resources.

In other words, it is an attempt to duplicate what the mainframe already is, and has been for decades. The difference is that with a mainframe you’re not layering yet another technology, one that must be managed and can break down at multiple points, on top of the one you’re already running. The mainframe has also been optimized for managing multiple disparate workloads: literally decades of hardware and software innovations were specifically designed and implemented to ensure that the modern mainframe is the best mixed-workload server on the planet.

Security
With increasing attention on security, it’s important to note that the mainframe has the highest server security rating in the industry. The Evaluation Assurance Level (EAL) of an IT product or system is a numerical grade assigned following the completion of a Common Criteria security evaluation, an international standard in effect since 1999. IBM mainframes hold EAL5+ certification. What does that mean? “The intent of the higher levels is to provide higher confidence that the system's principal security features are reliably implemented.”

Security is built into every level of the mainframe’s structure, including the processor, OS, communications, storage and applications. It is accomplished by a combination of software and built-in hardware functions, from identity authentication and access authorization to encryption and centralized key management. Despite how Hollywood portrays the mainframe, no incident of a mainframe being hacked or infected by a virus has ever been reported.

Reliability
The mainframe has a “mean time between failures” that’s measured in decades. In other words, it has unmatched reliability and security, which contribute to its 99.999 percent availability, commonly called “the five nines.” Such high availability means near-continuous operation, with unplanned downtime of only about five minutes over the course of a year. Quick recovery and restoration of service after a fault further increase availability.
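
The “five nines” figure is easy to verify with a quick calculation of the unplanned downtime each availability level permits per year:

```python
# Annual downtime permitted by each availability level ("the nines").
MINUTES_PER_YEAR = 365.25 * 24 * 60

for availability in (0.999, 0.9999, 0.99999):
    downtime = (1 - availability) * MINUTES_PER_YEAR
    print(f"{availability:.3%} availability -> "
          f"{downtime:,.1f} minutes of downtime per year")
```

Five nines works out to roughly 5.3 minutes of unplanned downtime per year, which is where the figure above comes from.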

Next time you’re trying to get money out of an ATM, buy stocks, reserve an airline ticket or pay a bill online, think about how important reliability really is. How much does an unplanned outage cost? It depends on the industry, but can certainly be millions of dollars per hour. And this doesn’t just affect lost sales and revenue, but also company image and reputation.

To enhance this reliability, the mainframe is capable of non-disruptive hardware and software maintenance and installation: work can be performed on one part of the system while the rest continues to process the workload. This capability for rolling, non-disruptive maintenance lets businesses roll out critical functions and react to rapid growth without affecting the availability of business services.

Mainframe Upgrade
Beyond better utilization of servers and storage and workload consolidation, the hardware and maintenance category of TCO also reflects how trade-in value reduces the net present cost of the mainframe. With distributed servers, companies often fail to consider the disposal costs of aging or obsolete equipment.

With the mainframe, growing companies typically receive credit for existing MIPS (i.e., capacity) investments, with full trade-in value applied when they upgrade and grow MIPS.

When companies upgrade to the next generation of distributed systems, however, they must repurchase the existing processor capacity plus any growth, and the lifetime of these systems is typically only three to five years. The long-term TCO implications can be significant.
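
A sketch of that difference over a single upgrade cycle, assuming hypothetical capacity prices and a full trade-in credit on the mainframe side (actual MIPS pricing and trade-in terms vary by vendor and contract):

```python
# Upgrade cost with a trade-in credit vs. a full repurchase.
# Capacity prices and the credit rate are hypothetical.

current_mips = 1_000
growth_mips = 500            # capacity added at the upgrade
price_per_mips = 3_000       # hypothetical $ per MIPS of new capacity
trade_in_rate = 1.0          # full credit for existing MIPS

new_total = current_mips + growth_mips

# Mainframe-style upgrade: existing capacity is credited back
mainframe_upgrade = (new_total * price_per_mips
                     - current_mips * price_per_mips * trade_in_rate)

# Distributed-style refresh: repurchase everything, plus growth
distributed_refresh = new_total * price_per_mips

print(f"Upgrade with full trade-in credit: ${mainframe_upgrade:,.0f}")
print(f"Refresh with full repurchase:      ${distributed_refresh:,.0f}")
print(f"The refresh costs {distributed_refresh / mainframe_upgrade:.0f}x as much.")
```

Repeat that refresh every three to five years as these systems age out, and the gap compounds.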

One Reason to Consider
Perhaps one reason that companies consider moving off the mainframe is that they don’t want to be held hostage by IBM. That’s where the user community can help. Representing more than 1,800 of IBM’s largest customers, SHARE helps ensure that IBM continues to deliver value for the dollars it charges for hardware and software. SHARE also drives requirements into IBM to make mainframe hardware and software more usable and to keep delivering return on investment.

Final Thoughts
Even though the mainframe concept dates back to the 1950s, with the latest generation tracing its roots to 1964, the platform has gone through many significant changes while continuing to support applications created decades ago. Today, mainframes support Linux and Java and many significant initiatives, including cloud, mobile computing, big data and business analytics. In the past, mainframes were large and had special cooling requirements. Today they’re not much bigger than a large refrigerator and can run anywhere.

There are certainly business concerns that should be evaluated, and re-evaluated often, when choosing a computing platform for running today’s businesses. The choice of using a mainframe is not an either-or proposition, however: mainframes and servers can happily coexist, and there’s a role for each. Myths about the usability or viability of the mainframe should be ignored, and the business drivers (cost, benefit and risk) for switching technologies should be weighed carefully.

Janet L. Sun is the immediate past president of SHARE Inc.

