
Overcoming IBM Z Inertia

9/17/2019 8:17:36 AM
In the TV and movie series Star Trek, the USS Enterprise spaceship (NCC-1701) has various speed ranges and methods for changing its velocity, including impulse power for sub-light speed and warp drives for speeds faster than light. It also has a mechanism for keeping microgravity, plus acceleration and deceleration, from tossing the people and contents around like rag dolls and popcorn kernels: inertial dampers.
 
In the history of IT, we have also seen great acceleration over the past decades, and enterprises have had to constantly adapt to the latest innovations that popped up in order to stay competitive and current with the state of the art, or even the “bleeding edge.” Many organizations that have struggled to adapt found themselves tossed around and run ragged. And yet, there’s a special subclass of those largest of organizations on Earth that seem to have made it through quite well, often while making deep use of the IBM mainframe (now IBM Z).
 
That legacy has been a mainstay for powering the world economy. However, staying true to its roots has sometimes seemed to limit its routes forward, at least in the context of those who make use of it. While no one could legitimately accuse IBM Z of traveling light, sometimes it seems like there’s been a serious damper on changes in the ecosystem since before the days of OS/2 Warp.
 
Now, don’t get me wrong: the technology has continued to outdistance any other platform since the beginning, and we’ve had USS (UNIX System Services) for enterprise computing for nearly a quarter century (first announced as OpenEdition in March of 1995). But while IBM and other mainframe ISVs and OEMs (i.e., non-IBM software and hardware producers) have continued to explore new frontiers with their innovations, that impulse for change has been more cloaked among the customer organizations that rely on the mainframe for their core processing.
 
Or, more precisely, that impulse has been most dampened among the people who are directly responsible for keeping the mainframe functioning. As we look at all the connections between that technology and the organizations that have built their success on it, it starts to become apparent that there are many local legacies that can mitigate impulses for local advancement.

Maintaining Mainframe Legacies While Meeting Modern Requirements
 
As clean, clear and functional as the mainframe legacy is, tying that technology to the organizations that use it can be a complicated matter. Both continue to advance and evolve, and finding the optimal place for them to stay connected isn’t always a matter of simply upgrading the past. New strengths emerge on the mainframe, while the changing world economy brings new opportunities, emphases and directions to the organizations that have mainframes. And trying to hold on to how things were done in the past while moving forward can feel like an increasingly tangled experience.
 
There is a famous story from the ancient Greeks of how Alexander the Great established his claim to becoming ruler of all of Asia by undoing a knot that was famously impossible to untie. Rather than slowly and methodically pulling each rope out of each loop, which the scale and complexity and tightness of the knot precluded, he just drew his sword and sliced it in half, accomplishing the desired result.
 
Likewise, many mainframe shops have found that they are so strangled by knotty legacies of local culture, configurations, exits, utility programs, methods and applications that it feels impossible to advance to meet modern requirements and opportunities while keeping integrity with the past. Sadly, the conclusion too often has been that it is necessary to change platforms in order to start fresh.

Why sadly? Because it is so rare that there is a viable alternative platform to a fully functional mainframe environment. And so, organizations spend tens or hundreds of millions of dollars building a new, alternative non-mainframe environment onto which to renew their IT, only to discover that, with the exception of a few pieces of “low-hanging fruit” that barely took advantage of the strengths of the mainframe, they’re unable to move their core processing to the new environment. Consequently, they’re left with an even more tangled circumstance: double the platforms, double the cost, and the mainframe still doing the lion’s share of the work, while the light, relocatable workloads end up costing as much by themselves as the entire mainframe environment still does.
 
At this point, it is easy for mainframe technologists to get jaded about change and refocus their efforts on just surviving until they retire while rocking the boat as little as possible. “The way we’ve always done it” becomes the only apparently safe way forward. After all, as a representative of one mainframe shop was once quoted as saying, “We started sunsetting this thing so long ago that the sun is coming back up again!”
 
But what if there were an effective way to slice through the immobilizing aspects of decades of legacy without trying to leave the uniquely functional mainframe platform and context behind?
 
Renewing IBM Z Culture and Technology
 
In 2004, I wrote a white paper about the need to get a new generation on the mainframe before the current generation moved to retirement. Then, in 2008, everyone’s retirement plans tanked. Combined with uncertainty about the availability of healthcare post-retirement, that has led to an abundance of superannuated mainframers continuing to keep the platform running. That’s a good thing, as far as it goes.
 
But eventually, those people will start to move to retirement and other forms of unavailability. Mainframe shops have begun to recognize this by finally starting to hire a new cohort of mainframers. And unlike most of their predecessors, these new mainframers will have some very strong experience-based ideas about how intuitive and easy computers can be to work with. In fact, a new generation of technologies, such as Zowe, is arriving on the mainframe just in time for these new people to pick up and become effective with.
 
But how do we move to the future while respecting our investment in past legacies that are still running the economy? After all, there are 250 billion lines of COBOL (plus more lines of other mainframe application languages) keeping the world running, plus vast numbers of local mainframe utility programs in Assembler, REXX, CLIST, C, PL/I and even application-oriented languages. They can’t all simply be investigated, rewritten and retested, especially when they “ain’t broken,” which makes them hard to justify fixing.
 
Yet they are broken if they prevent forward motion. And any local system that is too complex to change has become an obstacle, even if it’s also an enabler.
 
The good news is: if you are conceptually prepared to move to a brand-new platform in order to renew your IT, then moving your current generation of technology and mainframe culture to a fresh context that is also on the mainframe should be just as feasible. It’s time to slice through to the future.
 
REPRO Versus REORG
 
As a conceptual analogy, think of the difference between reorganizing (“REORG”) a VSAM file in place versus just copying its contents to a fresh file (“REPRO”). In the former case, you have to keep the file usable while reorganizing it. In the latter, you take a break and start fresh.
 
The interesting thing is that a REPRO may take a fraction of the time of a REORG with effectively the same results, as long as you don’t mind shifting to a new copy of the file rather than keeping an uninterrupted connection with the original. You might call that “thinking outside of the box,” or at least outside of the dataset.
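As a concrete illustration of the “start fresh” path, here is a minimal sketch of an IDCAMS REPRO step that copies an existing VSAM cluster into a freshly defined one. The dataset and DD names are hypothetical, and a real job would also need your site’s JOB card and a prior DELETE/DEFINE of the target cluster.

//* Minimal sketch: copy a VSAM cluster to a fresh one with IDCAMS REPRO,
//* rather than reorganizing the original in place. Dataset names are
//* hypothetical placeholders; the target cluster must already be defined.
//COPYKSDS EXEC PGM=IDCAMS
//SYSPRINT DD  SYSOUT=*
//INDD     DD  DISP=SHR,DSN=PROD.CUSTOMER.KSDS
//OUTDD    DD  DISP=OLD,DSN=PROD.CUSTOMER.KSDS.NEW
//SYSIN    DD  *
  REPRO INFILE(INDD) OUTFILE(OUTDD)
/*

Once the copy is verified, cutting over is just a matter of renaming or repointing to the new cluster, which is the point of the analogy: switching to a fresh copy is often faster and cleaner than untangling the original where it sits.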
 
My analogy is that, in other areas of technology, business, and even life, when we take a step over and move to a fresh context, we can often make things happen more efficiently and effectively than if we just try to make constant changes in place.
 
Regeneration, RePros and Context
 
So, how do we slice through the Gordian Knot of tangled legacy while still taking advantage of the definitively unique strengths of the mainframe platform and ecosystem?
 
To me, this looks like a job for a new generation of mainframers who take a professional approach to building their environment and their careers. We might call these responsible professionals “RePros.” And let’s assign them a result rather than a specific method: a measurable outcome rather than a particular implementation.
 
In other words, whether looking at platform and connectivity architecture, application design, or solution usage and configuration, assign these RePros the task of documenting what is currently expected from the solutions in place, and then building on that to see where the organization anticipates opportunities to move forward. Then have them research and report on what new configurations could simply displace what is in use, with the option of running the new configuration on the mainframe as well.
 
Sometimes, that will actually mean spending less money while not acquiring any new solutions—for example, when an out-of-date in-house utility program is being used and maintained that could be replaced by functionality available in an IBM, ISV or OEM solution you already have in place.

Sometimes, it will mean letting go of things you’re still using out of habit but which are no longer adding value. To quote novelist Stephen King, “Kill your darlings.”
 
And sometimes it will mean acquiring or developing brand new solutions that bring a substantial uplift in cost-benefits value compared to old ways that are increasingly obscure and clunky.
 
But in all of these cases, it will also renew your mainframe culture, as you empower new mainframers to learn to be true professionals who take a big-picture view of your organization’s wellbeing and future, and who take ownership of working with their mentors and current context to help move you there.
 
Rebuilding the Future of IBM Z
 
The greatest weakness that comes from something that still works is the incapacity to change it until it truly breaks. But, as “In Search of Excellence” guru Tom Peters put it so well, “If it ain’t broke, fix it anyway.”
 
Today’s mainframe legacies have often become stumbling blocks for moving to the future, precisely because they do work, being based on such excellent technology and such a functional history. However, it’s never too late to rebuild the future on the platform that got us here so successfully.

Use the new tools, such as Zowe, that are available for innovation: document what already works, and find ways to build new things that work more simply, easily and cost-effectively. And employ your established context, experts and expertise in building a new generation of mainframe professionals in the process.
 
Engage!