Reflecting on Technology Progress
As IBM Z advances and evolves, varied skills are needed
9/6/2017 12:30:30 AM
By Gabe Goldberg
Anyone who’s worked with mainframes since the 1960s, 70s, 80s or maybe even last week can be dazzled by how much our technologies have advanced. Progress is beyond what might have been imagined, no matter the metric considered: processor speed, parallelism and memory; storage flexibility and capacities; connectivity; and from-the-start stellar reliability, availability and serviceability. Our smartphones and watches, and maybe thermostats and Wi-Fi toasters, would outperform high-end early mainframes.
There's been abundant change on the personal scale too: on my first day working at IBM I learned to program a keypunch drum control card, and at the data center where I later worked, the hardware manager had come up as an Electric Accounting Machine plugboard wiring programmer. We referred to microfiche for OS source code, PTF/APAR information and microcode listings. A leap forward was IBM's InfoSys product, providing a comprehensive monthly-updated on-site database, IBMLink and other offerings. The familiar faces of resident and visiting system engineers and program support representatives were mostly replaced by the IBM Support Center.
While ongoing change is inevitable, today's mainframes still need skilled technical workers, and people—and, for better or worse, managers, corporate policies, office settings, etc.—are the same as ever. How has care and feeding of enterprise systems evolved along with the platform's advances?
Old School Was a Great School
"Old school" system programmers were, and are, multi-talented practitioners with diverse knowledge and skills. In many shops, small teams supported complex environments with varied mixes of specialization and cross-training: coding languages relevant to their organization (e.g., Fortran, PL/I, Rexx, etc.) and foundational assembler/macro; debugging systems (via dump reading), networks, utilities, and applications and vendor products; building and installing systems; implementing automation based on primitive native facilities; managing security and resource allocation; customizing environments and developing exits and system enhancements; developing procedures and training console operators; sometimes wrangling cabling for peripheral equipment and networks; and more.
Then and now, true system programmers are always innovators, tool builders and problem solvers. Now-retired Richard Schuh described using ACP, the predecessor to z/TPF, and desperately needing a development and test environment. When VM/370 was released, his site made extensive local modifications to run ACP under it, and IBM credited the site as the first to fully support ACP as a guest OS.
Dan Skwire, owner of the LinkedIn group "First Fault Problem Solving," notes that "back in the day, I loved testing and debugging very complex code using tools that hadn't yet matured."
There's still need and room for experts handling specialty projects such as developing/fixing local tools and utilities, integrating disparate vendor products and enabling exploitation of new environments such as Internet of Things (IoT) and cloud computing.
But times have changed; maintenance via ZAP has given way to structured PTF/APAR procedures via System Modification Program/Extended (SMP/E). And for better or worse, due to good news (e.g., improved native tools) and bad (e.g., less system-level training and documentation, skills dilution) some system programming jobs have devolved to mostly administrative chores.
Today's New School Enables Doing More with Less in Complex Environments
Skwire notes that problem solving is still important, but that tools have matured. "Remote dial-in is possible; trapping tools are better," he says. "Standalone dumps are huge and impractical but far less likely to be needed."
There can be a disconnect between system programmer generations, with new schoolers using script-oriented tools, preformatted fill-in panels, maintenance via SMP/E, and installations from DVD or download, and old schoolers not entirely considering that to be system programming.
With increased task separation and fragmentation, there's less cross-training and sometimes a less comprehensive systems vision (e.g., one spanning both hardware and software). I/O configurations have become easier as heavy copper channel cables gave way to ESCON/FICON. Extended architecture provided easier and faster I/O, and I/O management evolved to hardware configuration definition panels. Task splitting is especially pronounced when Linux runs on the mainframe but is managed by distributed systems administrators.
Modern mainframes needn't be touched to be managed; there's no setting physical dials and pressing blue buttons. Now it's all done with mouse clicks, even defining and IPLing LPARs.
And even folks who've transitioned from old to new school write mostly in high-level languages (typically Rexx, perhaps Java and whatever's standard at a site), using venerable HLASM (High Level Assembler) only for performance-critical applications or system code. Tools can aid developing and testing Assembler, C/C++ and Metal C programs under z/OS.
System automation once longed for is largely here. A big step was IBM standardizing and enforcing message design, so operators no longer watch consoles to act on urgent messages but are alerted by tools to performance and error situations. Automation tooling such as Tivoli for alert generation is critical for filtering the important from the mass of routine messages.
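The filtering idea can be sketched in a few lines. This is a minimal illustration, not how any real automation product works; the message IDs and pattern rules below are chosen only for the example (standardized z/OS message IDs are what make such pattern matching practical at all):

```python
import re

# Illustrative alert rules keyed on message-ID prefixes.
# Real automation products use far richer rule sets.
ALERT_PATTERNS = [
    re.compile(r"^IEA9"),     # example prefix for serious system messages
    re.compile(r"^\$HASP9"),  # example prefix for urgent JES2 messages
]

def needs_alert(message: str) -> bool:
    """Return True if a console message matches an alert rule."""
    return any(p.match(message) for p in ALERT_PATTERNS)

def filter_console(messages):
    """Separate alert-worthy messages from routine chatter."""
    return [m for m in messages if needs_alert(m)]

traffic = [
    "IEF403I JOB STARTED",          # routine
    "IEA911E COMPLETE DUMP TAKEN",  # alert-worthy in this sketch
    "$HASP100 JOB ON INTRDR",       # routine
]
print(filter_console(traffic))  # only the IEA911E message survives
```

The point of standardized message design is exactly this: because IDs follow predictable formats, simple prefix rules can triage thousands of routine messages and surface the handful that need a human.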
Syzygy Incorporated's Brian Westerman describes a step in system installation evolution: his first installation of MVS/SP, migrating from MVS 3.8 (an upgrade that often took months), was done over a weekend.
But progress sometimes has rough edges, such as integrating Linux technology and culture into the mainframe ecosystem. Each side contributes to meeting business needs, though both sometimes resist learning from the other. It's essential to bulletproof proliferating servers to IBM Z standards while mainframe procedures support Linux administration, operation and problem solving.
Even while Wild West mainframe practices have yielded to civilizing influences, with many creative opportunities becoming routine tool use, true system programmers still understand the fundamental architecture of their platforms, enabling them to seek simplicity of use, efficiency and further automation.
A long-running controversy has been use of local system code. While providing flexibility, this also imposes a not-always obvious support burden. Over time, most sites adapted to using IBM- and vendor-defined exits, even though this annoyed some veterans. IBM's 1980s introduction of Object-Code Only accelerated reduction in local code but led to more and better architected and documented system exits.
In the Real World, Do You Like Chickens or Eggs?
It's hard to say what's caused system programmer skills dilution, and it's hard to argue against simpler tasks for managing today's more powerful and flexible systems, especially as old school experts retire. But easier/simpler and more automated work can lead to reduced deep understanding of how IBM and vendor products work. Especially with fewer formal training options and less time for what used to be valuable on-the-job training, there are grounds for concern that system "programming" has become "administration" or "janitorial" work.
Or perhaps the work matured along with the technology, just as most people no longer change their cars' spark plugs. And Westerman suggests that replacing local modifications with exits and parameters isn't dumbing down the work: it's an opportunity to do things once and not worry about them again. Even so, as staffs have thinned, there's ongoing need for fundamental old school technology skills.
Practitioners can suffer from multiple painful trends: organizations seek people experienced in the language du jour, students select languages based on job requirements, and employers might not cross-train existing employees. There can be—or appear to be—a skills shortage, while willing and talented workers are underutilized. And as experienced people leave, industry and site wisdom and memory are lost, with many individuals lacking the resources, need, desire or planning to learn the wisdom of the ages.
Non-technical—and sometimes trend or fashion oriented—management can be a challenge for system programmers accustomed to knowledgeable support "upstairs." And outsourcing or offshoring leads to reduced jobs and challenges managing contractors, distance or languages.
The IBM Academic Initiative continues recruiting, motivating, training and placing new mainframe workers, many in very traditional system programming roles. The 60-plus-year-old SHARE user group continues as a forum for IBM, customers, ISVs and everyone else in the mainframe ecosystem to share information and perspectives, helping preserve what has made the mainframe an industry leader since its introduction.
Both environments preserve fundamental thinking and problem-solving skills while teaching and supporting the newest technologies and tools. It remains to be seen whether the term "system programmer" will fall into disuse. But replaced by what? Surely not "administrator," which can't describe, for example, capacity planning, performance analysis, network design, cloud integration or architectural planning. The mainframe, system of record for small-to-giant organizations, will never sink to the "try rebooting" problem solving methodology.
Much more stable hardware and software allows system programmers to deal more efficiently with guest systems and, more than occasionally, lower-skilled users, sometimes the victims of self-inflicted problems. Even so, it's hard to imagine mainframes without their traditional tool-building culture, let alone the "imagine things as they might be, not just as IBM shipped them" mindset. No matter how far we've come—DOS to z/VSE, OS/360 to z/OS, CP/67 to z/VM, PARS to z/TPF—there's no reason to expect technology or related careers to stagnate and no time for boredom.
Today's DevOps—development and delivery emphasizing communication and collaboration between product management, software development and operations professionals—is facilitated by IBM z Systems Development and Test Environment, a standalone workstation-based platform for mainframe application demonstration, development, testing and education.
Cloud will back-end and IoT will front-end mainframe processing. And there will always be "the next big thing" leading short-attention-span pundits to declare the mainframe's doom. They'll be no more correct than in previous decades. Automation will increase and improve, but skills will still be needed to decide what to automate, which actions to perform and what to do when things go awry. Responsibilities will continue to broaden, and success will come to those emphasizing philosophy and structure over simpler metrics such as lines of code managed, broadly optimizing for oneself and those who follow.
Being a quick study, the problem solver of last resort, and the person who can always find information—not just a "system janitor"—will always be a career.
Through decades of changes, IBM, customers and user groups such as SHARE have developed, preserved and enhanced the "mainframe mindset," which distinguishes the platform from others in ways beyond the hardware and software. Practitioners take pride in maintaining production environments and anticipating/diagnosing/solving problems rather than just IPLing again to make them disappear. Skwire's book, "First Fault Software Problem Solving," outlines good practices for problem resolution, prevention and system recovery. Similarly, mainframe user support has generally outpaced that of distributed systems, learning from and fixing problems and anticipating changing needs.
Gabe Goldberg has developed, worked with, and written about technology for decades. Email him at email@example.com.