
Virtualization: More than VMs and LPARs

Early enterprise systems virtualization helped programmers work better and faster

6/10/2015 12:00:23 AM | This is the first of two articles on virtualization. This article focuses on virtualization from the perspective of a programmer or computer user who writes programs and submits work to a computer system. The second article takes the view of the planner or administrator who uses LPARs or VMs to support multiple OSs on the same enterprise computer.

In the early days of mainframe programming, one program at a time ran on a computer, and programmers wrote channel programs to perform I/O operations. The one-program limitation gave way to many programs running at the same time, and channel programming was replaced by access methods, which handled I/O more productively through packages of instructions called macros, such as OPEN/CLOSE and GET/PUT. Once memory, processors and I/O devices became bigger and faster, the stage was set for virtualization.

Address Spaces Needed Virtualization

Application programmers first experienced virtualization through the partitions or regions where their programs ran and through changes in the access methods. A partition or region is a range of addresses called an address space. In the early 1970s, the address spaces where programs ran became virtualized when IBM introduced virtual storage into its systems. IBM didn’t invent virtualization, but its designers and programmers helped prove that virtualization was a better alternative to systems that used only real storage. IBM’s move to virtualization had several important impacts. Address spaces generally became larger and dynamic in size, as was the case with MVS, based on the programmer’s input through job control language (JCL).
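For example, a programmer could ask for a larger or smaller address space with the REGION parameter in JCL. This sketch is illustrative only; the job, program and account names are invented, but REGION is the standard parameter for the purpose:

  //PAYJOB   JOB (ACCT),'REGION DEMO',CLASS=A
  //STEP1    EXEC PGM=PAYCALC,REGION=4096K
  //SYSPRINT DD SYSOUT=A

Here, REGION=4096K requests 4 MB of virtual storage for the step, an amount a real-storage-only system of the era could rarely have spared for a single job.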

VS1 and MVS

At the launch of virtualization, VS1 was a virtualized OS in which all jobs shared a single 16 MB address range that was allocated among them. Typically, a few CICS regions ran alongside a series of partitions of different sizes, such as 128K or 256K, that took on work. There was no Time Sharing Option (TSO) in VS1, so there was often an address space running a program that programmers used to edit programs and JCL. Some companies used an IBM program called Source Program Maintenance, while ROSCOE from Computer Associates was another popular tool. Both supported many users in a single address space.

MVS was like VS1, only more advanced, because it had many virtual address spaces to share, limited only by the practical constraints of the machine. It also had TSO, which offered many interactive possibilities, including support for foreground compiles of application programs.

Virtualization Helped Programmers

With virtualized address spaces in VS1 and MVS, application programmers could do many things that were not possible before. Application programs could be written in high-level languages (HLLs) like COBOL, which made it possible to create applications more rapidly. Speed was important because companies were rapidly computerizing every significant business process. At the same time, companies were growing their pools of programmers with people from outside computer science, so an HLL was important; otherwise, it took years to learn low-level languages like basic assembler language.

Virtualization did many routine tasks automatically, making life easier for the programmer. VS1 and MVS routines loaded a program into any available range of memory addresses and kept track of that memory in blocks called pages. When there was contention for real memory, the OS moved the least-accessed pages to disk until they were needed. When a page was required again, that is, when a program branched to instructions on that page, the OS automatically brought it back into memory. This process, called resolving a page fault, happened without any intervention by the application program.

This way of doing things, with its rules for virtualization, tracking busy pages and handling page faults, is an amazing invention still used in most computer technology today. Virtualized memory is a foundational technology, perhaps the most useful of the last 40 years.

Access Methods

When VS1 and MVS became commercially available, access methods like the queued sequential access method (QSAM), the basic direct access method (BDAM) and the indexed sequential access method (ISAM) were already part of the previous systems, so the new OSs inherited them. The access methods benefitted from virtualized address spaces, but they were written before virtualization, so they had built-in design limitations. Because the address space could be larger, you could specify a significant number of buffers in the JCL and keep those buffers in virtual storage. Optimizing block sizes in combination with the number of buffers improves performance by reducing the execution time of the program.

The buffers and block-size example is a broad generalization of the potential impact of bringing many buffers, each holding a sizable block of records, into virtual memory. Depending on how the program processes data, the records might have a greater likelihood of being in memory when accessed, improving the throughput of the program and its job stream.
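To make the idea concrete, here is a sketch of a DD statement; the data set name and sizes are invented, but BUFNO and BLKSIZE are the standard DCB subparameters involved:

  //INPUT    DD DSN=PROD.MASTER.FILE,DISP=SHR,
  //            DCB=(RECFM=FB,LRECL=80,BLKSIZE=6400,BUFNO=20)

Each block holds 80 records of 80 bytes, and BUFNO=20 keeps 20 such blocks buffered in virtual storage, so a sequential read often finds the next record already in memory instead of waiting for an I/O operation.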

VSAM Exploits Virtualization

VSAM was written to exploit virtualization and could be used as an alternative to the legacy access methods because it contained built-in replacements for them. VSAM has entry-sequenced data sets that work like QSAM, only better. VSAM also has a direct-access replacement called the relative record data set and an ISAM replacement called the key-sequenced data set. The VSAM substitutes for the legacy access methods offer the opportunity, through the access method services utility IDCAMS, to substantively improve performance by specifying free space, control interval and control area sizes, and the imbed and replicate options.
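As an illustration, a key-sequenced data set might be defined with IDCAMS along these lines. The data set names, key length and allocation sizes here are hypothetical; FREESPACE, CONTROLINTERVALSIZE, IMBED and REPLICATE are the tuning options discussed above:

  //DEFKSDS  EXEC PGM=IDCAMS
  //SYSPRINT DD SYSOUT=A
  //SYSIN    DD *
    DEFINE CLUSTER (NAME(PROD.CUSTOMER.KSDS) -
           INDEXED -
           KEYS(10 0) -
           RECORDSIZE(200 400) -
           FREESPACE(10 20) -
           CYLINDERS(50 10) -
           IMBED REPLICATE) -
         DATA (NAME(PROD.CUSTOMER.KSDS.DATA) -
           CONTROLINTERVALSIZE(4096)) -
         INDEX (NAME(PROD.CUSTOMER.KSDS.INDEX))
  /*

FREESPACE(10 20) reserves 10 percent of each control interval and 20 percent of each control area for later inserts, which reduces splits as records are added, while IMBED and REPLICATE place and repeat index records on disk to cut seek and rotational delay.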

Making a Difference

Virtualization really changed things then and still does today, as it’s omnipresent in computer architecture. Suddenly, in the 1970s, partitions and regions could get larger, making way for the use of HLLs and enabling people who were not computer professionals to more easily learn languages that allowed them to write new applications.

Virtual memory also made it possible to run large special-purpose jobs like CICS and IMS in their own address spaces. These special, interactive transaction processors had large memory requirements because programs were increasingly being written to handle processing in real time rather than in batch. The IMS and CICS systems had tables that were lists of resources like programs, files, queues and transactions. The list of programs held the addresses of programs, which ideally were in real memory when they were called. The list of files described data sets that were almost always open and anchored buffers full of data in memory. These systems could not have worked well without virtualization, as the real-memory limitations of the day could not have supported them.

Joseph Gulla is the IT leader of Alazar Press, a publisher of children’s literature.
