More MVS Jiu Jitsu

How designers overcame UIC Update’s impact on CPU consumption

2/20/2013 1:01:01 AM

I so much enjoyed writing about reduced preemption as a brilliant way to approach a problem that I’d like to relate the story of another problem that was just getting worse, and how it was turned on its head to provide a solution. MVS performance designer Bernie Pierce was a key contributor to this solution as he was with reduced preemption, but this time I had some good ideas myself. Let me first describe the problem.

Like any respectable operating system, z/OS virtualizes memory. That is, it creates a mapping from the virtual memory that applications reference to the real memory that the hardware actually accesses. The mapping is incomplete in that some virtual pages are not “backed” by a real page. OSs do this to conserve real memory, which they want to use only for data pages that are actively being used. Other data pages that have not been recently referenced are written out to disk. Of course, the OS is ready to read the data pages back into real memory in the event that they are referenced again.
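To make that concrete, here is a rough sketch in C of the bookkeeping involved. The structure, field names and helper routines are invented for illustration; real paging code, and real page table formats, look nothing this simple.

    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative page table entry; names and layout are made up for this
       sketch and do not match any real architecture's format. */
    struct pte {
        bool     backed;       /* a real frame currently holds this page's data   */
        uint64_t real_frame;   /* frame number, meaningful only when backed       */
        uint64_t disk_slot;    /* where the page was written on auxiliary storage */
    };

    /* Hypothetical helpers standing in for frame allocation and paging I/O. */
    extern uint64_t allocate_frame(void);
    extern void read_page_from_disk(uint64_t disk_slot, uint64_t real_frame);

    /* When an instruction references a page that is not backed, the OS takes a
       page fault, assigns a real frame, and reads the data back from disk. */
    void resolve_page_fault(struct pte *p)
    {
        if (!p->backed) {
            p->real_frame = allocate_frame();
            read_page_from_disk(p->disk_slot, p->real_frame);
            p->backed = true;
        }
    }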

The whole scheme is based on observing the reference patterns and on a time-tested rule of thumb. That rule of thumb is that data pages that have been most recently referenced will soonest be referenced again and that those that have least recently been referenced will least soon (if one can say that) be referenced again. An algorithm based on this rule of thumb is called a Least Recently Used (LRU) algorithm, and such algorithms have been implemented in many contexts. The context we want to look at here is how to determine which pages of data in virtual memory should be written to disk so that the real memory they’re occupying can be reused for other data that’s currently being referenced.
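In its textbook form, an LRU policy simply remembers when each page was last touched and evicts the page whose last reference is oldest. Here is a minimal sketch, with made-up names and a logical clock standing in for time:

    #include <stddef.h>
    #include <stdint.h>

    /* Illustrative page descriptor: last_use records a logical clock value at
       each reference (all names here are invented for the sketch). */
    struct page {
        uint64_t last_use;
    };

    /* Record a reference: this page is now the most recently used. */
    void touch(struct page *p, uint64_t now)
    {
        p->last_use = now;
    }

    /* LRU victim selection: write out the page whose last reference is oldest,
       on the bet that it is the least likely to be needed again soon. */
    size_t choose_victim(const struct page pages[], size_t npages)
    {
        size_t victim = 0;
        for (size_t i = 1; i < npages; i++)
            if (pages[i].last_use < pages[victim].last_use)
                victim = i;
        return victim;
    }

Keeping an exact timestamp on every reference is far too expensive for an OS to do in software, which is where the hardware lends a hand.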

The System z hardware provides some help in determining which pages to write out and which ones to keep in real memory. Each page of real memory has an indicator, called a reference bit, which is turned on whenever data on the page is referenced by an instruction. There is also an instruction, called Reset Reference Bit, which allows the OS to check whether the indicator is on and to reset the indicator. The OS can repeatedly scan through real memory checking to see which pages have been referenced since the last scan. Based on this, the OS maintains an Unreferenced Interval Count (UIC), and the scan is called UIC Update. When the OS needs to write some pages out to disk, it selects pages with the highest UIC. That’s all the background needed, so now we can get to the problem.
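To picture the mechanism before moving on, a single UIC Update pass behaves roughly like the sketch below. The frame structure and the reset_reference_bit stand-in are illustrations of my own, not actual z/OS code.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Illustrative frame descriptor; real z/OS control blocks look nothing
       like this. */
    struct frame {
        uint32_t uic;    /* unreferenced interval count, in scan intervals */
    };

    /* Stand-in for the hardware facility: report whether the frame was
       referenced since the last reset, and clear the reference bit. */
    extern bool reset_reference_bit(size_t frame_number);

    /* One UIC Update pass: every frame of real memory is examined. */
    void uic_update(struct frame frames[], size_t nframes)
    {
        for (size_t i = 0; i < nframes; i++) {
            if (reset_reference_bit(i))
                frames[i].uic = 0;    /* touched since the last scan        */
            else
                frames[i].uic++;      /* unreferenced for one more interval */
        }
    }

    /* Page steal: pick the frame with the highest UIC, the least recently used. */
    size_t select_steal_candidate(const struct frame frames[], size_t nframes)
    {
        size_t best = 0;
        for (size_t i = 1; i < nframes; i++)
            if (frames[i].uic > frames[best].uic)
                best = i;
        return best;
    }

Notice that the pass visits every frame of real memory, and that is where the problem comes in.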

As the amount of real memory on the system increases, the CPU cost of UIC Update gets to be a larger and larger percentage of the total. Through the history of the mainframe, the amount of real memory continued to increase—at least until 1996. At that time, the architecture held the limit at 2 GBs, so the UIC Update problem was suspended for a while. But in 2000, z/Architecture was introduced, removing the 2 GB limit. In the meanwhile, installations had augmented their real memory with expanded storage, which was electronic memory within the complex, but not byte-addressable like real memory. Systems often had two or three times as much expanded storage as real memory. Under z/Architecture, all that expanded storage could be converted to real memory, ballooning the amount of real memory and really giving a jumpstart to the UIC Update problem.

At that time, my approach to the problem was very simple and will hardly seem worth all this buildup—but that’s the point. What I find noteworthy is that it took the problem and used it as the basis for a solution. We were pressed for time so I needed something easy to implement. My solution was to just not perform the UIC Update scan so frequently!

Now, because of the abundance of real memory, there was no need to manage it so tightly. At the time, many installations had already configured enough memory so their systems did practically no paging at all. So, why spend a larger percentage of your compute power determining which would be the best pages to write to disk when so few were actually being written out? The time interval between UIC Update scans was increased by a factor of 10, and that seemed to do the trick. But it was just a stopgap.

Concern was raised again when, in z/OS release 8, the maximum supported real memory was raised from 128 GBs to 4 TBs. A totally new approach was needed, not just another Band-Aid. I had an idea, but Bernie Pierce had a better one. They both accomplished what was needed—a flattening of the growing cost of UIC Update as the amount of real memory increased—but Bernie’s was more elegant and less complex.

I don’t want to go into the details of either algorithm, but one attribute they shared was that the UIC Update cost became proportional to the paging rate. So, as the amount of real memory increases, the paging rate, of course, decreases, and with this algorithm, so does the cost of UIC Update. This is, to me, a wonderful example of taking a problem and manipulating it so that it solves itself. Formerly, adding more real memory increased the cost of UIC Update, but now it decreases as more memory is added—another real Jiu Jitsu solution, turning the enemy’s strength to your own advantage.
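Neither of the real algorithms is reproduced here, but the flavor of a scan whose cost tracks the paging rate can be seen in a generic, clock-style sketch like the one below, in which frames are examined only when some actually need to be stolen. It illustrates the property, not Bernie’s design or mine.

    #include <stdbool.h>
    #include <stddef.h>

    /* Stand-ins for the hardware reference-bit check and for paging a frame's
       data out so the frame can be reused; both are invented for this sketch. */
    extern bool reset_reference_bit(size_t frame_number);
    extern void steal_frame(size_t frame_number);

    /* Demand-driven scanning: rather than sweeping all of real memory on a
       timer, advance a cursor only when frames actually have to be stolen. */
    void steal_frames_on_demand(size_t nframes, size_t needed, size_t *cursor)
    {
        while (needed > 0) {
            size_t i = *cursor;
            *cursor = (*cursor + 1) % nframes;

            if (!reset_reference_bit(i)) {   /* unreferenced since last visit */
                steal_frame(i);              /* page it out and reclaim it    */
                needed--;
            }
            /* Referenced frames are simply skipped and get another chance. */
        }
    }

Because the loop stops as soon as the demand is satisfied, the work done is bounded by the number of frames being stolen; a system doing almost no paging spends almost nothing on the scan, no matter how much real memory is installed.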

If you’re wondering why MVS was around for 30 years before anyone thought of this solution, the first reason is that it only works if you already have a lot of real memory and a low paging rate. But this was probably the case for a decade before the problem was actually solved. This kind of problem also seems to need the attention of the kind of person who thinks to deflate the tires to free the proverbial truck stuck under the overpass. Bernie was that sort of thinker—and me too, but only once in a while and not so elegantly as he.

Bob Rogers worked on mainframe system software for 43 years at IBM before retiring as a Distinguished Engineer in 2012. He started with IBM as a computer operator in 1969. After receiving a B.A. in mathematics from Marist College two years later, he became a computer programmer at the IBM Poughkeepsie Programming Center, where he worked on the OS/360 operating system. Rogers continued to work on mainframe OS development for his entire career at IBM. He contributed to the transitions to XA-370 and ESA/370, and was lead software designer for the transition to the 64-bit z/Architecture. More recently, he implemented the support for single z/OS images with more than 16 CPUs and was a lead designer of the z/OS support for the zAAP and zIIP specialty engines. He has been a popular and frequent speaker at SHARE for many years.

RJKinsman
Very interesting article, and very intriguing. You say, "They both accomplished what was needed—a flattening of the growing cost of UIC Update as the amount of real memory increased—but Bernie’s was more elegant and less complex."

I'm very curious about how the solution works. I hope you plan on describing it in a future edition!

Thanks!
3/9/2013 11:19:07 AM