
A Brief History of Mainframe Memory Technology

6/13/2019 2:34:51 PM

If memory serves, it wasn’t that long ago when it did so without the aid of electronics. We needed paintings, sculptures, oral traditions, books, punched cards and other inscriptions. But yesterday’s records weren’t so much broken as digitized over the past 75 years, and while we may now rely on such storage, too often we forget where we came from and how we got here. Sometimes it’s good to have a memory refresh and recall from the archives.

As we journey back to unlock these memories, one of the keys to appreciating where we came from is the formative role of the Second World War, which produced some of the earliest examples of electromechanical and electronic processing, from the German Enigma machine to Alan Turing’s solution for cracking it, as highlighted in the movie “The Imitation Game.” Much of the original development and usage of these concepts and devices was shrouded in military secrecy, and often not made public for decades afterward. You could say it’s a type of memory protection.

Certainly, some of the original experts who helped develop the various early kinds of computer memory had protected memories of their own, being sworn to military secrecy about what they’d learned, but still allowed to make judgements about what technological ideas would work as long as they didn’t explain why.

Early Mainframe Memory 

It’s easy to look back today from the vantage point of nearly five decades’ experience and assume that semiconductors were the inevitable technology for computers to use for their primary storage, or main memories. But as Emerson Pugh’s book “Memories That Shaped an Industry” points out (on page 254), it wasn’t until “June 1971 when the world’s first commercial computer with an all-semiconductor main memory, the IBM System/370 Model 145, was introduced.”

Yes, that’s right. The original System/360 mainframe computer also used something else for its main memory: cores—tiny ferrite toroids, i.e. donut-shaped rings of a ceramic material made from iron oxide mixed with other metals. But you knew that, right? After all, the term “mainframe” refers to the main cabinet where frames of these cores served as the memory for the central processing unit (CPU) that was housed with them. And “core memory” likewise hearkens back to this—as do “core dumps,” when you get to see the contents of memory displayed over reams of paper or the virtual equivalent.

Interestingly, with $5 billion of investment—in 1964 dollars—IBM moved the state of the art forward in many ways. But it settled on using those tiny cores, each storing a single bit, for the original System/360 random access memories, even for computers like the Model 75, which could be configured with one, two or four processor storage units of 2 million bits (256K bytes) of core memory each—so up to one megabyte, or 8,388,608 tiny ferrite rings. How tiny? Well, they originally started out big enough to fit on a child’s finger, but by 1966 they were typically 21 thousandths of an inch in outside diameter, with 13-thousandths-of-an-inch holes through which between two and four wires had to pass, depending on the particular configuration, for reading and setting them.
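
For the arithmetic-minded, here’s a quick back-of-the-envelope check of those numbers—illustrative Python only, obviously not anything a System/360 ever ran:

```python
# Quick check of the Model 75 core counts cited above (illustrative only).
bytes_per_storage_unit = 256 * 1024               # 256K bytes per processor storage unit
bits_per_storage_unit = bytes_per_storage_unit * 8
print(f"{bits_per_storage_unit:,}")               # 2,097,152: the "2 million bits"
print(f"{4 * bits_per_storage_unit:,}")           # 8,388,608 cores in a fully loaded 1 MB machine
```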

And that was good enough technology to get Apollo 11 onto the moon in July of 1969, which required the full prowess of the System/360 Model 75.

Primary Storage Technologies

Now, while those tiny little ferrous wheels eventually gave way to bipolar semiconductors on the mainframe, which later were superseded by complementary metal-oxide semiconductors (CMOS), there were, of course, earlier forms of memory that predated and coexisted with them between the end of the war and the April 7, 1964 announcement of IBM’s ultimate machine. And there were also many types of secondary storage for data and programs, from punched cards to tape, drums and disks, just to name a few.

Probably the best-known of these primary storage technologies was the vacuum tube, similar in materials to a glass light bulb or fuse. In fact, there was a variety of different tubes, and while they often stored just one bit apiece, there were also Williams tubes, which used cathode rays as part of a configuration for storing many bits on a single tube. Other devices were also tried over the decades, including acoustic wave pulses traveling through a liquid such as mercury (delay-line memory), metal tubes, and cores with multiple holes.

Even when toroid cores had effectively displaced all of these for read/write random access memory (RAM), there were also a number of different read-only memory (ROM) configurations for storing programs and microcode that never needed to be changed, ranging from frames that used the presence or absence of a core to indicate a value of one or zero, to special cards with punched holes or printed circuits that remained static.

Balancing Compatibility and Change

Interestingly, you could write programs and create data using core memory and computer architecture in the 1960s that could still be used on today’s mainframes. But, while IBM has substantially kept its commitment to compatibility across models and generations of the mainframe, some important things have changed, too.

From a memory perspective, easily one of the most important advances beyond the underlying technology for storing data is the amount available at one time for the CPU to use for instructions and data.

As anyone who has been working on mainframes for decades will tell you, we used to have to fit programs and data into very little memory, which involved all kinds of schemes for moving things back and forth between primary and secondary storage. That’s one of the reasons virtual memory and other forms of virtualization were created on the mainframe in the 1960s.

When IBM designed the original System/360, there was theoretical room for up to 16 megabytes of immediately addressable memory (i.e. the memory that the CPU could directly read from and write to). That required three bytes, or 24 bits, for the binary number that specified the unique location of every byte in the mainframe’s memory. Given a maximum physical memory size of one megabyte for the Model 75, that seemed like a lot of elbow room.

By 1983, that elbow room had been more than used up, and IBM extended the architecture (XA) to 31 bits, reserving the 32nd bit of that four-byte address as an indicator of whether a given memory address was exclusively in the first 16 megabytes or could be found anywhere in the two gigabytes addressable by XA. The dividing line between the first 16 megabytes and the rest of the XA memory range became known as “the line.”
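
To make that geometry a little more concrete, here’s a minimal, hypothetical sketch in Python (not actual z/Architecture or assembler code) of how a four-byte address word can carry a 31-bit address plus a one-bit mode indicator, and where “the line” at 16 megabytes falls:

```python
# Minimal illustrative sketch: a 32-bit address word holding a 31-bit address
# plus a one-bit addressing-mode indicator. Not real z/Architecture code.

THE_LINE = 1 << 24          # 16 MB: top of the original 24-bit address space

def describe(address_word: int) -> str:
    """Treat the high-order bit as the mode flag and the low 31 bits as the address."""
    mode_31 = bool(address_word & 0x80000000)   # the "32nd bit" mentioned above
    address = address_word & 0x7FFFFFFF         # the remaining 31 bits
    side = "below the line" if address < THE_LINE else "above the line"
    return f"{'31' if mode_31 else '24'}-bit mode, address {address:#010x}, {side}"

print(describe(0x00FFE000))   # 24-bit mode, just under 16 MB: below the line
print(describe(0x81000000))   # 31-bit mode, at 16 MB and beyond: above the line
```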

Over the years that followed, IBM created other memory-stretching innovations such as hiperspaces, which made additional parallel 31-bit memory blocks available to running address spaces. Since the turn of the millennium, IBM has extended the mainframe’s full address range to 64 bits, or 16 exabytes. And the dividing line between the top of the 31-bit range and the rest of that gloriously vast memory elbow room? “The bar!” That makes me chuckle as I’m reminded of the receptions at SHARE, where you have to go through the line to get to the bar.
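
And to put those three address widths side by side, a little more illustrative arithmetic, purely for scale:

```python
# Address-space sizes implied by each addressing width discussed above.
for bits, era in [(24, "System/360 (capped by 'the line')"),
                  (31, "370-XA (capped by 'the bar')"),
                  (64, "z/Architecture")]:
    print(f"{bits:2d}-bit addressing: {2 ** bits:,} bytes  [{era}]")
# 24-bit:             16,777,216 bytes (16 MB)
# 31-bit:          2,147,483,648 bytes (2 GB)
# 64-bit: 18,446,744,073,709,551,616 bytes (16 EB)
```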

Of course, this article is too brief to contain more than a few words about all of these bits and bytes, without even beginning to address concepts such as storage keys and little- and big-endian values. But, I’ve allocated space for one terminating tidbit: Have you ever noticed that the characters in the IBM logo are each made with eight shaded bars? That’s kind of like three eight-bit characters, for a total of 24 bits… I guess I’ll draw the line there. Ah, the memories.
