
Ensuring Data Storage Longevity

10/2/2018 3:11:04 PM

If you recognize the number series "2401, 2415, 2420, 2440, 3410, 3420, 3422, 3440" you've been in mainframe IT long enough to know how important reliable data storage is, both short- and long-term. Those numbers are successive generations of 9-track (round) tape, introduced in 1964 with System/360 and evolving over decades. And they don't even include tape devices that came before (2321 Data Cell Drive, 7-track tape), after (3490/3590, tape libraries, virtual tape libraries) or in the middle (3850 Mass Storage System).

Backup and Archival Data

Data comes in many varieties, related to why it exists and how it's stored: active, warehouse, transactional, backup, archival and more. I'll skip over the first three forms and focus on backup data (briefly) and archival data (primarily).

Backup data exists solely to restore data to a recent image after human error, equipment failure or external catastrophe. Archival data may be needed for legal or industry compliance, historical recordkeeping, merger and acquisition due diligence, unanticipated queries/searches, or reconstructing operational environments. Backup data can be stored piecemeal as long as it can be completely restored; archival data is holistic, a complete and consistent image. For a detailed explanation of why multiple backup copies (even cloud storage) don't constitute archived data, see this Storage Switzerland blog.

Preserving Data Across Media Changes

One cosmic-scale tool for avoiding loss of Internet data is the Wayback Machine. It holds more than 20 petabytes, but organizations and individuals can’t rely on the Wayback Machine alone to preserve data. Additional steps need to be taken.

The challenge then becomes preserving data across media changes (physical media form factors, logical record formats, recording technologies, etc.). The industry has passed through many now-obsolete technologies from which data could, at best, be recovered with difficulty. Some perhaps shouldn't have been used for corporate data (floppy disks, Zip disks). Others (e.g., PC-mounted DAT tape drives) may have been adequate at the time but are orphans now. Much of this material is essentially lost, though it may be close at hand.

In addition to the progression of tape device numbers, there's also been an alphabet soup of software data formats in the mainframe world: DDR, PETAPE, IEHMOVE and vendor-specific or (worst of all) locally developed variants. Perhaps more threatening is the fact that magnetic tape degrades in several ways: the binder holding the magnetic coating to the base substrate deteriorates, and the coating flakes off. The magnetic field can also weaken to the point of unreadability, or bleed between tape layers, causing data corruption. Niche vendors provide open-system hardware to emulate some now-gone disk/tape drives, but these are hardly practical for creating archives. And while technology museums like the Computer History Museum and Living Computers Museum and Labs cherish and sometimes even operate obsolete equipment, they don't make it available for emergencies.

The Reality of Data Loss, Human Error and Equipment Failure

The mainframe's decades-long reputation as a robust platform has earned it a mission-critical role in data-intensive industries. So the need for immediate data access over a long (and perhaps unlimited) period of time can be imposed by internal requirements, industry practices and external compliance standards such as SOX, Basel III, Solvency II, anti-money laundering regulations, and Medicare.

Trade press and mainstream news love stories of disastrous data loss. Simple incidents can happen when programmers or administrators store data volumes under desks, at home or in car trunks, where they quietly become unreadable. Stan King, CEO of Falls Church-based IBM Business Partner ITC, described trying to resurrect data from the last series of backups taken by a defunct company. "The tapes had been stored in a cool, dry environment, likely a disaster storage facility, and looked in great shape," he noted, and the good news was that they had the right hardware: a 9-track 6250 BPI tape drive. The first tape read cleanly and was restored to its former geometry on an emulated disk drive. The same couldn't be said for the second tape, and in the end about half of the 44 tapes failed. Tape cleaning and retensioning might have improved results, but that was economically impractical.

Gerard Nicol's article describes another gloomy but likely common scenario of wrangling backup tapes offsite.

And while "Star Trek: The Lost Files" sounds like a TV episode, it’s actually an article that describes the extreme measures needed to recover unique historical material related to the original classic program.

Besides obsolete or failed equipment and media, logical problems can occur, too: loss of a catalog, data index or encryption keys, human error, or outright mischief. One of the most interesting programs I ever wrote used PL/I multitasking to recover critical but corrupted VSAM data from failing drives that were due for imminent replacement. Of course, the corruption had spread to all backup images without being noticed. Separately archiving and retaining pristine data would have avoided a fire drill.

Migrating on-site data to hosted/cloud environments requires deciding what to upload and what to lose. Data can be left behind on many media, from 8mm film to 3420 reel tapes found in old warehouses. Even moving data centers can cause data loss when no devices remain to read old media. And as the Nicol article notes, errors can occur in shipping or processing physical media. Verification works better than trust, as the sketch below illustrates.
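
To make "verification works better than trust" concrete, a checksum manifest can be generated before media leaves the building and re-checked on arrival. This is a minimal sketch, not any vendor's product; the manifest format, function names and paths are illustrative assumptions:

```python
# Hypothetical sketch: verify files after a physical transfer by comparing
# SHA-256 digests against a manifest generated before shipping.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute a file's SHA-256 digest, reading in 1 MB chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(root: Path, manifest: Path) -> None:
    """Record a digest for every file under root (run before shipping)."""
    digests = {str(p.relative_to(root)): sha256_of(p)
               for p in root.rglob("*") if p.is_file()}
    manifest.write_text(json.dumps(digests, indent=2))

def verify_manifest(root: Path, manifest: Path) -> list[str]:
    """Return files that are missing or altered (run on receipt)."""
    expected = json.loads(manifest.read_text())
    return [name for name, digest in expected.items()
            if not (root / name).is_file() or sha256_of(root / name) != digest]
```

An empty list from verify_manifest means every file arrived intact; anything else names exactly what must be re-shipped or re-created.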

Data Center Evolution Planning

And on-site storage isn't disaster-proof. Even with internal archiving, adopting new technology can render archived material unreadable. Data center evolution planning must include preserving access to archives.

Optical storage—CD/DVD-style but scaled to industrial capacity—has great potential but care must be taken in media choice, storage and handling along with realistic expectations for long-term stability. Discs still don't match stone tablets for preservation across millennia.

When creating archives, security should be balanced with accessibility. Requiring pervasive strong encryption for data—within and beyond one's data center—mandates key management over time and across staff changes.
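
To make that key-management burden concrete, here's a hedged sketch using Python's cryptography package; the in-memory key_vault dictionary stands in for a real key-management service, and the file names and metadata layout are illustrative assumptions. The point is that each archive records which key encrypted it, so a restore years later at least knows which key to request:

```python
# Illustrative sketch only: archive encryption that stores a key identifier
# alongside the ciphertext, so future restores know which key to request
# from whatever key-management system is in use.
import json
import uuid
from pathlib import Path
from cryptography.fernet import Fernet  # pip install cryptography

def encrypt_archive(data: bytes, out_dir: Path, key_vault: dict) -> str:
    """Encrypt an archive and record the key ID next to the ciphertext.
    key_vault stands in for a real key-management service (an assumption)."""
    key_id = str(uuid.uuid4())
    key = Fernet.generate_key()
    key_vault[key_id] = key          # a real KMS would escrow this durably
    (out_dir / "archive.bin").write_bytes(Fernet(key).encrypt(data))
    (out_dir / "archive.meta.json").write_text(
        json.dumps({"key_id": key_id, "cipher": "Fernet"}))
    return key_id

def decrypt_archive(out_dir: Path, key_vault: dict) -> bytes:
    """Read the metadata, look up the key by ID, and decrypt."""
    meta = json.loads((out_dir / "archive.meta.json").read_text())
    key = key_vault[meta["key_id"]]  # fails loudly if the key was lost
    return Fernet(key).decrypt((out_dir / "archive.bin").read_bytes())
```

If the key with that ID has been lost through staff turnover or a decommissioned key server, the archive is effectively shredded, which is exactly the long-term risk described above.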

One simplification has been long-term DASD format stability. Instead of incompatible 2311/2314/3330/3350/3370 changes, volumes have emulated 3390 disks for quite some time, even though the underlying geometry has evolved to higher density and larger capacities. Virtualization and emulation solve some forward-migration problems, but there's not yet an "eternal" storage technology, no matter what vendors promise. Nobody is yet thinking of data storage on time scales like the 10,000-year clock, also known as the Clock of the Long Now.

Personal data and music have migrated from cassette tapes through floppy disks, PC-mounted tapes, 8-track tapes, and varying-capacity CDs and DVDs. Good luck getting a cassette player in a car; even CD changers are disappearing in favor of playing MP3s. Services and devices can convert home movies to DVD, but that doesn't scale to data center migration requirements. It's bad enough when personal material is left behind, but that shouldn't happen in a corporate setting.

Beware proprietary archival formats. Today's vendor of choice may fail or be bought, with their products eventually abandoned. You should periodically ensure that archives can be read and restored as needed—partially, selectively or fully. Consider technologies and services that support migration to newer storage, and can be verified as sound and backward compatible.
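
One way to act on that advice, assuming archives are kept in an open container format such as tar (an assumption for illustration, not a recommendation from any vendor): periodically open every archive, list its contents, and extract a sample member to prove a partial, selective restore still works. A minimal sketch:

```python
# Hedged sketch: a periodic "can we still read this?" pass over tar-format
# archives. Listing members catches container corruption; extracting one
# member exercises an actual restore path.
import tarfile
from pathlib import Path

def audit_archive(path: Path, scratch: Path) -> bool:
    """Return True if the archive lists cleanly and a sample file extracts."""
    try:
        with tarfile.open(path) as tf:
            members = tf.getmembers()             # full directory scan
            sample = next((m for m in members if m.isfile()), None)
            if sample is not None:
                tf.extract(sample, path=scratch)  # partial, selective restore
        return True
    except (tarfile.TarError, OSError):
        return False

def audit_all(archive_dir: Path, scratch: Path) -> list[Path]:
    """Return the archives that failed the read/restore check."""
    return [p for p in sorted(archive_dir.glob("*.tar"))
            if not audit_archive(p, scratch)]
```

Run on a schedule, a pass like this turns "we think the archives are fine" into a list of specific archives needing attention before the hardware or software that reads them disappears.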

Backup Isn’t Archive

One example of an organization recognizing and addressing the need for large-scale permanent access to material is the Virginia Foundation for the Humanities, which recognized in 2014 that creating content today requires behaving and thinking like a library: investing in machine and human resources to make what was created yesterday, and what is produced today, available tomorrow and for years to come.

After all this, one simple data storage rule applies: Backup isn’t archive. It's wonderful having robust near-term data storage recoverable from the worst possible disaster, error or malfeasance. But that's not the same as being able to recover and process your data in a month, year or decade.

Gabe Goldberg has developed, worked with and written about technology for decades. He can be contacted at Destination.z@gabegold.com.
