We’re all well aware that what goes on on a mainframe is much more complicated today than it was 20 years ago. Back then, company employees would typically use CICS or IMS DC to access data stored in IMS DB or DB2.
Nowadays, the users of our data can often also be customers; that data is probably a lot more complex and, as well as being stored on our mainframe, it’s often distributed. Something as seemingly simple as a single transaction can involve CICS, IMS, DB2, MQ, or any combination of them. From the monitoring point of view it may look like four transactions, but to the user it is only one, because it performs one task and achieves one result for them. And that’s OK. The real problem arises when something goes wrong. How do you begin to find where the problem is?
This is the problem that many users face because they have separate monitoring software for each subsystem. When you look at the flow of logic for a transaction, it could start in CICS, access DB2, go back to CICS, involve IMS, go back to CICS, access some VSAM files and finally end up in CICS again. How do you even make a start at finding what’s causing that transaction to run slowly?
One solution is to employ staff who are experts on all the different subsystems; they’ll probably have a feel for what’s causing the issue you’re experiencing. The problem is that most sites don’t have that kind of expert, and where sites do have expertise, it tends to be confined to specific subject areas. These days there seems to be little cross-training of staff, because it’s expensive and difficult, and people seem to lack enthusiasm for it.
The solution is a piece of software that conserves these individual specialists’ time (after all, it’s a valuable and limited resource) and ensures they are all analyzing the correct data from the same time period while working together to investigate the bigger picture. This kind of collaboration reduces time to resolution, with more of the specialists’ time focused on actually solving the problem. It’s also useful, when an outage or, more frequently, a slow-running transaction occurs, for the first techie on the scene to be able to confirm that there is an issue and start to respond to the situation.
This is where IBM’s Transaction Analysis Workbench for System z comes in. It’s a tool for problem-solving across subsystems, helping these first responders as well as subject-matter experts in specific areas look at the big picture and drill down to find the details they need. In fact, the software can provide a lifecycle view of transaction activity across subsystems and so change the way problem resolution is performed, by ensuring everyone is looking at the same transactional data. It uses System Management Facilities (SMF), trace, and log records to follow transaction flow. And it allows problems to be assigned to the correct group of specialists, giving management greater confidence that any problem is routed to the people best placed to resolve it.
Transaction Analysis Workbench merges logs from multiple subsystems to present a consolidated, cross-subsystem view of a transaction’s lifecycle. Its Interactive System Productivity Facility (ISPF) dialog browser provides a consistent interface to all log types from all subsystems: finding, navigating, filtering, and formatting. When you know how to work with one log type, you know how to work with them all. The Workbench provides automated file selection for IMS logs, DB2 logs and (soon) SMF. There’s specific additional support for combined CICS-DBCTL reporting, with other combinations (CICS-DB2 and IMS-DB2) coming soon. And there are various SMF record-type-specific batch reports aimed at transaction analysis.
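The core idea of a consolidated lifecycle view can be illustrated with a toy sketch. This is not the product’s actual record format or correlation logic; the field names and the notion of a single `txn_id` correlation token are assumptions made purely for illustration. The principle is: merge per-subsystem logs by timestamp into one chronological stream, then group events by transaction.

```python
import heapq
from dataclasses import dataclass

# Hypothetical record shape: real SMF/CICS/IMS log records are far richer.
@dataclass
class LogRecord:
    timestamp: float   # time of the event, e.g. seconds since some epoch
    subsystem: str     # "CICS", "DB2", "IMS", ...
    txn_id: str        # assumed correlation token linking related events
    event: str         # what happened

def merged_lifecycle(*subsystem_logs):
    """Merge already-sorted per-subsystem logs into one chronological
    stream, then group events into per-transaction lifecycles."""
    lifecycles = {}
    for rec in heapq.merge(*subsystem_logs, key=lambda r: r.timestamp):
        lifecycles.setdefault(rec.txn_id, []).append(rec)
    return lifecycles

# One transaction that starts in CICS, calls DB2, and returns to CICS:
cics = [LogRecord(1.0, "CICS", "T1", "attach"),
        LogRecord(4.0, "CICS", "T1", "detach")]
db2  = [LogRecord(2.0, "DB2",  "T1", "sql-begin"),
        LogRecord(3.0, "DB2",  "T1", "sql-end")]

life = merged_lifecycle(cics, db2)
# The four events now appear in timestamp order across both subsystems:
print([(r.subsystem, r.event) for r in life["T1"]])
```

Viewed this way, a transaction that “looks like four transactions” in four separate monitors becomes one ordered timeline, which is the view the Workbench’s log merging is aiming for.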
Transaction Analysis Workbench can also help application development teams in many ways. Because application releases must perform well when deployed, application teams can perform validation testing during roll-out, with the evaluation carried out at the transaction level. Typically, an application team may not know how to obtain the performance data they need; sometimes they aren’t even permitted access to it, and they may have limited or no knowledge of the tools that use instrumentation data.
Workbench helps by automating the collection of instrumentation data, so the application development teams don’t have to acquire those skills. It automates the reporting of validation testing, including reporting through CICS PA and/or IMS PA in addition to its own reports. It analyzes instrumentation data for performance exceptions and makes it easy to compare validation-test results against expectations. It provides lifecycle views of transaction exceptions, identifying which part of the transaction is causing problems. And it saves the results from each validation-testing run and facilitates collaboration with system programmers and/or DBAs for help with diagnosing transaction exceptions.
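As a rough illustration of this kind of exception analysis (the field names, phase breakdown, and threshold below are invented for the example, not taken from the product), a validation run might be checked against an expected maximum response time, with the slowest lifecycle phase flagged as the likely culprit:

```python
def find_exceptions(transactions, expected_max_response):
    """Flag transactions whose total response time (in milliseconds)
    exceeds the expected maximum, and report which lifecycle phase
    contributed most to the elapsed time."""
    exceptions = []
    for txn in transactions:
        total = sum(txn["phases"].values())
        if total > expected_max_response:
            worst = max(txn["phases"], key=txn["phases"].get)
            exceptions.append((txn["id"], total, worst))
    return exceptions

# Two transactions from a hypothetical validation run (times in ms):
runs = [
    {"id": "TXN001", "phases": {"CICS": 20, "DB2": 10}},
    {"id": "TXN002", "phases": {"CICS": 30, "DB2": 450}},  # slow SQL
]

print(find_exceptions(runs, expected_max_response=100))
# -> [('TXN002', 480, 'DB2')]
```

The point of the sketch is the shape of the answer: not just “TXN002 is slow,” but “TXN002 is slow and the DB2 phase is where the time went,” which is what lets the problem be handed straight to the right specialist.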
It seems like quite a useful piece of software for organizations to have.
Trevor Eddolls is CEO at iTech-Ed Ltd, an IT consultancy. A popular speaker and blogger, he currently chairs the Virtual IMS and Virtual CICS user groups. He’s editorial director for the Arcati Mainframe Yearbook, and for many years edited Xephon’s