Hinting, perhaps, at what we can expect tomorrow, May 19, Geoff Buss of IBM UK Ltd held forth on system-managed storage at the recent UK Computer Measurement Group conference in Brighton. In many ways, it was an apology for things past, with Buss acknowledging that while IBM knows what users need, it has neglected to meet many of those needs over the past nine years. Not least of these is VSE, but Buss dismissed that system, saying "I don't address that at all, and neither does IBM actually". Quite. Instead, he focused on MVS and VM, admitting that the DFSMS/VM system-managed storage facility has a long way to go before it catches up with MVS DFSMS, which is itself far from perfect. MVS DFSMS provides for disaster back-up and recovery at the application level. However, this depends on the use of tapes and their transportation to off-site stores, and applications are not linked to the SMS storage management subsystem, which means that they are not system-managed. Further, local back-up is a problem with high-availability data, primarily because of limited opportunities to lock the data set or file. Tape devices pose another problem.

Glaring omissions

Like optical storage and disks, they ought to be system-managed, but they are not part of that environment, even though they are part of the storage hierarchy and may be DFSMS-owned and managed. As for DFSMS/VM, Buss says that it has started modestly, providing an interactive interface for the storage administrator. It has a data mover, essentially intended as a migration tool for moving 3380-based mini-disks to 3390 disks. Nonetheless, two glaring omissions are back-up and recovery and space management functions. It provides functions at the disk volume level, not at file level, and does not use the pooling concept for physical storage volumes like MVS SMS storage groups.

Workstation Data Save Facility/VM, WDSF/VM, provides VM-based back-up and archive server functions for distributed data residing on AIX, MS-DOS, OS/2, SunOS and MacOS workstations. But WDSF/VM is not policy-driven and has no space management functions. Further, it has neither a storage group concept nor MVS-based equivalent server functions.

So, given that IBM recognises these shortcomings, where is it planning to take users? Buss says that availability, particularly back-up and recovery at both local and disaster levels, is to be addressed. It is a high-priority item for which IBM is to deliver function. Back-up should enable data to be copied with no, or an absolute minimum of, disruption. This would be accomplished by locking the file momentarily, a matter of milliseconds, while a logical copy of the data is taken. Once completed, the data set would be released and the physical back-up could be created independently of the primary data set's availability. This function needs to be a managed service, and the storage administrator should have a means of defining it to DFSMS. Secondly, all back-ups associated with a data set, or group of sets, should be managed together. Since local back-ups of data sets are managed by DFSMS, it would be sensible to do the same for disaster back-ups.
The administrator needs to be able to specify the same management attributes for an aggregate back-up group as found in the management definitions for a data set, and this would enable disaster back-ups to be managed by DFSMS. Also, the ability to send disaster back-ups direct to the off-site store is a priority, replacing the truck method, which ought to be consigned to the dustbin of history. With the introduction of Escon channels and increasing speed of data transmission, it is becoming more feasible to send back-up data direct to the storage devices. Nonetheless, channel lengths need to be extended beyond five miles and data transmission speeds have to increase drastically before the theory becomes reality.
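The momentary-lock back-up Buss describes can be sketched in modern terms as follows. This is an illustrative outline only, not IBM's implementation; the function names, the callables and the use of a threading lock are all invented for the sketch.

```python
import threading

def snapshot_backup(take_logical_copy, write_physical_backup, lock):
    """Sketch of the scheme Buss describes: hold the lock only long
    enough to take a logical copy, then write the physical back-up
    with the primary data set already released."""
    with lock:  # the milliseconds-long lock on the data set
        logical_copy = take_logical_copy()  # e.g. a snapshot of the block map
    # the primary data set is available again from this point on;
    # the physical back-up proceeds independently of it
    return write_physical_backup(logical_copy)

# toy demonstration with an in-memory "data set"
lock = threading.Lock()
data_set = {"blocks": [1, 2, 3]}
backup = snapshot_backup(lambda: dict(data_set), lambda copy: copy, lock)
```

The point of the structure is that only the logical copy happens under the lock; the slow physical copy is pushed outside it, which is what keeps the disruption to milliseconds.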

By Janice McGinn

Since the storage hierarchy contains cache, disks, optical and tape devices, some of them have to be system-managed. Tape, despite being fundamental to the hierarchy, is a major exception. Similarly, optical devices ought to be system-managed, especially when optical storage contains system-managed object data. Why not, asks Buss, enable DFSMS to allocate a data set directly to tape and then manage both the tape data set and the volume on which it resides? The administrator would need to be able to specify whether a data set is to be allocated to tape or disk; this could be achieved by specifying the allocation criteria, with DFSMS selecting tape or disk according to the data set's performance criteria. Finally, to round off the idea of system-managed tape, the administrator needs to be able to group tape volumes logically. This means creating pools of tape volumes based on either usage criteria or data characteristics. This is available for system-managed optical devices through storage group definition and should be easy enough to extend to tape.

IBM has to focus on VM storage and get it up to scratch. The two areas that are crying out for attention are space and availability management. In order for VM functions to be consistent with MVS, the administrator must be able to define space and availability management policies via externally specified attributes, and the management functions must be consistent with those in MVS. When CMS files become inactive and need to be stored for later use, there has to be a way of storing the data in a more economical form. DFSMS/MVS manages the situation with a hierarchy of storage levels, associated device types and a data compaction facility. DFSMS/VM, says Buss, should have a similar function and form of storage hierarchy to support file migration. He suggests that the migration function for inactive files could be provided by a product like VMBackup-MS, but stresses that regardless of how the function is provided, DFSMS/VM will be an integral component of the system management facility, and in order that DFSMS/MVS and DFSMS/VM are consistent, tape storage must be included in the storage hierarchy.
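The allocation idea Buss floats, with DFSMS choosing tape or disk from a data set's performance criteria and drawing tape volumes from logical pools, might look something like the following toy sketch. Every name, pool and threshold here is invented for illustration; it is not how DFSMS actually decides.

```python
# Hypothetical tape pools, grouped by usage characteristics as Buss
# suggests storage groups could be extended to cover tape.
TAPE_POOLS = {
    "archive": ["TAP001", "TAP002"],  # large, rarely referenced data
    "batch":   ["TAP100", "TAP101"],  # smaller sequential work files
}

def allocate(max_access_ms, size_mb):
    """Pick a device class from the data set's performance criteria.
    Returns (device, volume); volume is None for disk allocations."""
    if max_access_ms < 100:
        # interactive response requirements rule out tape
        return ("disk", None)
    pool = "archive" if size_mb > 1000 else "batch"
    return ("tape", TAPE_POOLS[pool][0])  # first free volume in the pool
```

The administrator specifies only the criteria (`max_access_ms`, `size_mb` here); the system, not the user, names the device and volume, which is the essence of system-managed tape.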
As the world moves away from a monolithic mainframe strategy, IBM is having to address the requirements of distributed environments. However, there is no DFSMS product or set of products that provides a consistent approach to system-managed storage for data on workstations and networks. Back-up, migration and archive criteria for distributed data should be specified externally through management class definitions. But the nature and organisation of distributed data dictates a more flexible policy, one that can accommodate various levels of function. It should be possible to back up data at the individual file level, sub-directory level, directory level, and aggregate or file group level.
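The layered policy described above amounts to resolving the most specific back-up rule that covers a given file. A minimal sketch, assuming invented path prefixes and policy names, with `*` standing in for the aggregate or file-group default:

```python
def resolve_backup_policy(path, policies):
    """Return the most specific policy covering `path`: a file entry
    beats its sub-directory, which beats its directory, which beats
    the aggregate default keyed as '*'."""
    best, best_len = policies.get("*"), -1
    for prefix, policy in policies.items():
        # longer matching prefix means a more specific level
        if prefix != "*" and path.startswith(prefix) and len(prefix) > best_len:
            best, best_len = policy, len(prefix)
    return best

# hypothetical management class definitions at each level
policies = {
    "*": "weekly",                        # aggregate / file group
    "/home/": "daily",                    # directory
    "/home/db/": "hourly",                # sub-directory
    "/home/db/ledger.dat": "continuous",  # individual file
}
```

Longest-prefix matching is one simple way to get the file-beats-directory precedence the article calls for; a real management class scheme would carry richer attributes than a single label.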

The mainframe’s role

The mainframe’s role in a distributed environment should be one of management. Inactive data could be sent to the host and stored on DFSMS primary storage group volumes, where it could be managed like any other inactive MVS data set. Alternatively, the data could be sent to the VM mainframe to be stored as a CMS file and managed by DFSMS/VM.

Having outlined the future requirements of DFSMS, Buss went on to discuss the general direction, saying that the main theme is consistency across all environments in SAA and Unix. SMS compatibility with DB2 would be a boon, and it’s a wonderful paradox that IBM’s storage guys share San Jose facilities with the DB2 developers. Also, IBM can blame no one but itself for the proliferation of conflicting operating environments. A user interface for interacting with the storage management products and functions must be a graphical one. It must be workstation-based and windows-driven, and ought to present the same look-and-feel regardless of environment. Next, the entire storage hierarchy should be system-managed and controlled, evolving from today’s manual control by the administrator. Product functions should support the administrator through high-level, task-oriented external interfaces. Facilities ought to be common to both MVS and VM, and tools should be integrated and co-operative – all of which will reduce staff numbers and overheads, although costs will surely rise as demand for more complex storage devices increases.