
DB2 and Disk Storage Virtualization: Part 1

Why disk storage virtualization is so important

Traditionally, storage provisioning on the mainframe has been a slow and complex process. Apart from procurement and hardware installation, there are host activities for the Input/Output Definition File (IODF), SMS storage groups, and Automatic Class Selection (ACS) routines. IBM created zDAC (z/OS Discovery and Auto-Configuration) to help the system programmer match the IODF to the storage controller, but that is only one aspect of the provisioning process. There are further considerations when DB2 System Point-in-Time Recovery is in use, or when Symmetrix Remote Data Facility (SRDF) or Peer-to-Peer Remote Copy (PPRC) is used for long-distance, array-based replication. The entire process is also mired in change control, which, while necessary, can be tedious and time-consuming.

The inevitable result of such a painful and extended process is that end users frequently request more storage than is immediately required so they can reduce the number of times they go to the well. Sometimes this extra space is simply never needed; worse, it may be allocated and never used, which is a hidden form of waste. For example, if a DB2 linear dataset is created with an 8 GB PRIQTY, all 8 GB count as used according to normal capacity planning and measurement tools, even if DB2 writes to only a fraction of the allocated dataset.
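To make that hidden waste concrete, here is a minimal back-of-the-envelope sketch. The 8 GB primary allocation comes from the example above; the assumption that only about 10 percent of it is ever written is purely hypothetical for illustration:

    # Hypothetical illustration: what a capacity tool reports as "used"
    # versus the data DB2 has actually written into the linear dataset.
    GB = 1024 ** 3

    allocated_bytes = 8 * GB     # PRIQTY allocation from the example above
    written_bytes = 0.8 * GB     # assumption: only ~10% of the dataset is written

    hidden_waste = allocated_bytes - written_bytes
    print(f"Reported as used: {allocated_bytes / GB:.1f} GB")
    print(f"Actually written: {written_bytes / GB:.1f} GB")
    print(f"Hidden waste:     {hidden_waste / GB:.1f} GB "
          f"({hidden_waste / allocated_bytes:.0%} of the allocation)")

Multiply that by the number of over-provisioned datasets in a large DB2 subsystem and the wasted capacity adds up quickly.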

After the storage has been fully provisioned, there are still challenges that can lead to poor application performance. Some Fibre Channel drives can be as large as 600 GB. Just think how many MOD-9 (3390 Model 9) volumes can be carved out of one of those!

Storage administration

The storage administrator must consider a number of questions:

  • How should the very large disks that are in the controller be managed?
  • What is the best way to meet SLAs for the application?
  • How can important production applications be separated from, for example, test and QA?
  • What are the performance requirements (IOPS or MB/sec)?
  • What are the availability requirements (RAID protection, remote replication)?

One of the key factors that makes this activity more complex is the recent reduction in the IOPS density of Fibre Channel drives. Spinning disks have become much, much larger but have not gotten significantly faster, so they have been delivering IOPS at approximately the same rate over the past few years. Figure 1 illustrates this trend.

 
Figure 1: As spinning disks have become larger, I/O capability per GB has decreased at an alarming rate. (Source: Seagate.com)

IOPS density is the I/O capability of a drive (measured in I/Os per second) divided by the size of the drive in gigabytes. It is clear from Figure 1 that four 146 GB Fibre Channel drives can deliver roughly four times the I/Os of a single 600 GB Fibre Channel drive, while providing about the same total capacity. The problem, however, is that 146 GB drives (and smaller) will soon disappear as they are superseded by higher-density alternatives. This has been the trend over the last 15 years: smaller drives are replaced by higher-capacity ones, forcing you to deploy DB2 systems on these very large actuators and magnifying the possibility of contention between workloads sharing the same physical drives.
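
As a quick sanity check on those numbers, here is a minimal sketch of the IOPS density arithmetic. The figure of roughly 150 IOPS per spindle is an assumption chosen to be consistent with the ~0.25 IOPS/GB that Figure 1 shows for a 600 GB drive; it is not taken from any vendor specification:

    # Minimal sketch: IOPS density = drive IOPS / drive capacity in GB.
    # 150 IOPS per spindle is an assumed figure, roughly consistent with
    # the ~0.25 IOPS/GB shown for a 600 GB drive in Figure 1.
    DRIVE_IOPS = 150.0

    def iops_density(capacity_gb: float, drive_iops: float = DRIVE_IOPS) -> float:
        """I/Os per second available for each gigabyte stored on the drive."""
        return drive_iops / capacity_gb

    print(f"600 GB drive:       {iops_density(600):.2f} IOPS/GB")   # ~0.25
    print(f"146 GB drive:       {iops_density(146):.2f} IOPS/GB")   # ~1.03
    # Four 146 GB spindles hold about the same data as one 600 GB spindle,
    # but deliver four times the aggregate I/O capability.
    print(f"4 x 146 GB drives:  {4 * DRIVE_IOPS:.0f} IOPS over {4 * 146} GB")
    print(f"1 x 600 GB drive:   {DRIVE_IOPS:.0f} IOPS over 600 GB")

The absolute IOPS figure matters less than the ratio: spreading the same capacity across four spindles quadruples the I/O capability available to the data.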

Figure 1 shows that current 600 GB Fibre Channel drives are capable of about 0.25 IOPS/GB, far less than the slow 9 GB drive represented by the first bar in the chart. This reduction in IOPS density is an insidious trend that cannot continue if applications are to achieve their SLAs. A quantum change is required.

This article is continued in DB2 and Disk Storage Virtualization: Part 2.
