You’ve heard about IASP technology many times over the years, but you’ve been ignoring it, haven’t you? Let’s take a fresh look at IASP, how your Power System running IBM i could take advantage of the technology, and why you should care.
An independent auxiliary storage pool, or IASP, is a logical address space that maps to a set of physical disk storage where database and IFS data are placed. The term “independent” indicates that this pool of disk can be switched between partitions in a PowerHA cluster, as opposed to the system ASP, which cannot.
IASP is often deployed on external storage (SAN), which enables several types of cluster configurations, including multiple-site clusters. It is also widely deployed in smaller shops with internal disks, using IBM PowerHA geographic mirroring (geomirroring) to replicate data between two systems.
Because the application and database tables are placed into this independent volume group, jobs on the IBM i system can be set up to see that address space. The IASP address space can be included in a job through the initial ASP group attribute of a job description (*JOBD), on the Submit Job (SBMJOB) command for batch work, or with the SETASPGRP command.
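The three job-level options above might look like the following CL sketch; the IASP device name IASP01 and the library MYLIB are hypothetical placeholders, not names from this article:

```cl
/* Interactively attach the IASP namespace to the current job       */
/* (IASP01 is a hypothetical IASP device name).                     */
SETASPGRP ASPGRP(IASP01)

/* Bake the initial ASP group into a job description so every job   */
/* that uses it sees the IASP automatically.                        */
CRTJOBD JOBD(MYLIB/IASPJOBD) INLASPGRP(IASP01)

/* Submit a batch job that picks up the IASP from that job          */
/* description (SBMJOB also accepts INLASPGRP directly).            */
SBMJOB CMD(CALL PGM(MYLIB/MYPGM)) JOBD(MYLIB/IASPJOBD)
```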
When applications run, they see the SYSBAS namespace and the IASP namespace as a single environment. Note: SYSBAS and the IASP cannot contain the same library name, or attaching the IASP will fail.
The IASP appears as a device of type *ASP and must be varied on with the VRYCFG command to make it available to your partition. Typically this happens during an IPL so the IASP is attached automatically. Secondary IASPs can be associated with a primary IASP; they are all attached and become available when the primary device is varied on.
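Varying on the IASP device is a single command; again, IASP01 is a hypothetical device name:

```cl
/* Make the IASP (and any secondary IASPs grouped with it)          */
/* available to this partition. IASP01 is hypothetical.             */
VRYCFG CFGOBJ(IASP01) CFGTYPE(*DEV) STATUS(*ON)
```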
When Is IASP Important?
IASP is important on shared storage cluster technology that also features hardware replication. On other platforms, it is common for the base operating system objects to live in one address space (e.g., user names, configuration information) and application data in another.
A PowerHA role swap or failover operation requires no IPL: switch the IASP to the target system and vary it on. This is a much simpler, more automated method that has less of an impact on your operations when a PTF or OS upgrade is needed…or an unplanned event occurs.
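A planned role swap is typically driven through the Change Cluster Resource Group Primary (CHGCRGPRI) command; the cluster and CRG names below are hypothetical examples, not names from this article:

```cl
/* Planned switchover: move the primary role of the device CRG      */
/* (and with it the IASP) to the first backup node in the recovery  */
/* domain. Cluster name PRODCLU and CRG name IASPCRG are            */
/* hypothetical.                                                    */
CHGCRGPRI CLUSTER(PRODCLU) CRG(IASPCRG)
```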
Data Resiliency and Security
- Leave non-critical IASPs offline, bring online only when needed
- Encrypt the IASP where your sensitive data resides
- Meet compliance requirements for segregating data
Isolate or Consolidate
- Have multiple versions of same application available but segregated
- Have multiple applications on one partition but segregated in separate IASPs
- Consolidate applications currently running in separate servers
- Easily and quickly replicate a copy of application data for offline backup, development, or testing using FlashCopy
- Archive data or reports to lower cost external storage
Over the next couple of years, as PowerHA, external disks, and application segregation become more commonplace, IASP technology on IBM i is going to be required learning. The technology has seen a lot of movement since POWER7 and even more with POWER8, especially in alternatives to internal disks. The result will be shrinking hardware costs, reduced complexity, and increased redundancy through clustering technology.
So why, despite the benefits—and after a decade of IASP technology—are some administrators still unsure of it? Let’s see what our contact at IBM has to say about it.
Steve Finnes on IASP Technology
Steve Finnes is the Worldwide Offering Manager for PowerHA & CBU at IBM. Here, he talks about IASP and the concerns that administrators might have about migrating to that environment:
“Well, Chuck, first I should point out that the mystery associated with the IASP environment has diminished in the past few years as a lot more of our customers are now deployed with PowerHA and the understanding about shared storage clustering is getting out there among our IBM i customer base. The kinds of questions we get are: does it require application changes? How do we migrate? And so on.
“I like to draw an analogy to other operating system environments like AIX where shared storage clustering is common and the IASP equivalent is called a volume group. Thanks to the simplicity of single-level store on IBM i, we’ve got mixed OS and applications separated at the address level, but they can all end up in the same physical disk area, which is also why our customers have traditionally run everything out of the system ASP. On other platforms, they’d already be segregated by volume group.
“To set up shared storage clustering in the IBM i world, we first need to move the data out of the system ASP over into the IASP. This normally does not require application changes; it has more to do with job descriptions and library lists. I should also mention that most of the well-known commercial apps in the IBM i space are IASP enabled.
“The other part of the shared storage cluster has to do with the SYSBAS objects. We sometimes hear people asking why these objects can’t be in the IASP, too. So, we first moved the data out of SYSBAS into the IASP and now we need to keep the SYSBAS on each node in the cluster synchronized. The SYSBAS data is an attribute of a given node in the cluster and is part of the operating system environment, so it cannot go into an IASP. Each node in the cluster has its own active operating system and SYSBAS. The SYSBAS contains information specific to that OS and hardware that enable the IASP data and applications to run on the other nodes in the cluster.
“Objects in SYSBAS that are placed in the admin domain are referred to as monitored resources. Information about changes to those objects is relayed at a logical level via the admin domain to the other nodes in the cluster. There is a parallel in other operating system clusters where these cluster resources are synchronized separately from the volume group data. A simple example for IBM i customers would be user profiles. If the user profiles were inside of the IASP with the data, how would a user issue commands to vary on the IASP?
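The user-profile case Steve describes is handled by registering the profile as a monitored resource in the cluster administrative domain. A sketch, assuming a hypothetical admin domain PRODADM and user profile APPUSER:

```cl
/* Register a user profile as a monitored resource entry so changes */
/* to it are synchronized logically to the other cluster nodes.     */
/* Admin domain PRODADM and profile APPUSER are hypothetical.       */
ADDCADMRE ADMDMN(PRODADM) RESOURCE(APPUSER) RSCTYPE(*USRPRF)
```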
“The basic building block is a shared storage cluster where the LUNs can be switched between nodes in the cluster. The IASP data can also be replicated using the IBM storage replication technology Metro Mirror and Global Mirror or with internal disk and geomirroring.
“What we’ve seen is that there really are no big technical roadblocks associated with implementing IASPs on customer systems. What I recommend to our customers is to do an IASP workshop with our lab services team. They will actually set up your application environment into a PowerHA cluster as part of a hands-on workshop that lasts three or four days. You’ll come out of that workshop pretty much knowing what to do.
“This is the future and it’s here now. Get to know it, understand it, and don’t be afraid of it. We’re way beyond experimental; PowerHA is mainstream technology now and the pricing is very attractive, particularly for the small shop staying with internal disk.”
IASP and Automated Operations
From a monitoring standpoint, all IASP environments share the same SYSBAS, so there is only a single QSYSOPR message queue to monitor if you’re using Robot Console for message queue monitoring and automation. From an application automation standpoint, job schedulers such as Robot Schedule can be installed into the IASP where the data resides, or even installed in SYSBAS to run operational processes in both SYSBAS and the IASP, as long as the installed library names are different. Note: This capability was recently added to Robot Schedule.
Resources for learning and implementing this technology abound. IBM offers an IASP Enablement Workshop, PowerHA for i Clustering, and Independent Disk Pools Implementation classes in person or online. There are also sessions at the semi-annual COMMON and IBM Edge events. Seek out these opportunities and you’ll share Steve’s conclusion…IASP works!