So, it’s time to set the record straight! Here are some common excuses we’ve heard for delaying high availability, and ways that you can get around them.
1. High Availability Is Too Expensive
There is a common misconception circulating in the IBM i marketplace that high availability solutions are so expensive that only the biggest companies can afford the technology. IBM recognized that HA is just as critical for smaller companies and didn’t want expensive Power Systems hardware to be a roadblock, so over a decade ago it created the Capacity BackUp (CBU) server. This server ships with only one processor activated and lets you temporarily activate additional processors on the fly, on a rental basis, in the event of a real disaster.
As for the HA software itself, some solutions are more affordable than others, allowing you to realize a greater return on investment. With a software-based solution, there’s no need to oversize your production or hot backup systems to get adequate performance. There’s also no need to order excessive disk storage or memory. Find a solution that is frugal with your resources, designed to ensure the most efficient use of hardware and communications infrastructure while placing the lowest burden on your team.
2. High Availability Is Too Hard to Implement
HA technology can be easy to implement if you select the right vendor. Beware of solutions that are over-engineered or saddled with legacy user interfaces.
Setup should take hours, not days. If the HA technology you’re evaluating uses software-based replication, you should be able to define your library and IFS replication rules within one working day.
However, like any good rumor, there is some truth to this one. Role swaps are the most complicated task in any HA scenario. For a role swap, it’s important to understand which applications need to be ended, and how to end them properly, as you switch servers. It’s best to perform a test-while-active role swap before doing a full swap.
As with testing software or anything else, you should expect challenges and exceptions that you may have to work around. Plan for them; expecting a flawless first attempt is unrealistic.
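To make the ordering concrete, a role swap can be scripted as an ordered checklist that stops at the first failed step. This is only an illustrative sketch: the step names and lambda placeholders below stand in for whatever vendor or CL commands your environment actually uses, and are assumptions rather than a real API.

```python
# Hypothetical role-swap checklist runner. Each step is a (name, callable)
# pair; the callable returns True on success. The ordering is the point:
# end applications cleanly, confirm the backup is caught up, then switch.

def run_role_swap(steps):
    """Run ordered swap steps, stopping at the first failure.

    Returns (completed_step_names, failed_step_name_or_None) so you can
    see exactly where a test-while-active rehearsal stalled.
    """
    completed = []
    for name, step in steps:
        if not step():
            return completed, name  # report where the swap stalled
        completed.append(name)
    return completed, None

# Placeholder steps for a test-while-active rehearsal (all assumed successful).
steps = [
    ("end application jobs", lambda: True),
    ("verify replication caught up", lambda: True),
    ("switch roles", lambda: True),
    ("restart applications on backup", lambda: True),
]

completed, failed_at = run_role_swap(steps)
```

Running the rehearsal with a deliberately failing step shows the value of the checklist: you learn which application or verification stalled, which is exactly the kind of exception the preceding paragraph says to plan for.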
3. High Availability Takes Too Much Time to Maintain
There are two parts to this myth. The first is that you can completely ignore your HA solution once it’s been set up. The second is that monitoring the solution requires a manual check every hour to make sure it’s still working. Since it’s such an important piece of your HA/DR strategy, you do want assurance that it’s working properly, but neither extreme is necessary.
Instead, seek out a solution that has self-monitoring and self-healing processes built into the rules. If these rules do not come preconfigured, find a vendor who also offers monitoring software that can help you build rules to automate the monitoring of your HA software.
Your HA software provider should also be willing to help with that integration; you cannot afford to dedicate a person to monitoring this software manually. The goal should be: set it up, monitor it automatically, and manage by exception.
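The "manage by exception" idea can be sketched in a few lines: check status automatically, stay silent when everything is healthy, and alert only when something crosses a threshold. The `check_replication_status` function below is a placeholder for whatever status interface your HA vendor exposes; it is an assumption for illustration, not a real API.

```python
# Minimal "manage by exception" monitoring sketch for an HA replication
# process. Everything vendor-specific is stubbed out.

def check_replication_status():
    # Placeholder: in practice this would query the HA software
    # (via its CLI, API, or monitoring hooks) for lag and error counts.
    return {"active": True, "errors": 0, "lag_seconds": 3}

def find_exceptions(status, max_lag_seconds=60):
    """Return a list of problems worth alerting on; empty means healthy."""
    exceptions = []
    if not status["active"]:
        exceptions.append("replication is not running")
    if status["errors"] > 0:
        exceptions.append(f"{status['errors']} replication error(s)")
    if status["lag_seconds"] > max_lag_seconds:
        exceptions.append(f"lag of {status['lag_seconds']}s exceeds threshold")
    return exceptions

if __name__ == "__main__":
    problems = find_exceptions(check_replication_status())
    if problems:
        print("ALERT:", "; ".join(problems))  # page someone only on exception
    # Healthy: stay silent. No hourly manual check required.
```

Scheduled from a job scheduler, a check like this replaces the hourly manual inspection the myth assumes: humans get involved only when `find_exceptions` returns something.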
4. High Availability Is Too Inflexible
The myth here is that HA forces an all-or-nothing approach to data replication. That isn’t entirely true even for hardware-based replication solutions like PowerHA, and the most flexibility is found in solutions that use software-based replication.
With PowerHA, replication actually is all or nothing for data in an independent auxiliary storage pool (IASP). However, most applications also have correlating objects in SYSBAS that still need to be replicated even after PowerHA is in place.
Software-based replication is more granular: you can pick and choose what you replicate, including SYSBAS objects. It can be used alone or alongside hardware-based solutions, giving you data replication tailored to your environment any way you slice it.
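The pick-and-choose idea can be illustrated with ordered include/exclude rules matched against object paths. This is only a sketch of the concept, not any vendor’s rule syntax; the rule patterns, the first-match-wins policy, and the default of "do not replicate" are all assumptions for illustration.

```python
# Illustrative sketch of granular, rule-based replication selection:
# each object path is tested against ordered include/exclude patterns,
# so you replicate only what you choose, SYSBAS objects included.
from fnmatch import fnmatch

# Hypothetical rules: first match wins; anything unmatched is not replicated.
RULES = [
    ("exclude", "/QSYS.LIB/TEMP*.LIB/*"),     # skip scratch/work libraries
    ("include", "/QSYS.LIB/PRODDATA.LIB/*"),  # a production library in SYSBAS
    ("include", "/home/app/*"),               # an IFS application directory
]

def should_replicate(path):
    """Decide whether an object path is replicated under the rule list."""
    for action, pattern in RULES:
        if fnmatch(path, pattern):
            return action == "include"
    return False  # default: leave it out of the replication scope
```

Under these sample rules, `/QSYS.LIB/PRODDATA.LIB/ORDERS.FILE` would be replicated while `/QSYS.LIB/TEMPWORK.LIB/SCRATCH.FILE` would not, which is exactly the granularity an all-or-nothing approach cannot offer.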