High availability (HA) solutions are perhaps most commonly used to ensure that data is accessible during failures and outages. Savvy IT teams implement this software in preparation for complete disaster scenarios—hoping that they never have to use it—but they also realize that HA can help avoid downtime and provide business continuity in many other scenarios.
1. Availability After Disaster
The main focus of high availability projects for many organizations is to ensure application and data availability if and when the production server becomes inaccessible, even after a significant failure. HA solutions accomplish this by replicating changing data across great distances, keeping the production data and the replicated backup copy separate and safe, ideally on a server in a different geographic region.
High availability solutions for Power Systems technology have been in use for over 30 years and continue to improve as the speed of the Power server increases along with the speed of communications. In IBM i environments, teams use this software to replicate all libraries and all IFS directories that are critical to providing application resiliency on a role swap. Role swaps should be performed at least once a year, and possibly once a quarter, depending on your business continuity goals and risk of disaster.
2. Data Propagation
Data propagation describes the movement of data from one or more data sources to one or more local access databases according to propagation rules. It feeds data warehouses and makes data more accessible to users.
Some organizations take advantage of the remote journaling feature within high availability solutions to transmit data in real time across the network. As data changes, it is immediately sent and then applied to a consolidated database.
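As a sketch of what this looks like at the command level, IBM i remote journaling is configured with commands like the following. The journal, library, and relational database (RDB) directory entry names here are hypothetical placeholders, and in practice your HA solution typically manages these details for you.

```
/* Associate a remote journal on the target system (RDB entry TGTSYS) */
/* with the local journal APPLIB/APPJRN. All names are examples only. */
ADDRMTJRN RDB(TGTSYS) SRCJRN(APPLIB/APPJRN) +
          TGTJRN(APPLIB/APPJRN) RMTJRNTYPE(*TYPE2)

/* Activate real-time transmission of journal entries to the target */
CHGRMTJRN RDB(TGTSYS) SRCJRN(APPLIB/APPJRN) +
          TGTJRN(APPLIB/APPJRN) JRNSTATE(*ACTIVE) DELIVERY(*ASYNC)
```

Once the remote journal is active, every change journaled on the source is streamed to the target, where the HA software's apply process replays it against the consolidated database.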
HA solutions can also keep objects or data files in sync across a network of IBM i servers. For example, if you need to update a rate table or inventory file across a network of IBM i servers, an HA solution could take one or more objects and propagate them across multiple servers.
You could also do the reverse: take several smaller tables on smaller IBM i servers and use the HA solution to transmit copies of the data to a consolidated server. For example, a bank with hundreds of branches, each with its own IBM i server, might use many-to-one data replication within its HA solution to consolidate multiple P05 servers into a larger P30 server to make reporting easier.
3. Business Intelligence
Business intelligence (BI) can often cause performance issues on production servers. End users build queries over data and choke the system with poorly written queries. Through no fault of their own, end users have a tool but don't really understand how to query the data efficiently; it's just a fact of life for many in the BI world.
In an ideal world, however, it would be best to offload these queries to another system to lessen the burden on the production system and keep business applications performing optimally while ensuring that the data remains current. Believe it or not, this is possible.
For years, many organizations have used high availability solutions to replicate the data and objects from production to a backup server and run their queries there. Of course, this doesn't prevent poorly written queries from degrading performance on the backup server, so it's still prudent to keep an eye on things with a performance and application monitoring tool. Still, your HA solution will keep the data fresh and confine performance-gobbling queries to the backup server, so you can do business and service your customers even while someone is running a killer query.
4. Backups on Secondary
Many organizations have found that high availability solutions offer new options for backups. Data and objects, including the IFS, can be replicated to a target server, which you would back up periodically. For a truly safe and clean backup, you would stop the replication process; the alternative is to use the save-while-active process.
Save-while-active (SAVACT) was established in the IBM i operating system well over a decade ago to help eliminate downtime due to backups. It is an IBM i save command attribute that can be used to back up data and objects while they are in use. With remote journaling, data is constantly being applied to the target server as it is brought over from the source, so your objects are changing while you execute a backup.
With the save-while-active checkpoint option, you would end remote journaling, execute your save until you get a clean checkpoint, then restart remote journaling and continue with your save operation. This often reduces downtime to a five-minute window, even though the actual backup may run for a few hours.
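As an illustrative sketch (the library, device, and message queue names are placeholders), a save-while-active backup of an application library might look like this:

```
/* Save library APPLIB while objects are in use. *SYNCLIB waits for a */
/* single synchronized checkpoint across the whole library; the       */
/* checkpoint-reached message goes to QSYSOPR so you know when it is  */
/* safe to restart replication.                                       */
SAVLIB LIB(APPLIB) DEV(TAP01) SAVACT(*SYNCLIB) +
       SAVACTWAIT(120) SAVACTMSGQ(QSYSOPR)
```

In the checkpoint flow described above, you would quiesce remote journaling first, start the save, and restart journaling as soon as the checkpoint message arrives; the bulk of the save then runs while replication is already flowing again.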
By using an HA solution to run these backups on a secondary system, you no longer have any backup-related downtime on your production servers and negligible to no downtime on the secondary system.
5. Hardware or Software Maintenance
Performing maintenance on your production server usually results in some downtime, whether you're adding resources, upgrading major software application levels, or replacing a failing component, such as one of your mirrored drives.
To avoid downtime for hardware and software maintenance, many organizations use their high availability solution to do a planned role swap from the source to the target server. They then shut down the original source server to replace the hardware or upgrade the software, bring it back up, and restart journaling. The original source becomes the new target server until they do another role swap.
The key here is that you must be able to successfully execute a role swap, which shouldn’t be a problem since your HA software should also make it possible to test role swaps regularly.
6. System Migrations
Things can get tricky when the time comes to move data from a server sporting an older operating system to one with IBM i 7.1 or higher. You don’t want any downtime for your business-critical applications running on the system.
Some organizations solve this problem by, you guessed it, using their high availability solution. By installing the software on the production or source system and then replicating the data in real time across servers, the data is fresh and you’re ready to recompile the application code on the newer OS. Once that is set, all that remains is to role swap from the old OS over to the new OS and your system upgrade is complete.
By using an HA solution for system upgrades, you can move the data to a newer system level without going through multiple steps and you can do it at your own pace—just leave this HA setup as is until you’re comfortable with the health of the application at the new OS level.
7. Data Conversion
Another factor in system upgrades is converting data from an older operating system to a newer system while the data is active on production. Here again, organizations have found that high availability software is very good at moving the raw data and objects from IBM i to IBM i.
An HA solution allows you to define new rules that move data as it changes on the old production server to the new target server in another data center, until you're ready to run production from the new data center. Once you complete the move to the new data center, you simply turn off replication. At that point, you might decide to create new rules that carry the newly converted data back to your full-time target server so that replication is running again for this new workload.
Taking a data replication approach during these daunting data migration and data movement challenges has proven to be an asset at many organizations, but it does shake up the traditional thought process. Data replication allows you to move the data over time, unlike the save-and-restore approach, where you'd have to shut down for a weekend to transport the data across the country and then do a restore.
8. Regulatory Compliance
Compliance regulations are not driven by technology, but many industries are required to have a proper backup and business continuity plan in place for IT emergencies. Whether it's SOX, PCI, or GLBA, these regulations all require you to prove the effectiveness of any HA/DR solution you may have in place.
Organizations should be able to turn to their high availability solution to provide audit, setup, and history reports (i.e., dashboards) that help them pass those pesky audits with ease. Most HA solution rules are database-driven and a simple query over the data should provide any auditor with proper information about what you’re replicating.
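For example, if your HA product stores its replication rules in a database file (the file name below is purely hypothetical), a quick query from an IBM i command line could produce an auditor-friendly list of what is being replicated:

```
/* Display the contents of a hypothetical replication-rules file.   */
/* RUNQRY with QRY(*NONE) runs a default query over the named file, */
/* showing every rule record for the auditor.                       */
RUNQRY QRY(*NONE) QRYFILE((QUSRSYS/HARULES))
```

The same output could be printed or exported for the audit binder; the point is that the rules live in a queryable file, not in someone's head.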
Additionally, you must be able to prove that you have tested your role swaps—just another reason to practice them! Ideally, your HA solution is able to track this activity automatically or put information messages into the system log on the server. If not, you may just have to keep track of this manually in a document. No matter which way is easier for you, auditors need proof that you are doing what you say you are.
These are just a few of the most common uses for high availability solutions and data replication beyond disaster recovery. Some companies even use HA solutions to maintain real-time test data on a development partition, which is one reason to use software-based replication instead of, or alongside, hardware-based replication: it is more flexible and can address many IT challenges.
24/7 business demands 24/7 system and application availability. When you’re ready to avoid downtime—be it planned or unplanned—Robot HA is the fastest, easiest, most affordable way to establish high availability at your organization.