It used to be that running IBM i workloads in the cloud was unthinkable, especially for production applications. Not anymore! The annual IBM i Marketplace Survey Results show that 23 percent of IBM i shops are running their applications fully in the cloud or in a hybrid cloud environment.
With many big cloud players now offering to run Power Systems workloads alongside more mainstream platforms—in addition to the many managed service providers (MSPs) also offering their own cloud services—this figure will undoubtedly increase in the coming years.
Moving applications to the cloud requires planning, but (if done right) allows you to offer a better user experience, scale with flexibility, and manage costs. It also provides you with the opportunity to innovate and modernize faster than before.
But moving workloads to the cloud is much more than running your IBM i application in somebody else’s data center. If this is how you are approaching it, then you are likely to miss out on some of the fantastic opportunities that cloud computing provides.
Which IBM i Workloads Are the Best Candidates for Cloud?
Workloads with variable demands
Public cloud services were born out of the need to handle peaks in business demand. The travel industry typically peaks in January or February, while retailers see significant spikes in the run-up to Christmas or Black Friday sales. Without cloud, these businesses would have to provision enough capacity for their peaks, even though the additional headroom isn't needed for the majority of the year.
Applications that don’t add to revenue
Applications like email and payroll are critical, but unless your company provides email or payroll services, there is no added value to running these applications on-prem. No matter how efficiently you run these applications, it will have a negligible impact on your bottom line. For many, these types of applications are well-suited to run in the cloud.
New or unknown workloads
What about applications or workloads where you are not 100 percent sure of the amount of infrastructure and/or resources you require to deploy them? Put them in the cloud where you can flex up or down at the drop of a hat. And remember, they don’t need to live there forever. Think of this as a try-before-you-buy model.
Applications that are accessed globally
Many cloud providers allow you to deploy workloads to different regions around the world. Different services are available in different regions, and in some cases the price of these services can differ depending on which region you choose. If an application is used all over the world, it makes sense to deploy it in the cloud. For applications used in a specific geographical area, deploy them in the region closest to where most of the users are located to help minimize response-time latency.
Application data with fewest regulatory restrictions
Do you have any applications with little or no personal (customer-related) data? These applications are great candidates for migrating to the cloud, as you are likely to encounter the least regulatory resistance with them. Personal data can be subject to quite a few regulatory controls, including data residency, data sovereignty, and data localization. A summary of these controls can be found below:
- Data residency relates to where a business states that its data is stored. This residency is often governed by a company's desire to take advantage of more financially advantageous tax jurisdictions.
- Data sovereignty is somewhat more expansive than data residency. It states that the data is subject to the laws and governance within the country where it is collected.
- Data localization, often thought to be the strictest of the three, states that data records must remain within the geographical borders that they were created in.
Keep these three controls in mind when choosing the geographical region where you want to deploy your application.
Cost Considerations for IBM i Workloads in the Cloud
Cloud computing has a reputation for being comparatively cheap, as there is no capital expenditure and no physical data center to maintain. But buyer beware! Cloud can become expensive if you over-specify your requirements and do not closely monitor cloud usage.
Cloud providers charge by the resources and services used, multiplied by the time you use them for. Some charge by the hour, others by the minute, and some by the second. In addition, there are charges for the IBM i operating system and licensed programs that are governed by IBM processor groups.
Some providers also let you choose, at different price points, between an enterprise or scale-out server, between solid-state disks (SSDs) and hard disk drives (HDDs), and what type of protection the disks should have (RAID5 and mirroring are common options).
Here are some decisions you’ll have to make:
- Whether to opt for a shared or dedicated server
- Number of cores
- Amount of memory required
- Amount of disk space required
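The decisions above translate directly into a monthly bill, since providers charge by resources multiplied by time. As a minimal sketch of that arithmetic, here is a hypothetical estimate; the rates below are placeholders invented for illustration, not any provider's actual pricing, and a real estimate would use your provider's published rates for your processor group, disk type, and region.

```python
# Rough monthly cost sketch for a cloud IBM i partition.
# All rates are hypothetical placeholders -- substitute your provider's pricing.

HOURS_PER_MONTH = 730  # average hours in a month


def estimate_monthly_cost(cores, memory_gb, disk_gb,
                          core_rate=2.00,    # $/core-hour (hypothetical)
                          memory_rate=0.01,  # $/GB-hour (hypothetical)
                          disk_rate=0.10):   # $/GB-month (hypothetical)
    """Return an estimated monthly cost in dollars."""
    compute = cores * core_rate * HOURS_PER_MONTH
    memory = memory_gb * memory_rate * HOURS_PER_MONTH
    storage = disk_gb * disk_rate
    return compute + memory + storage


# Example: 2 cores, 64 GB of memory, 500 GB of SSD storage
print(round(estimate_monthly_cost(2, 64, 500), 2))
```

Even a back-of-the-envelope calculation like this makes it obvious why over-specifying cores or disk is costly: every unit is billed for every hour, whether the workload uses it or not.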
In these days of virtualization, uncapped processors, and enterprise pools, it's not uncommon for system administrators not to know exactly how much resource their workloads actually consume. It can be hard to understand what a typical day looks like, or when and where your peaks and troughs are. That's why it's imperative that you size your systems appropriately before moving to the cloud.
How Do I Size IBM i Workloads for Cloud?
There is a popular saying: what is measured is managed. A great way to start measuring your IBM i workloads is to use IBM Collection Services. Collection Services are free with the IBM i operating system and take snapshots of your system every 15 minutes by default. These snapshots, which can be adjusted to run anywhere from every 15 seconds to once every hour, provide visibility into system resources such as CPU, disk, memory, and disk arm utilization. These are just some of the key metrics that are required for sizing your workload as you prepare for migration to the cloud.
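As a sketch of the kind of analysis this interval data enables, suppose you have exported a day of CPU-utilization percentages from those snapshots (the export step itself is not shown, and the sample values below are invented for illustration). Summarizing the average, the peak, and a high percentile helps you size against sustained demand rather than a single spike:

```python
# Sketch: summarize interval CPU-utilization samples for cloud sizing.
# 'samples' stands in for utilization percentages taken from Collection
# Services interval data; the values below are hypothetical.

def sizing_summary(samples):
    """Return average, peak, and 95th-percentile utilization."""
    ordered = sorted(samples)
    avg = sum(ordered) / len(ordered)
    peak = ordered[-1]
    # 95th percentile via the nearest-rank method
    p95 = ordered[max(0, int(round(0.95 * len(ordered))) - 1)]
    return {"avg": avg, "peak": peak, "p95": p95}


# Hypothetical day of 15-minute samples (96 values): mostly quiet, one busy spell
samples = [20.0] * 90 + [45.0, 50.0, 55.0, 60.0, 70.0, 85.0]
summary = sizing_summary(samples)
print(summary["avg"], summary["p95"], summary["peak"])
```

Sizing to the 95th percentile rather than the absolute peak avoids paying for headroom that is used only a handful of intervals a day; whether that trade-off is acceptable depends on how tolerant the workload is of occasional constraint.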
Some words of warning, though. Collection Services data (found in QMPGDATA or QPFRDATA) can become quite large. It can also be nearly impossible to interpret without help.
Performance Navigator software from HelpSystems uses Collection Services data, so it has zero overhead. It also provides a graphical interface that can help you analyze the hundreds of different metrics that Collection Services collects. Performance Navigator allows you to visualize the data in relation to your workload, whether that is a single VM (partition), server, or the entire enterprise running on different hardware.
Perhaps best of all, the Performance Navigator tool can give you a preview of what your workload would look like on new (cloud) hardware. It also allows you to artificially grow specific workloads to see when a given threshold would be reached, or to predict when you might need to request additional funding for those times when you need extra cloud resources. Pretty cool, right?
The historical performance data on your servers is a treasure trove of information regarding actual usage over time. But you must access and interpret this data to get to the bottom of performance issues or inform future hardware investments. Performance Navigator can help! Request a live demonstration and we’ll show you how.