Trends such as virtualization have squeezed a great deal of efficiency out of business hardware. As a result, server utilization and expectations are rapidly increasing. However, there may be some unexpected downsides to pushing for higher utilization. The efficiency of virtualization and cloud technology, writes New York Times blogger Quentin Hardy, is putting more pressure on computer systems and the people managing them.
The cloud has allowed for an always-connected world, and advances in its technology have made it easier for consumers and business users alike to fill up their free time with additional tasks. Hardy compared this trend to software-monitored workload balancing, which has a similar effect on hardware. As a result of these expectations, our technology is always on in an effort to squeeze out more and more utilization. According to the New York Times, the Lawrence Berkeley National Laboratory achieved data center utilization rates as high as 96.4 percent.
But is more utilization always a good thing? What other performance data should administrators pay attention to? For those thinking about increasing the utilization of their network nodes, it may be beneficial to first look at the performance data IBM i collects.
IBM i Performance Analysis
IBM i collects many relevant performance metrics that you should pay attention to, since they give you hints as to overall system utilization:
- Memory pool faulting per second
- Temporary storage
- OLTP transactions per hour
- CPU seconds per transaction
- CPU usage by workload
- Disk busy percent
- Disk I/O per second
- Elapsed time of batch jobs
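To make the list above concrete, here is a minimal Python sketch of the per-interval arithmetic behind rate and efficiency metrics like these. The counter names and values are hypothetical illustrations, not an actual IBM i Collection Services export:

```python
# Hypothetical counters from one collection interval.
# Names and values are illustrative only -- they do not come from a
# real IBM i Collection Services export.
interval_seconds = 300  # assume a 5-minute collection interval

sample = {
    "pool_faults": 45_000,      # page faults during the interval
    "transactions": 12_500,     # OLTP transactions completed
    "cpu_seconds_used": 2_100,  # CPU seconds consumed by the workload
    "disk_io_ops": 900_000,     # disk I/O operations in the interval
}

# Rate-style metrics: a counter delta divided by elapsed time.
faults_per_second = sample["pool_faults"] / interval_seconds
disk_io_per_second = sample["disk_io_ops"] / interval_seconds

# Efficiency-style metrics: resource consumed per unit of work.
cpu_seconds_per_txn = sample["cpu_seconds_used"] / sample["transactions"]

print(f"Faulting: {faults_per_second:.1f}/sec")
print(f"Disk I/O: {disk_io_per_second:.1f}/sec")
print(f"CPU per transaction: {cpu_seconds_per_txn:.3f} sec")
```

The same delta-over-interval pattern applies whichever tool actually gathers the counters.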
It is also important to be able to communicate metrics in terms that are relevant to the rest of the company. Executives, for example, are most likely interested in seeing this information presented in a dashboard. Supplying this data by workload—for instance, for your ERP, manufacturing planning, or core banking application—would be appropriate.
So, what can go wrong with high utilization when the company's hardware is being used to its maximum potential? This question reveals the problem with focusing on utilization while ignoring other important metrics.
More Is Not Always Better
Current technology trends have contributed to the idea of infinite scalability, but hardware does have its limits. When a computing resource is handling one process, it isn't available for other tasks. According to IBM, response time begins to suffer when utilization rates exceed 50 percent; it is significantly impacted beyond 80 percent utilization.
A high spike in CPU utilization—say, 90 percent—can result in noticeable delays even when disk arm utilization is low. In addition, high utilization rates for both CPU and disk can cause response times exceeding two seconds.
Taking several seconds to handle requests is likely to result in a great deal of end-user frustration, so the best option is to achieve a balance and make sure workloads do not overburden the system.
Customizing IBM i Settings
Many companies would benefit from performing historical performance analysis, which requires a longer retention period.
Similarly, administrators should consider the frequency at which this data is collected for near real-time presentation of data as well as proactive notification when performance is suffering. It's also advisable to invest in a tool to bring together data from multiple IBM i partitions and provide a more thorough understanding of the available performance data across the entire IBM i landscape.
Make the Most of IBM i Performance Analysis
Making sense of IBM i performance data provides visibility into your system, application, and hardware usage.
To bridge the gap between the hypothetical and reality, Robot Monitor, the real-time performance monitor by HelpSystems, consolidates your IBM i performance information from multiple workloads and partitions. It even collects relevant data from AIX and VIOS. The result—increased control over your system activity—allows you to more thoroughly investigate and report on specific IBM i workloads or overall system performance. In addition, node visualization tools allow you to efficiently address issues related to specific systems, lessening the impact on end users. Experiencing a slowdown? Drill down to the top CPU, I/O, or temporary storage consuming jobs with just a click.
Robot Monitor also makes it easy to fine-tune data collection settings. With an overall view of all Power systems (IBM i, AIX, VIOS), administrators can easily set data collection parameters for one or all systems. This also allows IT staff to see the status of their systems with a quick glance. For example, create a custom dashboard showing the status of all your partitions and the critical system metrics, application subsystems, and jobs running there. When a critical threshold has been set but not exceeded, the data will be color-coded green or blue. Exceeded? Perhaps yellow or red.
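The threshold-to-color mapping described above can be sketched as follows. This is a hypothetical illustration of the logic, not Robot Monitor's actual configuration, which is done through the product's interface:

```python
def status_color(value: float, warning: float, critical: float) -> str:
    """Map a metric value to a dashboard color.

    warning and critical are hypothetical threshold levels;
    below both, the metric is considered healthy (green).
    """
    if value >= critical:
        return "red"
    if value >= warning:
        return "yellow"
    return "green"

# Example: CPU utilization with a warning threshold of 80%
# and a critical threshold of 90%.
print(status_color(72.0, warning=80.0, critical=90.0))  # green
print(status_color(85.0, warning=80.0, critical=90.0))  # yellow
print(status_color(95.0, warning=80.0, critical=90.0))  # red
```

The same pattern extends naturally to per-partition dashboards: evaluate each metric against its own thresholds and color the tile accordingly.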
In summary, using these features to harvest relevant data from the core application servers can lead to greater efficiencies without the trade-off of slower response time and other negative impacts for end users. To speed your response time, see how Robot Monitor can go to work for you.