For many administrators, time spent playing the role of an IBM i data hunter or fisherman can see them tracking problems to their source or staring for hours into icy pools of data, hoping to catch something worthwhile. Even when they reel in a big problem, there are always plenty more lurking in the shadows, evading capture. It’s an endless quest for administrators, and one that requires tools that are fit for the purpose.
Two of the most challenging issues administrators face are identifying specific, application-related problems, and knowing where and why auxiliary storage is being used. Pack away the poles and call off the hounds. We’ll show you how to overcome these issues and reduce the time you have to spend hunting and fishing on the IBM i.
Deep, proactive application monitoring is essential to staying in control of your service delivery and on top of application performance and availability. Imagine the value that moving from system monitoring into true business monitoring could bring to your organization. This can be achieved by probing every application running on the IBM i and extracting critical information. Checking a variety of application metrics in production can help you understand the status of the components within an application environment, from both a current and historical perspective. With thoughtful planning and the right set of data, proactive monitoring can help you quickly correct poor application performance or avoid it altogether.
Inevitably, some application errors will occur. At the very least, proactive monitoring gives you the ability to detect problems as they happen and, more importantly, fix them before they impact users. If problems are going to happen, it’s better that you find them before your users or customers do. Monitoring applications to detect and respond to problems before an end user is aware they exist is a common requirement, especially for revenue-generating production environments. Most administrators understand the need for application monitoring and keep an eye on system statistics such as CPU utilization, throughput, and memory usage. However, an application environment has many more moving parts than those system-level numbers reveal. Administrators who effectively anticipate problems give their business a competitive edge by understanding which metrics to capture.
Some examples of application monitoring for IBM i include monitoring for MIMIX objects in error, checking MIMIX data group status, or counting the number of expired BRMS volumes. You can also use an SQL-based monitoring tool to check for open IBM problems, the number of days since a file was last changed, the number of logical reads on a file, credit limits and P&L data, customer orders and invoices, or stock replenishment figures. Proactive monitoring should be flexible enough to let you monitor the status, count, or numeric value of any application data element.
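As a sketch of the SQL-based approach, the query below checks the number of days since files in a library were last changed, using the QSYS2.OBJECT_STATISTICS table function provided by IBM i SQL services. The library name MYLIB and the 30-day threshold are placeholders; verify the column names against the services documentation for your release.

```sql
-- Hedged sketch: flag files in MYLIB that have not changed in over
-- 30 days, via the QSYS2.OBJECT_STATISTICS table function.
-- MYLIB and the 30-day threshold are illustrative assumptions.
SELECT OBJNAME,
       OBJSIZE,
       CHANGE_TIMESTAMP,
       LAST_USED_TIMESTAMP
  FROM TABLE(QSYS2.OBJECT_STATISTICS('MYLIB', '*FILE')) AS OBJ
 WHERE CHANGE_TIMESTAMP < CURRENT TIMESTAMP - 30 DAYS
 ORDER BY CHANGE_TIMESTAMP;
```

A monitoring tool could run a query like this on a schedule and raise an alert whenever the result set is non-empty, turning a manual hunt into an automated check.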
A major benefit of application monitoring is being able to establish historical trends. This type of monitoring provides a way to gauge whether changes to the application have affected performance and, if so, how. If a fix for a previous issue results in slower response times, you can question whether the fix was properly implemented. Likewise, if new features prove significantly slower than existing ones, you can direct the development team to investigate the differences. Performance statistics also help resolve misconceptions about how an application is (or has been) performing, offsetting conclusions not based on fact. When performance data is not collected, subjective observations often lead to erroneous conclusions about application performance.
QTEMP Library Size and Growth
At first glance, objects in library QTEMP appear to be temporary. An outsider’s view of QTEMP implementation is that, for every job on the system, a unique QTEMP library exists; that you can access objects in QTEMP only from within its associated job; and that when a job ends normally, the job’s QTEMP library is deleted along with all the objects in it.
The truth of QTEMP implementation, however, is somewhat different. If your IBM i is running at a security level below 50, every QTEMP library is actually a permanent object on the system. When a job begins, the system creates a QTEMP library for the job and assigns permanent addresses for the library, and for each object in it, in the operating system’s root address system. If a job or the system ends abnormally, on the next IPL the OS uses these addresses to locate all stranded QTEMP libraries and objects and clears them from the system. So, internally, QTEMP and the objects in it are permanent.
At security level 50, IBM i no longer stores those addresses. Instead, the system treats every QTEMP library and its objects as temporary. That may sound better, but if a job or the system ends abnormally, the OS can’t automatically locate and delete the misplaced QTEMP library and its objects when you IPL. Therefore, when you implement security level 50, you need to run the Reclaim Storage (RCLSTG) command more often to recover objects that get lost and then specifically delete them. Otherwise, you could have some serious storage issues.
Monitoring the size of QTEMP libraries and the count of objects in them is one way to combat potential storage problems. These checks are invaluable for detecting situations where jobs or applications impact auxiliary storage by looping and filling up QTEMP libraries, and they can also help identify hidden disk use. However, if you’re having an issue with storage consumption, the major challenge is pinpointing where the storage is going quickly enough to resolve the problem before it becomes critical.
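The checks above can be sketched with IBM i SQL services. The first query, run from inside a job, totals the size and count of that job’s own QTEMP objects via QSYS2.OBJECT_STATISTICS; the second looks across the system for the active jobs holding the most temporary storage via the QSYS2.SYSTMPSTG catalog view. Column names are assumptions drawn from the IBM i services documentation and should be verified on your release.

```sql
-- Hedged sketch, assuming IBM i SQL services are available.

-- 1. From inside a job: object count and total size of its own QTEMP.
SELECT COUNT(*)     AS QTEMP_OBJECT_COUNT,
       SUM(OBJSIZE) AS QTEMP_TOTAL_BYTES
  FROM TABLE(QSYS2.OBJECT_STATISTICS('QTEMP', '*ALL')) AS OBJ;

-- 2. Across the system: active jobs consuming the most temporary
--    storage (column names assumed from the SYSTMPSTG view).
SELECT JOB_NAME,
       BUCKET_CURRENT_SIZE,
       BUCKET_PEAK_SIZE
  FROM QSYS2.SYSTMPSTG
 WHERE JOB_STATUS = '*ACTIVE'
 ORDER BY BUCKET_CURRENT_SIZE DESC
 FETCH FIRST 10 ROWS ONLY;
```

Scheduling the second query and alerting when a job’s current size exceeds a threshold is one way to surface a looping job before it fills auxiliary storage.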
Reducing the time you spend hunting for data requires real-time visibility and granular insight into your environment’s important applications, as well as the ability to uncover otherwise hidden use of disk resources. With these two issues resolved, imagine what you could do with all that free time!