Why You Need to Monitor More than Just Your Application's Performance
Pure performance measurement tells you little about how efficiently an application runs, especially with respect to its resource utilization.
Many companies, quite sensibly, pay close attention to the performance of the applications behind their service-level agreements. After all, a company’s ability to reliably meet its SLAs is a primary reason customers do business with it in the first place. Ironically, though, an overreliance on performance monitoring can lead companies down a path toward decreased performance.
The reason is that an application’s performance, its ability to keep response times below a certain threshold, tells only half of the story. Seemingly fast applications often appear that way only because they consume a disproportionately large share of resources. In IT, appearances run more than skin-deep: they can mask expensive, fundamental inefficiencies. Without comprehensive monitoring and capacity management, it’s difficult to assess the true cost of an application.
Getting Back to Basics
As companies develop and release changes to their applications, often alongside moves to virtualized or cloud-based environments, their primary goal is to keep operations running smoothly. Fortunately, measuring performance is relatively straightforward: if your application’s response times stay under 1.5 seconds, you’re meeting your SLAs and satisfying customers.
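As a rough illustration, an SLA check of this kind reduces to a simple threshold comparison. The measured response time below is a stub for the sake of the sketch; in practice it would come from a load balancer log, an APM tool, or a timer such as curl’s %{time_total}:

```shell
# Minimal sketch of a response-time SLA check against a 1.5 s threshold.
# "elapsed" is a stubbed measurement; a real pipeline would read it from
# monitoring data rather than hard-coding it here.
threshold=1.5
elapsed=0.9   # seconds for one sampled request (placeholder value)

awk -v e="$elapsed" -v t="$threshold" \
    'BEGIN { if (e + 0 < t + 0) print "SLA met"; else print "SLA breached" }'
# prints "SLA met" for the stubbed 0.9 s measurement
```

The comparison runs in awk rather than the shell because most shells cannot compare fractional numbers natively.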
On the other side of the equation, calculating the overall cost of the inevitable changes in resource utilization (CPU, memory, disk, and so on) is much more difficult. Many organizations see the value in doing so in principle, but with solid performance numbers and happy customers, there’s little incentive to look under the hood. Without that insight, however, the only way left to improve performance is to pay for more capacity.
That’s an expensive mentality for any company. In elastic, variable-cost environments, the performance of low-efficiency applications is often artificially inflated. In other words, an application can appear healthy because you’ve absorbed the cost of boosting its speed with unnecessary server resources, resources that often kick in automatically and invisibly.
At the same time, those costs are often shrugged off: “We’re paying more than we’d like for capacity, but we certainly can’t afford to fall below the performance thresholds we’ve promised customers.” Overprovisioning comes to look like a necessity rather than a problem.
There’s No Need to Fly Blind
Yet no company would knowingly support an application that used double the server capacity its traffic requires. Where companies should be improving the application development process and rolling out updated features more regularly, they instead incur significant, unnecessary costs. Without sufficient metrics and analytical insight, companies are truly flying blind.
Of course, it’s not hard to get a rudimentary picture of capacity usage: informal testing can reveal general trends, and free utilities bundled with Linux, such as top and vmstat, surface basic metrics at the command line. But in a highly elastic, month-to-month environment, with no fixed server investments, such measurements are inadequate for determining the complex costs that applications incur.
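For example, the kind of superficial snapshot those built-in tools provide can be pulled straight from the /proc filesystem on a Linux host. This sketch assumes the standard /proc/stat and /proc/meminfo layouts (MemAvailable requires kernel 3.14 or later):

```shell
# Rough point-in-time utilization snapshot from the /proc filesystem.

# CPU: share of non-idle time since boot, from the aggregate "cpu" line.
awk '/^cpu / { idle = $5; for (i = 2; i <= 8; i++) total += $i;
               printf "cpu busy: %.1f%%\n", 100 * (total - idle) / total }' /proc/stat

# Memory: percentage of RAM not counted as available for new workloads.
awk '/^MemTotal/ { t = $2 } /^MemAvailable/ { a = $2 }
     END { printf "mem used: %.1f%%\n", 100 * (t - a) / t }' /proc/meminfo
```

Numbers like these reveal trends on a single host, but, as noted above, they say nothing about what that utilization actually costs in an elastic environment.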
By using specialized tools such as Vityl Capacity Management, companies can gain a more precise, dynamic, and holistic picture of their applications’ resource utilization. Moreover, when accurate data feeds predictive algorithms, IT professionals can anticipate how specific application changes will alter their capacity usage. And by modeling different demand scenarios, companies can effectively balance their resource needs across the entire IT infrastructure.
More broadly, the implication is that companies must not only measure resource utilization alongside performance; they must recognize that performance should never be the end goal. Rather, IT should leverage data analytics to drive efficient development, shorten the lead time for deploying new application releases, and ultimately automate the process altogether. The result is not just a high-performing application, but an agile, flexible IT infrastructure.
Effective capacity management processes are the only way to deliver the highest quality service—at the lowest possible cost. Learn how in our guide Getting Started: A Manager’s Guide to Implementing Capacity Management.