
8 Things Your Capacity Management Tool Should Do

Solaris, Windows, UNIX, Linux, AIX
Posted:
February 15, 2019


So, you need a new capacity management tool. Maybe your tool doesn’t support your entire IT environment. Maybe it’s hard to use or maintain. How do you know what to look for, especially as technologies advance, both for capacity management tools and IT environments? We had our capacity management expert identify 8 things the right tool should do.

1. Help you prioritize your efforts

Due to the increased complexity of modern IT environments and the broad scope of your responsibilities, knowing where to start is one of the biggest challenges. You need a tool that can identify or predict issues and rank them in order of urgency and importance. Is there anything in need of immediate remediation? Where will you encounter problems in the next 6 months?

2. Save you time through automation

IT environments are only getting bigger and more complex, so you need to automate as much as possible to keep up. Automating repetitive tasks, using universally applicable analytical methods, and producing trustworthy results that are simple to interpret will allow you to focus more on advanced tasks with greater business value.

3. Be easy to use

In recent years, with the rise of digitalization and DevOps, many businesses have started to transfer capacity management responsibilities to product or development teams, hoping to embed capacity management in the business and improve agility. This group of users needs fast, easy-to-follow recommendations to keep other initiatives rolling while doing capacity management. Simple, intuitive user interfaces and workflows are key to making this successful.

4. Provide hybrid IT support across the whole solution

Every innovation in IT adds to the complexity. Very few (if any) new trends cause everything to change. They just add to the stack of technologies and architectures available and increase the complexity. Examples from recent years are the adoption of public cloud and container technology. Your capacity management solution needs to be flexible and modular enough to incorporate those new technologies and allow you to manage them alongside existing technologies. What you don’t need is a new tool for each new technology.

5. Support scenario-based planning

Your business wants to grow... without outgrowing its capacity. And supporting future needs is one of the basic responsibilities of a capacity management tool. A capacity management tool should help you understand system behavior and model different circumstances such as:

  • Seasonal spikes in demand
  • Moving applications to the cloud
  • Workload consolidation

Capacity modeling will help you identify risk and solve the capacity problem before it even happens.
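
To make the idea concrete, here is a minimal sketch of the kind of what-if modeling involved, using the classic M/M/1 queueing approximation (an illustrative example, not any vendor's actual algorithm). It shows how response time degrades non-linearly as utilization climbs, which makes the impact of a seasonal demand spike easy to quantify:

```python
# Minimal what-if capacity model using the M/M/1 queueing approximation.
# Illustrative sketch only; the numbers below are hypothetical.

def response_time(service_time_s: float, utilization: float) -> float:
    """Predicted response time for an M/M/1 queue: R = S / (1 - U)."""
    if not 0 <= utilization < 1:
        raise ValueError("utilization must be in [0, 1)")
    return service_time_s / (1.0 - utilization)

# Scenario: a service with 50 ms average service time, currently 60% busy.
baseline = response_time(0.050, 0.60)   # 0.050 / 0.40 = 0.125 s
# Seasonal spike: demand grows 40%, pushing utilization to 0.60 * 1.4 = 0.84.
peak = response_time(0.050, 0.84)       # 0.050 / 0.16 = 0.3125 s

print(f"baseline response time: {baseline:.3f} s")
print(f"peak-season response time: {peak:.3f} s ({peak / baseline:.1f}x slower)")
```

Note that a 40% rise in demand produces a 2.5x rise in response time; this non-linearity is why utilization alone is a poor predictor of performance.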

6. Offer versatile data integration

You likely have a monitoring solution in place already, and while most capacity management tools provide their own data collection, it may be easier to use data from your existing tools. This way, end users can access data from a variety of sources in one place. Whether the data stays in the original location or is extracted and stored in your tool (or both), your tool should be able to integrate data for your use.

7. Be scalable

Ongoing trends in platform technologies (virtualization, containers, etc.) as well as application frameworks (microservices, cloud native, etc.), where more and smaller components form a service, will impact how you do capacity management. Here's how:

  • More objects to track and record data about
  • The impermanent nature of those objects forces you to sample data more often to capture all significant events
  • More objects mean more relationships to track – the amount of metadata required will increase in relation to the fragmentation

Even though the actual scope of physical assets or business applications may not grow, new technologies may show that your tool isn't up to the task. You should make sure that your solution can scale to meet those new requirements.

8. Have lightweight data collectors

Capacity Management is a data-driven discipline. The requirements on the data—in terms of granularity and scope—depend on the type of analysis you’re looking to do. When investigating a performance issue, you want real-time instrumentation with a wide range of metrics. For planning, identifying long-term trends and seasonality patterns requires access to aggregated historical data. You need a lot of data, but you also need the data collectors to be efficient, with a minimal footprint.
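
As a simple illustration of that granularity trade-off, the sketch below (plain Python, with a hypothetical data shape) rolls fine-grained samples up into coarser averages of the kind used for long-term trend analysis:

```python
# Illustrative sketch: aggregate fine-grained (timestamp, value) samples
# into coarser time buckets for trend analysis. The data shape is
# hypothetical, not any specific tool's format.
from collections import defaultdict

def aggregate(samples: list[tuple[int, float]], bucket_s: int) -> dict[int, float]:
    """Average (epoch_seconds, value) samples into buckets of bucket_s seconds."""
    buckets: dict[int, list[float]] = defaultdict(list)
    for ts, value in samples:
        buckets[ts - ts % bucket_s].append(value)   # floor to bucket start
    return {start: sum(v) / len(v) for start, v in buckets.items()}

# Two minutes of one-second CPU samples rolled up to one-minute averages:
samples = [(t, 50.0 + (t % 2)) for t in range(0, 120)]  # alternates 50/51
minute_avgs = aggregate(samples, 60)
print(minute_avgs)  # {0: 50.5, 60: 50.5}
```

Aggregation like this keeps long-horizon trends cheap to store and query, while the raw one-second data remains available for short-term performance investigations.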

How does Vityl Capacity Management stack up?

  • It’s designed to provide several useful insights at a glance, fully automated:
    • Health: Are there any services or infrastructure components that have had a performance or capacity issue during the last 24 hours?
    • Risk: Are there any services or infrastructure components that are in danger of failing to meet service levels within the next six months?
    • Efficiency: Are there any infrastructure components that are not used efficiently, where resources could be reclaimed or repurposed?
  • Provides simple and intuitive user interfaces, guiding you with best practices from our 20+ years of experience.
  • It uses queueing theory to calculate processing times and delays, so you can predict the behavior of a system under varying loads and prescribe a solution. It focuses on performance impact rather than component utilization, yielding much more reliable results.
  • Allows you to manage, analyze and plan for new technologies and frameworks alongside your legacy platforms using a consistent set of user interfaces and capabilities.
  • Allows you to integrate data from existing third-party monitoring solutions. You can also integrate with data sources that bring context that allows you to better align your work with the business (service definitions, cost, forecasted demand etc.). You can integrate with third-party data sources in two different ways:
    • Data Federation – the data stays in the original location and is brought in on demand. This removes the need to duplicate data in a second datastore. The federation is completely seamless and happens behind the scenes at runtime. We offer a wide set of integration mechanisms, including SQL queries and access via APIs.
    • Data Centralization – the data is extracted then stored in the Vityl Capacity Management database. Once there, you have full control of aggregation, retention etc. You might choose this if you have transient data sources that don't provide enough history or if you want to have full control of the data.
    • If you have multiple third-party data sources, you can combine these two methods to compose your ultimate capacity management database.
  • Built on horizontally scalable components, providing a back-end that can manage the complexity and volumes of current and future IT environments.
  • Includes native data collectors that provide all the important performance metrics for common platforms (Windows, Linux, VMware, containers, etc.) and public cloud services (AWS, Azure, etc.). The collectors are very efficient, with a minimal footprint, and support collection down to one-second granularity for performance metrics and detailed process data. The collected data is saved in a scalable back-end datastore, supporting real-time access to thousands of systems with built-in mechanisms for automatic aggregation and retention of the data.
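
The federation and centralization styles described above can be sketched as interchangeable sources behind one interface. This is a hypothetical illustration (the class and function names are invented, not Vityl's actual API):

```python
# Hypothetical sketch of the two integration styles: federated reads
# fetch on demand at runtime; centralized reads use extracted local data.
from abc import ABC, abstractmethod
from typing import Callable

class MetricSource(ABC):
    @abstractmethod
    def read(self, metric: str) -> list[float]: ...

class FederatedSource(MetricSource):
    """Data stays in the third-party tool; no local copy is kept."""
    def __init__(self, query_fn: Callable[[str], list[float]]):
        self.query_fn = query_fn          # e.g. wraps an SQL query or API call
    def read(self, metric: str) -> list[float]:
        return self.query_fn(metric)      # fetched on demand, at runtime

class CentralizedSource(MetricSource):
    """Data is extracted once and stored locally, giving full control
    over aggregation and retention."""
    def __init__(self, extract_fn: Callable[[str], list[float]]):
        self.extract_fn = extract_fn
        self.store: dict[str, list[float]] = {}
    def ingest(self, metric: str) -> None:
        self.store[metric] = self.extract_fn(metric)   # one-time extraction
    def read(self, metric: str) -> list[float]:
        return self.store[metric]

# Both styles can coexist behind the same interface:
fed = FederatedSource(lambda m: [1.0, 2.0, 3.0])
cen = CentralizedSource(lambda m: [4.0, 5.0])
cen.ingest("cpu")
print(fed.read("cpu"), cen.read("cpu"))  # [1.0, 2.0, 3.0] [4.0, 5.0]
```

Putting both behind one interface is what lets multiple third-party sources be combined into a single capacity management database.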
Ready to explore a new capacity management tool?
Talk to our team about getting started with Vityl Capacity Management.