Resources

Guide

5th Annual IBM AIX Community Survey Findings

The AIX Community Survey, now in its fifth consecutive year, goes in-depth with IT teams to gain a unique perspective on how the platform is being used today and how teams envision using it in the future. Over the years, the survey's respondents have expanded to include a variety of industries, geographies, and titles within IT. More than 100 IT professionals across North America, EMEA, and APAC participated in this year's survey, and their input helps all of us understand the role of AIX with new clarity.
Guide

A Guide to IBM i Message Management

This guide teaches you how to handle IBM i message management, including the fastest and most accurate ways to monitor for messages, filter critical messages, and escalate messages to members of your team as needed.
Guide

Robot in Modern IBM i Environments

Robot systems management solutions can improve processes and enhance the return on investment for new technologies running in modern IBM i environments. Find out how.
Guide

How to Do IT Cost Optimization

Our years of experience show that organizations waste 30% of their hybrid IT spend, on average. This article identifies the five key components of a cost optimization strategy and explains how to succeed with each of them.
Guide

How to Do Capacity Planning Guide

Your business can’t afford downtime. But with ever-growing IT infrastructure, keeping applications up and running isn’t easy. Every CIO and IT manager has only so much time, money, and personnel to keep IT running. Without a solution, your IT environment risks performance bottlenecks, outages, and an overall inability to predict future needs. That’s where capacity planning comes in. Capacity...
Guide

Continuously Optimizing IT in Financial Terms

CHALLENGES: Virtualization and increasingly complex agile computing environments are creating difficulties for IT financial controllers and for IT Financial Management (ITFM). Virtualization breaks the long-standing direct, one-to-one correlation between cost-allocated physical hardware and the IT services it supports. Increasingly dynamic, multi-layered applications have made it more difficult...
Guide

DevOps Development: Keeping the Lights On

Overview: The DevOps methodology embodies two core philosophies: decreasing the lead time of software deployment and automating delivery and testing. DevOps emerged as a practical response to the agile development movement, in contrast with traditional, phase-based or “waterfall” development, which is inefficient and labor-intensive. Traditional methods should be phased out, and companies...
Guide

Dashboards Don't Work (Unless You Have a Metrics Management Strategy)

Tech has had a tremendous impact on the way today’s businesses seek continued growth and improvement. No matter what business they are in, executives everywhere are investing in technology that improves their business processes, gets them ahead of the competition, and widens their margins. Ultimately, the return on that investment is determined by how well technology supports a business’s ability to...
Guide

Health and Risk: A New Paradigm for Capacity Management

Capacity management, considered by top analyst firms to be an essential process in any large IT organization, is often so complex that it cannot be implemented effectively in today’s accelerated business world. Changing priorities, increasing complexity, and scalable cloud infrastructure have made traditional models for capacity management less relevant. A new paradigm for capacity management is...
Guide

How to Manage IT Resource Consumption at an Application Level with Vityl Capacity Management

In this guide, John Miecielica of Metavante provides a step-by-step example showing how he uses Vityl Capacity Management to analyze IT resource consumption at an application level. This key capability is especially important in today’s environments, where multiple applications run on a server or multiple servers might be required to implement...
Guide

Commercial Clusters and Scalability

In this paper, we present an introductory analysis of throughput scalability for update-intensive workloads (such as those measured by the TPC-C or TPC-W benchmarks) and show how that scaling is limited by serialization effects in the software-hardware combination that comprises any platform.
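
The paper's own model is not reproduced in this summary, but the effect it analyzes, serialized work capping cluster throughput, can be previewed with a minimal Amdahl-style sketch (the serial fraction used here is illustrative, not a figure from the paper):

```python
# Rough illustration only, not the paper's exact model: Amdahl's law with a
# serial fraction sigma shows how serialization alone caps throughput scaling.
def relative_throughput(n_nodes: int, sigma: float) -> float:
    """Throughput of n_nodes relative to one node when a fraction sigma
    of the work is serialized (cannot be spread across nodes)."""
    return n_nodes / (1.0 + sigma * (n_nodes - 1))

if __name__ == "__main__":
    # Even a 5% serial fraction limits an arbitrarily large cluster to ~20x.
    for n in (1, 4, 16, 64, 256):
        print(f"{n:4d} nodes -> {relative_throughput(n, sigma=0.05):6.2f}x")
```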
Guide

UNIX Load Average Part 1: How It Works

In this online article, Dr. Gunther digs down into the UNIX kernel to find out how load averages (the “LA Triplets”) are calculated and how appropriate they are as capacity planning metrics.
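
As a preview of the calculation the article covers: the kernel maintains three exponentially damped moving averages of the run-queue length, sampled roughly every 5 seconds, one each for 1-, 5-, and 15-minute windows. A minimal sketch of that update rule, assuming the 5-second sampling interval:

```python
import math

SAMPLE_INTERVAL = 5.0  # seconds between run-queue samples

def update_load(prev_load: float, runnable: int, window_minutes: float) -> float:
    """One exponentially damped update of a load average.

    prev_load      -- previous value of the average
    runnable       -- current number of runnable tasks (the run-queue length)
    window_minutes -- 1, 5, or 15 for the classic "LA Triplets"
    """
    w = math.exp(-SAMPLE_INTERVAL / (window_minutes * 60.0))
    return prev_load * w + runnable * (1.0 - w)

# Example: a steady run queue of 2 pulls the 1-minute average toward 2.0.
load1 = 0.0
for _ in range(120):  # 120 samples x 5 s = 10 minutes
    load1 = update_load(load1, runnable=2, window_minutes=1)
print(round(load1, 2))  # ~2.0
```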
Guide

UNIX Load Average: Reweighed

This is an unexpected Part 3 in the discussion of the UNIX load average metric, answering the question of where the weight factor comes from.
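
As a quick numeric companion to that question (assuming the 5-second sampling interval described in Part 1), the weight is the exponential decay of the previous average over one sample period, which gives a distinct constant for each window:

```python
import math

SAMPLE_INTERVAL = 5.0  # seconds between run-queue samples

# The weight factor is the fraction of the previous average that survives
# one sample period of exponential decay toward the current run-queue length.
for minutes in (1, 5, 15):
    w = math.exp(-SAMPLE_INTERVAL / (minutes * 60.0))
    print(f"{minutes:2d}-minute window: weight = {w:.4f}")
# Prints roughly 0.9200, 0.9835, and 0.9945 for the three windows.
```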