Beyond Virtualization: The Road to Software-Defined Computing
Software-defined computing is defined by Forrester Research as an integrated abstraction layer that presents computing infrastructure as pools of virtual and physical resources, allowing users to dynamically compose them into services. It is being made possible today by new data center technologies focused on reducing manual assembly and configuration. By abstracting the data center, enterprises can respond faster to spikes in business demand.
For example, one company’s mobile app generated 10x the expected transaction volume. Only software-defined computing allows applications to be deployed and refreshed fast enough to deal with such situations.
Going forward, modern retailers will be required to capture their share of mobile commerce as it grows from a $6 billion market to a projected $31 billion market by 2016. This volume will swamp their systems unless they move to a new paradigm of IT performance management, one that reduces upfront capital outlays and system operating expenses while optimizing resources for systems of engagement. However, Forrester data indicates that few firms are ready to fully embrace the software-defined vision.
Software Defined Data Center Vision
The software-defined data center (SDDC) is not a new idea. It was originated by TerraSpring in 2002 and followed by HP’s utility computing initiative a little later. The stumbling blocks have been that such systems could not encompass legacy systems, lacked standardization, and were difficult to implement. Organizations needed flexible composition, scalability, easy consumption, the ability to reuse legacy infrastructure, and encapsulation of legacy applications.
So where are we today? A large set of vendors is already involved, including VMware, Microsoft, Cisco, Oracle, Dell, IBM, and HP. The components are largely ready; what is missing is a master blueprint or framework.
The SDDC conceptual architecture has to overcome many barriers. It must be inclusive of legacy and non-virtualized hardware resources, permit interoperability among multiple vendors’ converged infrastructure systems, and provide a unified software-defined networking approach. It must also be centrally managed based on real-time data and a global view.
Forrester states that SDDC management will place a premium on new management models because it requires real-time data collection, embedded analytics, and the ability to span multiple data-source domains intelligently. Its goals include workload-centric optimization, global cost and energy efficiency, and global availability. These are very tough challenges for management vendors.
When hearing the term SDDC, many senior leaders think of the consolidated data centers of the past. Not surprisingly, they are taking a “wait and see” approach: it sounds nice, but how do we get there? Skepticism surrounds the idea that one company can do server, storage, and network equally well. Executives are willing to make do with virtualization for now, as it is perceived as far more mature and proven than the SDDC, and they are unwilling to go all the way until the storage and networking sides catch up. Buy-in on the concept will therefore require cross-silo strategy and collaboration, which will threaten established centers of power in the data center.
Booking a journey on the SDDC express isn’t without risk. A sensible first step is to send infrastructure and operations staff to “cloud school” to learn a new way of looking at things. This helps staff identify and pilot the appropriate workloads for software-defined resources, learn from business units and developers already using public cloud services, view the infrastructure from the customers’ and developers’ points of view, and evaluate converged systems that have some of these capabilities built in. From this foundation, a global management strategy can begin to take shape.
Those wishing to move toward an SDDC framework are advised to inventory the IT processes and tools required for data collection, storage, and capacity analysis/reporting. Knowing what you have, and what data is actually being collected, is the first step.
Follow this with an inventory of business KPIs and BI analytics initiatives to determine which metrics will be available. Then identify the “low-hanging fruit” by service, business unit, platform, technology, and silo.
Moving Beyond Traditional Capacity and Performance Management
Recent years have brought many changes: cloud computing, virtualization, and now the software-defined data center. These innovations have shifted the demands placed on IT.
For instance, SDDC now requires far better asset management along with smart devices, virtualized storage, virtualized servers, and a virtualized network. In a call center setting, IT has to keep track of point-of-sale, demand response, campaign management, credit and collection, and billing systems, among others. The software-defined data center must encompass the business services and applications that span all facets of these systems.
Unfortunately, we are not quite there yet. Application silos create a layer of blindness: one layer doesn’t know what is happening in another, and one application can’t communicate well with the next. The dynamic nature of these systems challenges today’s IT performance and capacity management tools.
What is required is an alignment of IT performance and capacity with business needs and processes that spans all the technologies involved. Enterprise performance and capacity management can accomplish this by establishing metrics across every layer (services, applications, servers, OS, network, storage) and aggregating them efficiently, correlating business-process and IT performance data to provide insight into how business process changes impact IT.
This can also help IT understand its own costs by business unit and process, and provide insight into business process performance all the way down to the hardware component level. The result is aligned business and IT intelligence.
In an SDDC, we need answers to questions like:
- What is our risk assessment?
- How efficiently are we using the infrastructure?
- How many of each resource do we have, and how many will we need by when?
- What were the exceptions, and do we care?
- What is the capacity forecast, and how do recent changes compare to the last reported changes?
In addition, we need to know the overall health of the infrastructure in relation to providing these services and data.
Traditional capacity management uses analytics in a limited way, monitoring trends and statistics. It can examine specific silos of information and handle problems within its planning focus, such as optimizing server cost. While valuable, this orientation is too technology-centric and is largely done in isolation.
For example, we have multiple tools to look at our x86 servers, additional tools to analyze mainframes, and separate tools again for virtual machines (VMs). Similarly, the storage and networking infrastructure has its own sets of physical and virtual monitoring and analysis tools.
While this kind of capacity management delivers major value, it is also a big effort. It requires highly trained staff with domain expertise and the building of a central, long-term repository (a CMIS or PMDB), and it runs into problems with the scalability of staff and tools, not to mention the politics often involved.
New Capacity Management Goals
A new kind of capacity management is needed, then. While built on that traditional value, it must optimize far more resources than ever, accelerate the value delivered to the business, increase business relevance, provide predictive analytics in a business and service context, and optimize the efficiency of the software-defined data center.
Those embarking on this journey toward the SDDC must understand what it is. Virtualization and the cloud in combination are essentially its ingredients. This means everything must scale, because the SDDC establishes many-to-many interrelationships that are all changing dynamically. Capacity management therefore becomes more critical than ever: capacity running out in one small element impacts everything downstream.
As a consequence, back-end performance is also more critical than ever. While attention will likely focus on front-end devices and their performance, the applications behind them comprise services that reach into every area of the infrastructure. One slow server in one small part of a workflow bottlenecks all related services.
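The downstream impact of one saturated component can be made concrete with a dependency graph. The sketch below, using an entirely hypothetical service-to-component mapping, walks each service’s dependency chain to find which services a single bottleneck reaches:

```python
from collections import deque

# Hypothetical dependency graph: each service or component maps to the
# components it depends on, directly or through other services.
depends_on = {
    "mobile_checkout": ["api_gw", "payment_svc"],
    "payment_svc": ["db01"],
    "api_gw": ["web01"],
    "reporting": ["db02"],
}

def impacted_services(saturated, graph):
    """Return every service whose workflow passes through the saturated node."""
    hit = set()
    for svc in graph:
        # Breadth-first search from the service through its dependency chain.
        queue, seen = deque([svc]), set()
        while queue:
            node = queue.popleft()
            if node in seen:
                continue
            seen.add(node)
            if node == saturated:
                hit.add(svc)
                break
            queue.extend(graph.get(node, []))  # leaf components have no entry
    return hit

print(impacted_services("db01", depends_on))  # services bottlenecked by db01
```

In this toy graph, one slow database drags down both the payment service and the mobile checkout front end that depends on it, which is the many-to-many ripple effect described above.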
The new era of capacity and performance management will deliver infrastructure and financial optimization on several fronts. It will detect the least-used resources (servers, virtual servers, storage, etc.), where the most power is being consumed (in usage and cost), and the areas of greatest expense, and it will do all of this by application, over time.
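A minimal sketch of that kind of rollup, with invented inventory figures purely for illustration, might rank servers by utilization and aggregate power and expense per application:

```python
from collections import defaultdict

# Hypothetical inventory: per-server utilization, power draw, and cost,
# each mapped to the application the server supports.
servers = [
    {"name": "srv01", "app": "crm",     "avg_util": 0.12, "kwh_month": 350, "monthly_cost": 900},
    {"name": "srv02", "app": "crm",     "avg_util": 0.64, "kwh_month": 420, "monthly_cost": 900},
    {"name": "srv03", "app": "billing", "avg_util": 0.08, "kwh_month": 300, "monthly_cost": 1100},
]

# Least-used servers first: these are the reclamation candidates.
underused = sorted(servers, key=lambda s: s["avg_util"])[:2]

# Power and expense rolled up by application.
by_app = defaultdict(lambda: {"kwh": 0, "cost": 0})
for s in servers:
    by_app[s["app"]]["kwh"] += s["kwh_month"]
    by_app[s["app"]]["cost"] += s["monthly_cost"]

print("reclamation candidates:", [s["name"] for s in underused])
print("per-application totals:", dict(by_app))
```

Run against real asset and monitoring data instead of these toy rows, the same two aggregations answer “what is least used?” and “where does the money and power go, by application?”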
But it’s important to realize that optimization can no longer be tied only to utilization. It is really about true service performance; the real target is workload performance.
Making this happen requires incorporating a wealth of data sources: the asset database; server CapEx, OpEx, and licensing costs; the service catalog; mappings of applications to servers and virtual servers; power consumption data (kWh per server and cost per kWh over time); resource utilization; events; and more. These must be known by server, virtual server, application, storage, network, and workload. This is an initial list of the types of data to consider bringing together.
Once this data is assembled, traditional approaches to BI won’t be enough. Data warehousing, for example, runs into challenges of scale and scope: a huge “data mart” (i.e., a PMDB) brings complexity, compliance, and vendor lock-in issues that become costly and time consuming. General-purpose BI analytics can be used instead, but it is not focused on IT resource optimization, performance, or capacity.
Introducing A “Big Data” Analytics Approach
The ideal solution could be called a Big Data analytics approach. It federates existing data into purpose-designed performance and capacity processes, taking data from a variety of places:
- Technology (e.g. server, network, storage, etc.)
- Service (catalog, metrics, tickets, etc.)
- Business (Business Analytics, KPIs, plans, transactions, etc.)
This data can then be harnessed to automate analytics across all sources. The advantages: it is flexible and adaptive enough for the dynamic nature of SDDC environments, it turns raw (commodity) data into actionable information, it addresses latency and throughput (not just utilization rates or availability), and it provides single-pane-of-glass management across the SDDC.
Take the case of a financial optimization report. With this modern type of capacity management, you can combine service catalog, asset database, power consumption, and performance utilization data to generate a dashboard of projected costs by application across all underlying server, storage, and networking components. These costs can also be projected over time to show forecasted resources and which ones will become too expensive, by when and for which application.
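To make the federation concrete, here is a deliberately simplified sketch: three hypothetical sources (service catalog, asset database, power data) are joined on server name, per-server costs are rolled up to the application level, and the total is projected forward under an assumed monthly growth rate. Every name and figure is an illustrative assumption.

```python
# Hypothetical federated sources, all keyed by server name.
service_catalog = {"web01": "storefront", "db01": "storefront", "batch01": "reporting"}
asset_db = {"web01": {"capex_month": 250},
            "db01": {"capex_month": 400},
            "batch01": {"capex_month": 300}}
power_kwh_month = {"web01": 300, "db01": 500, "batch01": 200}
cost_per_kwh = 0.12  # assumed utility rate

def projected_cost_by_app(months=12, growth=0.02):
    """Roll per-server CapEx plus power cost up to the application level,
    then project forward assuming total monthly cost grows by `growth`
    each month (a deliberately crude growth model)."""
    out = {}
    for server, app in service_catalog.items():
        base = asset_db[server]["capex_month"] + power_kwh_month[server] * cost_per_kwh
        total = sum(base * (1 + growth) ** m for m in range(months))
        out[app] = out.get(app, 0) + total
    return out

for app, cost in projected_cost_by_app().items():
    print(f"{app}: ${cost:,.0f} projected over 12 months")
```

The same join-then-aggregate-then-project shape, fed by real catalogs and meters rather than toy dictionaries, is what would drive the cost dashboard described above.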