Top 10 Trends and Their Impact on Infrastructure and Operations

By Drew Robb

Gartner analyst David Cappuccio spoke at TeamQuest’s ITSO Summit 2014. Here are some of the highlights.

He told the audience to pay attention to developments that most people are not yet aware of but that may have a big impact four or five years down the line. For example, in the past minute, 204 million emails were sent, 47,000 apps were downloaded, 135 new botnet infections appeared and YouTube had 1.3 million views. Five years ago, this was inconceivable. Today it is a given.

So what are the trends you need to watch?

1. Software-defined...

Software-defined is a means of abstracting the network and storage just as server virtualization abstracts the server. It transforms the network topology, gives programmatic control over the entire network, and provides an abstracted view for provisioning and managing the network connections and services that applications and operators require. This simplifies how the network is designed, operated and optimized.
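As a rough illustration of that abstraction, the sketch below (in Python, with hypothetical names such as Intent and NetworkController) shows an application declaring the connectivity it needs while a controller translates that intent into per-device configuration. It is a conceptual sketch, not any particular SDN product's API.

```python
# Conceptual sketch: applications declare connectivity intent, and a
# controller abstracts the physical topology behind a programmatic
# interface. All names here are hypothetical.

from dataclasses import dataclass

@dataclass
class Intent:
    app: str           # application requesting connectivity
    src_segment: str   # logical source segment
    dst_segment: str   # logical destination segment
    bandwidth_mbps: int

class NetworkController:
    """Hides the physical devices behind one programmatic control point."""

    def __init__(self, devices):
        self.devices = devices  # e.g. switch identifiers

    def apply(self, intent: Intent):
        # A real SDN controller would compute paths and push flow rules;
        # here we only show the abstraction boundary.
        for device in self.devices:
            print(f"{device}: allow {intent.src_segment} -> "
                  f"{intent.dst_segment} at {intent.bandwidth_mbps} Mbps "
                  f"for {intent.app}")

controller = NetworkController(devices=["leaf-1", "leaf-2", "spine-1"])
controller.apply(Intent("order-service", "web-tier", "db-tier", 500))
```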

Similarly, on the storage side, the software-defined data center provides the ability to separate and abstract storage elements, as well as to combine storage elements and capabilities into storage solutions and services. This opens up the potential to pool heterogeneous storage resources into virtual pools built around application requirements rather than the physical characteristics of the underlying storage.
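In the same hedged spirit, here is a minimal sketch of provisioning capacity from a virtual pool of heterogeneous devices based on an application's requirements rather than on the physical arrays themselves. The device names, tiers and policy fields are assumptions made for the example.

```python
# Illustrative pool of heterogeneous storage devices.
devices = [
    {"name": "array-ssd-01", "tier": "flash", "free_gb": 2000},
    {"name": "array-sas-02", "tier": "disk",  "free_gb": 8000},
    {"name": "jbod-03",      "tier": "disk",  "free_gb": 12000},
]

def provision(app, size_gb, min_tier):
    """Carve capacity from the virtual pool that satisfies the app's policy."""
    tiers = {"disk": 0, "flash": 1}
    # prefer the least capable tier that still meets the requirement
    candidates = sorted(devices, key=lambda d: tiers[d["tier"]])
    for dev in candidates:
        if tiers[dev["tier"]] >= tiers[min_tier] and dev["free_gb"] >= size_gb:
            dev["free_gb"] -= size_gb
            return f"{size_gb} GB for {app} carved from {dev['name']}"
    raise RuntimeError("no device in the pool meets the policy")

print(provision("analytics-db", size_gb=500, min_tier="flash"))
print(provision("file-share", size_gb=1000, min_tier="disk"))
```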

2. IT Continuity Services

IT continuity and disaster recovery (DR) concepts are being integrated directly into IT. The focus is on continuous availability at the application level where it is needed, as well as just-in-time availability for other apps with slower recovery time objectives (RTOs). This means not just one recovery site for all, but workloads residing wherever fits their RTO profile most appropriately. Further, those sites could be owned, colocated, hosted, in the cloud, or a combination.
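As a sketch of what placing workloads by RTO profile might look like, the short Python example below matches each workload to the cheapest recovery option that still meets its RTO. The tiers, costs and workloads are purely illustrative assumptions.

```python
# Illustrative recovery tiers: (name, worst-case recovery time in hours, cost)
recovery_tiers = [
    ("active-active cloud",   0,  "high"),
    ("warm standby at colo",  4,  "medium"),
    ("restore from backups", 24,  "low"),
]

# Required RTO in hours for each workload (made-up examples).
workloads = {
    "payments":        1,
    "intranet":        8,
    "batch-reporting": 48,
}

for workload, rto in workloads.items():
    # walk from cheapest to most expensive and stop at the first tier
    # whose recovery time fits within the workload's RTO
    for name, recovery_hours, cost in reversed(recovery_tiers):
        if recovery_hours <= rto:
            print(f"{workload}: {name} ({cost} cost, meets {rto}h RTO)")
            break
```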

Take the case of Superstorm Sandy a couple of years ago. Many companies had good centralized recovery plans and sites, yet they didn't take into account how to get fuel to those sites to keep their data centers running. What continuity really requires is multiple sites that together provide availability.

3. Integrated Systems

Integrated systems have evolved from several directions at once. Fabric environments provide easier workload scalability. Integrated systems may include products from multiple vendors, but they are supported by a single management platform. And purchase decisions move up the food chain, focusing more on solutions than on specific product types.

An integrated system is essentially an appliance, and because it is a more expensive purchase, the decisions tend to be made by bosses rather than by IT personnel. This shift has been brought about by the new world of virtualization, which has forced greater attention on speeding up deployment and using resources more efficiently.

4. Hyper-Connectivity

The overall trend is toward everything being interconnected: applications, sites, partners, providers, employees, customers, and any and all devices. Enterprises have to be able to cope with people who carry two cell phones, a tablet and a laptop, and who want to connect with all of them. In such an environment, intelligent apps have to work together so they know where the user is, the time of day, what is on the calendar and the broader context. This means integrating and syncing five or six apps.
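A hedged sketch of that kind of context integration follows: a small Python function pulls signals from a few stubbed-out sources (calendar, device presence, time of day) and combines them into one decision about how to reach a person. The source functions are stubs invented for the example, not real app APIs.

```python
# Toy example of combining context from several "apps" into one decision.
from datetime import datetime

def calendar_status(user):      # stub standing in for a calendar app
    return "in a meeting"

def last_seen_device(user):     # stub standing in for a presence service
    return "tablet"

def preferred_channel(user):
    """Combine several app signals into a single routing decision."""
    hour = datetime.now().hour
    busy = calendar_status(user) == "in a meeting"
    device = last_seen_device(user)

    if busy or hour < 8 or hour > 18:
        return "asynchronous message"
    return f"call on {device}" if device == "phone" else "instant message"

print(preferred_channel("drew"))
```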

5. Bimodal IT

The strategic goal of DevOps is to improve the business value of the work done within IT. It is being used by organizations to deliver constant iterations of web and cloud apps. But there is a "velocity mismatch" between a development group using agile methodologies and an operations team emphasizing control via best-practice frameworks such as ITIL. The bottom line is that you need both: an organization has to be able to accommodate both and figure out how to get them to work together, or at least determine which discipline is right for which apps and workloads.

To capture digital opportunities, for example, CIOs need speed and innovation. Conventional IT doesn't do well under such conditions. CIOs must develop a bimodal IT capability so both paradigms can be accommodated.

6. Internet of Things

A wide range of wireless technologies is maturing, offering a variety of trade-offs among range, power and bandwidth and allowing the development of hybrid networks. What is emerging is the ability to transmit vast amounts of data over very short ranges as part of the Internet of Things: a concept that describes how the Internet will expand as physical items such as consumer devices and physical assets are connected to it. It is facilitated by embedded sensors that detect and communicate changes, and these sensors are being embedded not just in mobile devices but in an increasing number of places and objects.
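To make the "detect and communicate changes" idea concrete, here is a minimal simulated sketch in Python: a sensor is sampled in a loop and a reading is only reported when it moves beyond a threshold, which is one way low-power devices conserve bandwidth and energy. The sensor, threshold and transport are all stand-ins invented for the example.

```python
# Simulated report-on-change sensor loop.
import json
import random
import time

THRESHOLD = 0.5   # report only when the change exceeds this (assumption)

def read_temperature():
    # stand-in for a real sensor driver
    return 21.0 + random.uniform(-1.0, 1.0)

def publish(payload):
    # stand-in for a real transport (e.g. a low-power radio or an HTTP post)
    print("sent:", json.dumps(payload))

last_reported = None
for _ in range(10):
    value = read_temperature()
    if last_reported is None or abs(value - last_reported) > THRESHOLD:
        publish({"sensor": "temp-01", "celsius": round(value, 2)})
        last_reported = value
    time.sleep(0.1)  # sampling interval shortened for the example
```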

7. Open Source Hardware

At a recent Open Compute Summit, Intel showed off a photonic rack built by Quanta, which separates components into their own server trays. When a new generation of CPUs becomes available, users can swap out the CPU tray rather than waiting for an entirely new server and motherboard design. It also means fewer cables, increased bandwidth, greater reach and far better power efficiency compared to today's copper-based interconnects. This could revolutionize hardware as we know it.

8. The Shrinking Data Center

As a higher proportion of instances are virtualized, the IT infrastructure is being reshaped. It is changing from a physical, hardwired infrastructure sitting in a large data center to logical, decoupled applications in a more distributed architecture, with IT becoming one logical system. We will still have data centers, but they will be smaller and will shed many functions to the cloud. Core apps will run internally, but many other apps will run externally.

9. Continuous Demand

With the increased awareness of the environmental impact data centers can have, there has been a flurry of activity around the need for a better data center efficiency metric. Most that have been proposed, including power usage effectiveness (PUE) and data center infrastructure efficiency (DCiE), attempt to map a direct relationship between total facility power delivered and IT equipment power available. But these metrics don’t provide criteria to show incremental improvements in efficiency. A better alternative might be to analyze the effective use of power by existing IT equipment, relative to its performance. Pushing IT resources toward higher effective performance per kilowatt can have a twofold effect of improving energy consumption (putting energy to work) and extending the life of existing assets through increased throughput.
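For concreteness, here is a small worked example of those metrics: PUE is total facility power divided by IT equipment power, DCiE is its reciprocal expressed as a percentage, and the performance-per-kilowatt figure divides an illustrative throughput number by IT power. The input values are made up for the example.

```python
# Worked example of data center efficiency metrics (illustrative inputs).
total_facility_kw = 1800.0       # power delivered to the whole facility
it_equipment_kw   = 1000.0       # power drawn by IT equipment
transactions_per_sec = 250_000   # illustrative workload throughput

pue  = total_facility_kw / it_equipment_kw          # PUE = facility / IT
dcie = (it_equipment_kw / total_facility_kw) * 100  # DCiE = IT / facility (%)
perf_per_kw = transactions_per_sec / it_equipment_kw

print(f"PUE:  {pue:.2f}")                   # 1.80
print(f"DCiE: {dcie:.1f}%")                 # 55.6%
print(f"Throughput per kW: {perf_per_kw:.0f} tx/s/kW")   # 250
```

Raising the throughput delivered per kilowatt of IT power is what captures the "incremental improvement" that PUE and DCiE alone do not show.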

10. Organizational Entrenchment and Disruption

Business users expect the same level of IT performance and support as they experience with consumer-based applications and services. Efforts to move toward customer-focused environments must include an evaluation and evolution of the primary business touchpoint: the IT service desk analyst. IT organizations, therefore, must invest in IT service desk analyst skills and attributes, and help increase IT's perceived value to the rest of the organization.

Simplify Performance and Capacity Management

IT infrastructure is much different today than it was ten years ago, and your IT team needs to be smart about optimizing that infrastructure and managing capacity.

With the right performance and capacity management tools, it's easy to get insight into your infrastructure, plan for future needs with accuracy, and prevent problems before they occur.