On-Demand Webinar

How to Effectively Use and Manage Container Workloads

Solaris, Windows, UNIX, Linux, AIX
Vityl supports container monitoring
November 21, 2019


Whether you’re already using containers or haven’t even considered it, containers are becoming a popular way for organizations to manage workloads more efficiently.

When a new technology like this breaks into the market, many organizations decide to adopt without a plan for managing performance and capacity. But just like with other workloads, it’s essential that your Docker and other container workloads are monitored and managed as a part of your hybrid IT environment. 

Watch the webinar to learn:

  • Why businesses are using containers
  • How container workloads are different from traditional workloads (and what that means for performance and capacity management)
  • Why adopting a service view is essential for getting the most out of container technology



Hello, good morning, or good afternoon, depending on which side of the Atlantic you are on, and welcome to this webinar. Today we're going to spend some time talking about how to effectively use and manage container workloads from a performance management and capacity perspective.


So we'll jump right into it. If you have questions throughout the webinar, you can submit them in the chat window of GoToWebinar, and I'll pick them up at the end of the session. I think we have time for some questions at the end.


That's my hope, at least. You may wonder what I look like; this is who I am. My name is Per Bauer, and I'm director of technical services at HelpSystems for the Vityl Capacity Management suite. I was with TeamQuest before HelpSystems for many years, and I do several of these webinars, so there's a good chance you've heard me talk about these kinds of matters before. So, today's session.


We're going to spend our time on these topics. We'll start by giving everyone a quick repeat or introduction to containers, to level set and make sure that everyone is on the same page. Then we'll discuss some of the changes this brings to how we do performance and capacity management: some of the considerations you'll have to look into before you move ahead and put containers in production. And at the end, of course, as I said, we'll do a wrap-up.


And if there are any questions submitted by then, we'll definitely try to answer those as well. There will be a couple of polls throughout the webinar, where I bring up the poll and kindly ask you to submit your answer. So, what are containers?


At the heart of it, it's basically a form of virtualization: operating-system-level virtualization. That means the hypervisor is not separated from the operating system like VMware ESX, for example, which is a standalone hypervisor that manages operating system instances. This is inside the operating system.


The operating system, in this case Linux, has a number of mechanisms, like cgroups and namespaces, that are used to provide this containerization concept. It's been around for a long time, but it hadn't been productized or used to the extent that it is in later years. It offers the same level of isolation, resource management, and portability as virtual machines, basically. When containers became a hot topic some four or five years ago, it was mostly centered around Docker.
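As a concrete illustration of the cgroups mechanism mentioned above, here is a minimal Python sketch of reading a container's memory limit from the cgroup v1 filesystem. The path layout assumes Docker's default cgroupfs hierarchy on a typical Linux host, and the helper names are our own:

```python
# Illustrative sketch: reading a container's memory limit from the cgroup v1
# filesystem on Linux. The path layout assumes Docker's default hierarchy.
import os
from typing import Optional

CGROUP_ROOT = "/sys/fs/cgroup"  # standard mount point on most distributions

def parse_memory_limit(text: str) -> Optional[int]:
    """Parse the contents of memory.limit_in_bytes; None means 'no limit'."""
    value = int(text.strip())
    # cgroup v1 reports "unlimited" as a huge page-aligned sentinel value
    return None if value >= 2**62 else value

def container_memory_limit(container_id: str) -> Optional[int]:
    path = os.path.join(CGROUP_ROOT, "memory", "docker", container_id,
                        "memory.limit_in_bytes")
    with open(path) as f:
        return parse_memory_limit(f.read())

# A 1 GiB limit as the kernel would report it:
print(parse_memory_limit("1073741824\n"))  # 1073741824
```

This is the kind of per-container instrumentation the kernel exposes for free once cgroups are in play.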


Docker is basically a container runtime engine and container image format, which was made very popular by the company Docker, and it became the de facto standard for how to run containers in recent years. There is a newer initiative called OCI, the Open Container Initiative, that competes with Docker, basically.


It's an open format for container images that is starting to get some momentum, and the future will have to show where it all ends up. But Docker nowadays is basically part of the infrastructure, part of the fabric; it's not really a product that determines how you're going to run or manage your containers.


The difference between a hypervisor-based setup and a Docker or container-based setup is, to a large extent, the resource allocation that is needed on the system. A virtual machine is basically everything from the guest OS up to the application, whereas a container is much more lean.


There is a container engine that is shared between all the different containers, and then each container only contains the binary, any libraries that would be needed, and the app on top of that. That means you can save resources in terms of how much you have to allocate, so you can save money by provisioning less infrastructure. There's less overhead for running the containers. But there's also license cost.
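The resource-saving argument can be made concrete with some back-of-the-envelope arithmetic. The per-guest-OS and per-container overhead figures below are illustrative assumptions, not measurements:

```python
# Back-of-the-envelope comparison of memory overhead, VMs vs. containers.
# The per-service and fixed overhead figures are illustrative assumptions.
def total_overhead_gb(services: int, per_service_gb: float, fixed_gb: float) -> float:
    """Memory needed beyond the applications themselves, for `services` instances."""
    return services * per_service_gb + fixed_gb

SERVICES = 20
# A full guest OS per VM (assumed ~1 GB each), no shared layer:
vm_overhead = total_overhead_gb(SERVICES, per_service_gb=1.0, fixed_gb=0.0)
# A thin per-container layer (~50 MB) plus one shared engine (~0.5 GB):
container_overhead = total_overhead_gb(SERVICES, per_service_gb=0.05, fixed_gb=0.5)

print(vm_overhead, container_overhead)  # 20.0 1.5
```

Even with generous assumptions for the container side, the shared-engine model is what drives the provisioning savings described above.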


Of course, the commercial alternatives around virtualization become quite a substantial part of your overall cost, whereas more or less all the products most associated with containers are community-based or open source, so there's no cost for that.


So why are people using containers, and why is there such hype around them? It makes it easier to package and deliver services or applications, the same way a VM does, but in a leaner format. You don't have all the overhead of your own operating system image, which provides the flexibility to run the services anywhere. You can package them in containers and distribute them much more easily. It also fits very well with the continuous integration and delivery process,


where you move towards smaller components of software, like microservices, and you release smaller pieces of your software more continuously. By that you can control the impact of those more frequent releases and work in a completely different way. That in itself is a different movement, but containers happened to show up at around the same time.


It was a very good combination with the CI/CD way of developing applications. It also speeds up development, because you can use predefined images. If you want standard components like nginx, Redis, or Fluentd pushed out to a large set of different nodes or hosts, it's fairly easy, because you can find these predefined images. You don't need to build them yourself; you can use those prefab components and push them out.


Those are a number of reasons; there are probably more, but these are the ones that are most prevalent, I guess. As I mentioned, containers happened to coincide with the whole movement towards cloud-native architecture. The adoption of public cloud, and private cloud for that part, means that a lot of applications are being refactored to become more cloud native, using microservices,


supporting scale-out, automation, etc. And that rhymes very well with containers: you can automate the management of them to a very large extent, and they provide all these small components, microservices, that scale out, etcetera. The timing was absolutely right when they arrived, and that's probably why they became so successful.


Since the early days of containers, the focus has sort of shifted from Docker, as we mentioned, as a standalone container image format that you run your application in, to orchestration and management software: automating the deployment, scaling, and operations of the containers across clusters and hosts in production.


So anyone who runs containers in a production-like environment is using one of those orchestration software packages, and most likely they are using Kubernetes. Kubernetes has become the de facto choice for orchestrating Docker containers. Kubernetes started as a Google project; it's a community-based open source project, so it's free, and it's very powerful. The learning curve is somewhat steep,


but once you get the hang of it, it's really powerful, and it can orchestrate really large implementations of containerized workloads. It uses what's often referred to as desired state management, which means that you basically describe how you want your application or your architecture to look: how many containers of different sorts, grouped into what they refer to as pods, do you need? How many replicas do you want for high availability or failover purposes, etc.? And then the cluster service takes care of this for you. It has a number of nodes where it places the software, and then it manages that for you, so you don't need to care too much about where the different containers show up. This also means that you lose some of the control.
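For readers who haven't seen desired state management in practice, a Kubernetes Deployment manifest looks roughly like this (the application name, image, and counts are purely illustrative): you declare the replica count and resource caps, and the cluster service keeps reality matching the declaration.

```yaml
# Illustrative Kubernetes Deployment: declare the desired state (3 replicas)
# and the cluster keeps it that way, rescheduling pods as nodes come and go.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend        # hypothetical application name
spec:
  replicas: 3               # desired number of identical pods
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:
      - name: nginx
        image: nginx:1.17   # a predefined public image
        resources:
          limits:
            cpu: "500m"     # resource caps feed straight into capacity planning
            memory: 256Mi
```

Note that nothing here says which node runs which pod; that placement is exactly the control you hand over to the orchestrator.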


Once you've told the software to do this for you, you leave it to it, and that means you let go of some of the control, and from a management perspective that makes things a bit harder sometimes.


Container frameworks or container services are available from all the public cloud providers as well: AWS, Azure, Google Cloud. They all have out-of-the-box cluster management platforms, either Kubernetes-based or native, that they deliver as a service. You can certainly migrate your container-based workloads over to the cloud without having to do any kind of reconfiguration or rewrite of your application.


Okay, so now it's time for the first poll. I'm going to see if I can.


Find it here and bring it up.


Just a second.


Okay, so now you should have a poll in your window asking: how would you describe your current container usage? Haven't started using it at all; evaluating it for future projects; currently using it in dev/test; and the last one, already using it in production environments.


So I'll give you a few minutes to respond to these before I show you the results. Please vote while the poll is still open.


Okay, I think most of you have responded now, so I should be able to see it soon.


So I think you can see it by now.


So, 40% haven't started using it at all. Surprisingly, 0%, none of you, are evaluating it for future projects; that must be a coincidence, that we happened to catch you at a time when you either haven't started or you've brought it into dev and test. 40% are in development and test, and then 20% are already using it in a production environment. This mirrors my perception of where the market is, too. The share of organizations that haven't started using it at all, 40%, is probably slightly higher than I would expect, but the other numbers are in line with my expectations.


If I had to guess, I would have said that maybe ten to twenty percent would be in the first category and the rest would be in the second. But you never know with these kinds of polls, and it's a relatively expected result, I guess. Okay, thank you for participating. We'll move on to the next section.


So we've talked about containers: what they are, what they mean, how they're implemented, etc. What, then, is the impact on capacity management? In many ways, it raises the bar for what you need to do in your capacity management efforts. There are a number of challenges it presents. The first one is the obvious one: observability. It's a new set of components, a new set of technology with new instrumentation, and you need to pick up that instrumentation, because capacity management relies on data.


You need to have that data in front of you. You need to scale up your efforts, because you're going to have more moving parts. You also need to be aware of the shorter lifespan of containers.


So the frequency of your data collection, and how long you actually save the data, will be impacted by that. And since you have more components and it's more fragmented, you need to aggregate things up to the level where it makes sense, or where it can be associated with everything else you're tracking.


The poll made me unshare my screen. So I hope you can see it now.


So this is basically the first slide after the poll, so you haven't missed anything; you heard my voice. We talked about aggregating to simplify: you need to bring things up to a level where it makes sense again, and you need to provide context to that fragmented picture. And then there are changes to your operating model, most likely, due to the adoption of containers, that you need to be aware of and take care of. So let's move into those. What do we mean by observability?


You need instrumentation; you need data about how your containers are working. Containers, as we all understand, compete for host resources. They are running on a host, so there's nothing new there. You need to understand what type of resources (CPU, memory, disk, etc.) are being used by a specific container, a group of containers, or a type of container. That instrumentation exists either in the operating system,


that is, the kernel of the operating system, because underneath the container there is actually a process running on the operating system that needs to be monitored and that consumes resources. And then you also get data from the container engine through its API. Whether it's Docker or another container engine, there is an API that provides metrics on how the different containers are consuming resources, what the potential overhead of the engine itself is, and so on.
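As an example of the kind of data that engine API exposes: the Docker Engine stats endpoint returns cumulative CPU counters for the current and previous sample, from which a utilization percentage can be derived. The field names below follow the Docker Engine API payload; the sample numbers are invented for illustration.

```python
# Deriving a CPU-utilization percentage from the Docker stats API payload
# (GET /containers/<id>/stats). Counters are cumulative nanoseconds.
def cpu_percent(stats: dict) -> float:
    cpu = stats["cpu_stats"]
    pre = stats["precpu_stats"]
    # container CPU time consumed between the two samples
    cpu_delta = cpu["cpu_usage"]["total_usage"] - pre["cpu_usage"]["total_usage"]
    # total host CPU time elapsed between the two samples
    sys_delta = cpu["system_cpu_usage"] - pre["system_cpu_usage"]
    if sys_delta <= 0:
        return 0.0
    return (cpu_delta / sys_delta) * cpu.get("online_cpus", 1) * 100.0

sample = {
    "cpu_stats": {"cpu_usage": {"total_usage": 400_000_000},
                  "system_cpu_usage": 8_000_000_000, "online_cpus": 4},
    "precpu_stats": {"cpu_usage": {"total_usage": 200_000_000},
                     "system_cpu_usage": 4_000_000_000},
}
print(cpu_percent(sample))  # 20.0: 5% of total host time, across 4 CPUs
```

A monitoring collector polls this endpoint per container and stores the derived rate rather than the raw counters.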


So you need to understand how much is used, what resources are being used, and by whom, and you need to save this data for long-term analysis. Most likely, though, at the container level it's more about real-time or near-real-time observation: finding out what the trend is, what the cycles are, what the patterns of behavior are, and so on.


What this means depends on your platform. If it's a physical host where you're running this, which is mostly the case, I would say, though it's not the only scenario, you have to understand: how much are the containers using, what is the container engine seeing, what are we seeing from the operating system, and what are we using in terms of the physical host? All of that we provide in Vityl Capacity Management; we have data collectors that provide data from those different levels. In a virtual environment there are a couple of extra levels or extra tiers: there's the operating system, and then there's the virtual machine and the hypervisor before you get down to the physical host. Same there, we provide all those metrics.


If you run this in a public cloud environment, in an infrastructure-as-a-service type of setup, you obviously won't see the hypervisor and the physical host; that's hidden from you by the cloud provider. But you need to look at everything down to the cloud instance, and cloud instances are, in fact, virtual machines running on a physical host somewhere in the cloud. So it's not enough to just focus on the top level, the containers, and look at how individual containers are behaving. That may be interesting, of course, but from a root cause analysis or a true capacity and performance management perspective, you need this observability of the full stack.


Then you need to do cluster monitoring, because, as we said before, containers very quickly transformed into a deployment scenario where you have multiple hosts, or nodes, running in a cluster, and you distribute your containers across those nodes using some sort of orchestration software. So you need to monitor the whole cluster, rather than the individual hosts, to bring the pieces together.


This applies regardless of which orchestration framework you're using. We talked about Kubernetes before; there's also OpenShift from Red Hat, or IBM nowadays, which has got a lot of attention and popularity. It's based on Kubernetes as well, but it adds some extra features and capabilities around the CI/CD concept. Same with Cloud Foundry from Pivotal: similar to OpenShift, it provides a more development-friendly layer on top of


the orchestration framework. All of those are managing multiple different hosts, multiple different nodes that are running containers. So you need to get data from all of those, and you need to understand the configuration metadata: where is the master, which are the nodes that I need to look at and monitor, and then, within those nodes, the different types of objects. They are not using raw containers, simply single containers; they are putting them together in pods. That comes from Kubernetes. In OpenShift there's also a lot of talk about projects and services, so you compose pods together into projects and services, and in order to monitor your services or your applications you need to be aware of that composition, that mapping of container to pod to project, etc., in order to make sense of it.


So this is a crucial part of understanding an application and how it's behaving in a containerized environment. What we provide here: we have our native data collectors for on-prem usage, so we pick up data from the hypervisor, VMware, if you're running containers on VMs in VMware. We do the OS-level monitoring through our Linux and Windows data collectors. You have the container engine, Docker, that we monitor, so we provide data from the Docker API, and we also have out-of-the-box integrations with Kubernetes and OpenShift to get the orchestration data and to put the pieces together that way.


If you run this in public cloud, we have cloud-native monitors for AWS, for Azure, and for Google Cloud. There is a small asterisk on Stackdriver for Google Cloud: right now we have a field-developed integration with Google Cloud, and in the second quarter of next year we will have an out-of-the-box integration with Stackdriver that will be fully supported as part of the official product. So we provide access to the metrics that the cloud providers themselves expose. You can also, of course, put our data collectors inside the cloud instances as well, if you want all the details or more granular data. If it's a Linux instance, there's nothing keeping us from picking up the Docker engine API data there too. So you can certainly put our data collectors there as well, but you have to evaluate what your needs are and what level of granularity you're looking for. There are also metrics and instrumentation available for the different


container management frameworks that are provided as services through those providers. CloudWatch, for example: if you're using AWS with either the native or the Kubernetes-based container orchestration, you will get some metrics in CloudWatch that you can use for this. And then, of course, as always, we provide support for third-party data, and that could come from anywhere: the public cloud, on-prem, or any combination of the two.


So we have pretty good coverage of the instrumentation that would be required to manage these types of workloads.


The next management consideration I would like to mention is scale. There's more of everything: there are more objects to monitor, the requirements in terms of real time are higher, and there are more relationships to track, because as you have more objects, there will of course be more relationships between them. And you need to aggregate data up to the level that makes sense more frequently as well.


So you need a scalable framework for data retrieval and analysis, and this is how we do it in Vityl Capacity Management. This is the architecture. We're using components from Apache that are highly scalable. We can use our own lightweight data collectors, and we can also use third-party data sources, as we've seen. We have a broker that supports real-time data producers.


We have a framework that allows us to cluster and scale out to support the rates and volumes that will most likely be the result of your containerization. And then there's an API that provides access to all this data, so if you want to consume this data in tools other than our purpose-built user interfaces, you can do that too.


Having this framework that can cope with these new volumes of data is very important when you want to manage containerized workloads.


The next consideration is the lifespan of a container. The whole idea behind containers, or the way containers have been positioned and used, is primarily scale-out, so containers are more transient in their nature. You provision new containers, or start new container images, as you need them, and then you retract them if the demand goes away. They're not going to live forever; they're not always going to be there. They're going to live and die very quickly.


And if you then use these automated management frameworks we talked about, like Kubernetes and OpenShift, that will exacerbate this even further, because they're going to do this automatically, and they're going to be much quicker than us humans at recognizing these things and provisioning or retracting containers as needed. So you need to be there to see them: you can't sample data once every 5 minutes, because there's a high probability that something happened within those five minutes that will go completely unnoticed if you do it that way.
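The sampling problem can be put in simple terms: if a container's start time is random relative to the sampling grid, a container that lives t seconds is caught by an every-T-seconds sampler with probability min(t/T, 1). A small sketch, with illustrative intervals:

```python
# How sampling interval interacts with container lifespan: a container that
# lives `lifespan_s` seconds is observed at least once by a sampler that runs
# every `interval_s` seconds with probability min(lifespan / interval, 1),
# assuming its start time is random relative to the sampling grid.
def observation_probability(lifespan_s: float, interval_s: float) -> float:
    return min(lifespan_s / interval_s, 1.0)

# A 30-second autoscaled container vs. a classic 5-minute collection cycle:
print(observation_probability(30, 300))   # 0.1 -> ~90% of such containers are never seen
# The same container with 10-second sampling:
print(observation_probability(30, 10))    # 1.0 -> always captured at least once
```

This is why container monitoring pushes collection frequencies far below the traditional 5-minute interval.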


Containers in themselves and their management frameworks normally provide very good observability. There's a ton of data that you can extract, but it is geared more towards near real time.


So it's more for the performance management, root cause analysis, or troubleshooting type of use case; it only covers a very short period in time, because the lifespan of a container is quite short. If you want to do long-term optimization, or you want to understand how the provisioning of new containers, or the number of containers, varies in relation to your business transaction volumes, you need to store this data somewhere. Otherwise you're not going to get the long trend that covers your whole business cycles.


So there is absolutely a need to do that. This is an illustration of it. This is what you get out of the box; this happens to be Kubernetes, but it could be one of the other frameworks as well. There's a native capability to store the data: there's an API for the cluster service, and there's an API to get data from the different worker nodes as well. And this is an in-memory, short-term, circular data store.


So if you query for what happened in the last five minutes, it's certainly going to satisfy that need, but it's not meant for long-term use. It may hold a couple of hours of data, and it's not deterministic: because it's in memory, as soon as you run out of memory on the cluster service, it's probably not going to save the data for very long.


So in order to do this, you need some sort of aggregated long-term storage where you tap off this data and put it somewhere, so that you can see the long-term trend, the seasonality, and histogram data over all of it. For that you can use our own data store, but there's also a very popular alternative out in the market called Prometheus, which does this for both Kubernetes and OpenShift, where we have a default integration as well. The good thing with Prometheus is that while it's not included in OpenShift and Kubernetes, they are pre-prepared for it, so it's very simple to download Prometheus. It's an open source product, and there are predefined, pre-configured integrations with those frameworks.


So then it will start to populate the Prometheus database, and we can use that to report on performance and capacity as well, alongside all the other metrics that we get from the operating system, the hypervisor, and elsewhere. So there are definitely ways to do this, but you need to think about it, because otherwise you're only going to get this momentary view of what things look like right now, basically.
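For the curious, pulling a long-term trend back out of Prometheus is a single HTTP call to its `/api/v1/query_range` endpoint. The sketch below only builds the request URL; the server address is hypothetical, and `container_cpu_usage_seconds_total` is the cAdvisor metric typically scraped in a Kubernetes setup.

```python
# Sketch of querying a long-term per-pod CPU trend from Prometheus via its
# HTTP API (/api/v1/query_range). Server URL and pod name pattern are assumed.
from urllib.parse import urlencode

PROM_URL = "http://prometheus.example.local:9090"  # hypothetical server

def cpu_trend_request(pod_regex: str, start: int, end: int, step: str = "5m") -> str:
    """Build the URL for a per-pod CPU rate over [start, end] (unix seconds)."""
    query = (f'sum(rate(container_cpu_usage_seconds_total'
             f'{{pod=~"{pod_regex}"}}[5m])) by (pod)')
    params = urlencode({"query": query, "start": start, "end": end, "step": step})
    return f"{PROM_URL}/api/v1/query_range?{params}"

url = cpu_trend_request("web-.*", 1574290800, 1574377200)
print(url.split("?")[0])  # the endpoint the request would hit
```

Issuing the resulting URL with any HTTP client returns a JSON matrix of timestamped samples, which is exactly the shape of data a capacity tool can ingest for trending.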


Aggregation is another important factor. If you look at how you forecast capacity needs, one component of it is organic growth: looking at the long-term history of your infrastructure, component, service, or application and spotting any organic growth that needs to be accommodated.


These days you also normally have a lot of activity going into this kind of environment, because applications are being refactored and moved into containers, and then you have new initiatives going on, new applications being developed. There is normally a project pipeline with new initiatives and new applications that are going to be launched, etc. All of those need to be put together in order to understand what your future need will be.


This information comes from multiple different workloads, with different types of container images, in pods or projects, etc. From those management frameworks you will get this kind of information out, so you can see how much CPU and memory each type of container used, and you can potentially summarize it by type of image and by pod. But in order to do a compound aggregation of everything that runs with one type of container image, for modeling and trending purposes, you need to aggregate this data.


So again, aggregating this data up into something like a Prometheus database will allow you to do this. Otherwise, the native capabilities of just running Kubernetes will not provide you with this.
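The roll-up itself is straightforward once the raw samples are in one place. A minimal sketch, with invented data, aggregating per-container samples up to per-image totals for trending:

```python
# Aggregating fragmented per-container samples up to per-image totals,
# the kind of roll-up needed before trending or modeling. Data is illustrative.
from collections import defaultdict

samples = [  # (container_id, image, cpu_millicores, memory_mb)
    ("c1", "nginx", 120, 80),
    ("c2", "nginx", 150, 95),
    ("c3", "redis", 300, 512),
    ("c4", "nginx", 130, 90),
]

def aggregate_by_image(rows):
    totals = defaultdict(lambda: {"containers": 0, "cpu_m": 0, "mem_mb": 0})
    for _cid, image, cpu, mem in rows:
        t = totals[image]
        t["containers"] += 1
        t["cpu_m"] += cpu
        t["mem_mb"] += mem
    return dict(totals)

print(aggregate_by_image(samples))
# nginx: 3 containers, 400 millicores, 265 MB; redis: 1 container, 300 millicores, 512 MB
```

The same grouping logic applies one level up, from pods to projects or services, once the orchestrator's metadata tells you which containers belong together.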


Another consideration is the ability to bring context to things. This has been a challenge ever since we started doing capacity management: translating component demand to service demand to business demand, because business demand is what we're going to get as a forecast from the business, and that's what we really plan for.


But in order to break that down to components, you need to understand which components are used by which services, and which services underpin which business demands, to make this work.


To get this: back in the days when we had plain virtualization, we had a cluster of hosts that were running VMs. It was hard enough then. You had to do some sort of automated discovery, and then you put that in a CMDB or service catalog, and you had a combination of non-virtualized and virtualized workloads; that sort of mapping in itself was very difficult. If you look at this from a container perspective, it's going to get even worse. There are four to six times more containers than VMs on average. This may vary, of course, but individually this spins out of control very easily, because you have a cluster service where you basically just drop your applications and define how many nodes you want to run on, how many copies you want, what kind of replication you want, etc. Apart from that, the whole framework takes care of itself.


So it's going to be really difficult to understand individual components: what are they actually consuming, and how does that translate into service and business demand? At some point you have to give up; you can't connect the whole chain all the way from the bottom to the top anymore. Another way of illustrating this: if we look at complexity, physical systems used to be our pets, and we took care of them. When we moved over to virtualization, they became cattle; we didn't really care about individual VMs as much anymore, and we treated them in a different way. We moved them around to optimize their placement, etc. Containers are more like a flock of birds: a unit of work where you almost can't distinguish the different components inside of it. It gets out of hand very quickly, and that's exactly why we have these orchestration frameworks.

They do a good job at taking care of this, but from a capacity management perspective it puts a new perspective on things. Basically, we need to learn to settle for good enough rather than trying to keep track of everything. Maintaining correct, up-to-date metadata becomes a challenge: there's the scale-out aspect, the number of shared components, the lack of affinity between an application and a host, etc.


So you need to sacrifice some of the details in favor of the wider scope, and aggregate data into meaningful components. Let go of some of the minute details and aggregate things up to a meaningful level. That's very important.


Otherwise, you're going to spend all your time trying to connect the dots rather than actually focusing on the results, which are the forecasts, the risk mitigation, and the efficiency aspects of capacity management. So, some key capabilities for doing this. From a performance management perspective, you obviously need to do unit testing of key components to understand the general resource consumption of a typical type of container or container workload. You need to provide real-time and historical performance data, aggregated by container image and component, for diagnostics and troubleshooting. But that's as far as you need to go there. From a capacity planning perspective,


it's more about correlating the aggregated component data with business activity metrics. So understand: if our business activity goes up,


how much does that mean? How many new containers are automatically provisioned by my orchestration software to keep up with it, and how quickly are they retracted as demand goes down? You need to understand that dynamic between the two. You also need to automate the forecast based on expected demand and apply it to that observed behavior.


So once you have an understanding of how your orchestration software handles business activity variability, you need to use that to model. Then you can do detailed planning using workload-agnostic capacity modeling techniques, like the capacity planning in Vityl Capacity Management, for example, to model those individual components and get a better understanding of how they work together and how they are using resources. But a full-blown simulation of a whole containerized workload becomes very, very complex and is almost impossible to accomplish.


And then, on top of this, you need to provide all of that as a self-service,


because another thing that we need to take into consideration when we work with containerized workloads is this change of operating model. Long ago, most organizations were operating in more of a siloed approach, where you had teams around different technology silos, you had siloed teams around applications, and then you probably had a project management office of some sort that was trying to coordinate those different groups and realize those projects, and the applications that were the results of those projects. That shifted to more of a plan, build, run approach.


That approach came with ITIL v3 when it was introduced, and it got a lot of traction over the following ten years; that's how most organizations have been running things for the last ten years or so. We have specialized teams focusing on plan, build, and run, rather than dividing them into technology and applications. This allows you to do independent sourcing, etcetera.


So it allows you to outsource, or put things in the public cloud, etc. It's a better way of organizing yourself. What Gartner and others are forecasting, or predicting will happen in the future, is that with digital business there will be even more involvement from the business in how things are operated. So you need to have a more business-embedded operating model. You know, we talked about this before.


This CI/CD movement of continuous integration and continuous delivery forces you to push out some of the responsibility: what used to be done by specialists now needs to be pushed out to the product teams and to people out in the business, or closer to the business. So you need a more business-embedded approach to things, and this means that you need to provide your tools as self-service components.


So it can't just be a highly specialized tool that a few selected people in the organization understand and can use; it needs to be made available to a much wider audience. And that means that you need to focus more on prescriptive advice, because those people are not going to be specialists; they need specific advice about what to do. It needs to provide more automation.


And of course it needs to provide these self-service capabilities that we talked about. That is not necessarily tied to containers as a technology, but it happens to coincide with that whole movement, and those expectations, if they're not already there, will be prevalent relatively soon. You know, it's important to start working in this direction as well for capacity management.


So what is the incremental value of using analytics? This is the sort of decision-making model: you have data, you analyze that data, you make a decision, and then you take an action on that. If you pull that apart, there are a number of different options. Basically, with analytics, the way we do it, you can provide more support for each of those steps.


So you can take your data, avoid the human input, and move all the way up to the decision, or even the action, much faster. Taking a descriptive approach requires a lot of human input and a lot of human experience in order to take you through to a decision and then an action. As you move up the stack, prescriptive is basically telling people what they should do, based on a certain problem that has been presented.


So the problem has been defined, or identified, or forecasted, and based on what we know about it, this is what you should do. Taking it all the way up to the action, that's really what you need to strive for in this kind of business-embedded capacity management environment.
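A toy illustration of that difference: a descriptive tool would just report the forecasted utilization, while a prescriptive one turns it into a recommendation. This sketch assumes a deliberately naive model where load spreads evenly across identical replicas; the threshold and sizing logic are made up for illustration:

```python
def prescriptive_advice(forecast_util_pct, threshold_pct=80):
    """Turn a forecasted utilization (descriptive) into a recommendation
    (prescriptive), under the naive assumption that adding a replica
    divides the load evenly."""
    if forecast_util_pct <= threshold_pct:
        return "no action needed"
    # add replicas until projected per-replica utilization drops below threshold
    extra = 0
    util = forecast_util_pct
    while util > threshold_pct:
        extra += 1
        util = forecast_util_pct / (1 + extra)
    return f"add {extra} replica(s) (projected utilisation {util:.0f}%)"

print(prescriptive_advice(95))
print(prescriptive_advice(170))
```

A non-specialist can act on "add 2 replicas" directly, which is the point of pushing analytics up the stack toward action.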


The required skill is higher, of course, so you need to automate it, because you can't expect that skill everywhere. But the business value is also higher: if you can push this out to a wider set of people, people who are more aligned with what the business actually needs, that is going to do better for you. You can have more people doing that kind of qualified analysis. And of course, this requires you to have some level of operational maturity. We have a maturity concept that we use for this, where we take customers through different steps of maturity, in order to get to the point where you actually have all the different sources and all the different information


available, and all the different processes, to do predictive and prescriptive analytics in your organization.


Okay, so we're going to have another poll here.


Hopefully I do better this time and manage to get back into the presentation afterwards. So this poll is around:


how are you managing, or planning to manage, your containerized environment? The first option is containers managed using native controls, so basically using Docker and operating system scripts, etc.


The second one is Kubernetes, the third one is OpenShift, and the fourth one is the classical "other", which probably entails things like Cloud Foundry, Pivotal, etc. It could also be container management workloads available in public clouds. Okay, so I'll give you a few seconds more.


Okay, I'm going to close the poll here.


I don't think we'll get any more responses now; it looks like it's stable.


Okay, so I'll share this one.


Okay, so I hope you can see the results now.


So 40% say that containers are managed using native controls.


By native controls I meant, as I said, any type of script or operating system feature that comes with Docker or with the Linux operating system. A third of you said Kubernetes, and another third of you said OpenShift.


That goes to show how popular OpenShift has become. And then a third of you said other, whatever that includes. Okay, so this is sort of the expected result, I would say. Note that OpenShift is built on Kubernetes, and a fair amount of the others are probably also built on Kubernetes, or use Kubernetes under the hood. So Kubernetes is definitely a major component in any type of container management approach, I guess. Okay.


I'm going to hide this, and I hope you can see my screen now. Yes, it looks like you can.




So, to summarize, since we're running out of time, a few key points. Container technology is more than just another layer in the software stack. It's not just another technology that you're picking up; it has a big impact on how you develop your applications, how you run your applications, and how you manage the risk and efficiency of your applications.


It raises the bar in terms of speed, scale, and complexity. There are more components, there are quicker components, components that live for a shorter time, and those components still need to come together in order to make sense. You know, no one in the business is going to tell you that they expect container type A to grow this much next year.


They're going to tell you about your business applications and business services, and then it's your job to break that down into types of containers, or pods, or projects, etc., and understand what the impact of that demand will be on those containers. Containers are not going to sweep the floor and completely remove the need for other technologies; they're going to be part of your hybrid IT management. So you will still have physical systems, or virtualized environments, or public


cloud-based environments, etcetera. So it's another piece of the pie, the whole pie of different types of technology that we need to manage, and that's why it's important, as far as possible, to use the same management framework and the same tools to do this.


So have a single pane of glass where you can see your container workloads side by side with the rest of your IT stack. Most likely, your applications cover multiple different parts of this: the full stretch of an application may go from physical hosts over to VMs, or containers, or public cloud instances, etcetera.


So it's important to bring all that together. This whole thing happens to coincide with other trends in the industry, where the adoption of digital transformation, or whatever you like to call it, means that you have to push out some of the responsibility for the things you're doing to people who are closer to the business, in the different product lines, or the different owners of the different applications. They are expecting to be able to do more on their own. So providing some sort of self-service around capacity management becomes a key aspect as well.


This is something that we definitely see a lot with our customers, and if you're doing this transformation, this overhaul of your solution stack based on containers, you should take this into consideration as well, to make sure that you provide for that need.


Okay, so with that.


Thank you for your attention. So I still believe we have a few minutes here.


So I'm going to see if I can find any questions in the chat window. If you just give me a second here. I'll pull up the chat window.


Okay. I have a couple of ones here.


So, yeah: are you using native APIs to monitor OpenShift? Okay. Nope.


We are not. There is an API in OpenShift, of course, that you can use to pull out some instrumentation and metrics, but we actually use, as I mentioned, Prometheus, which is an open source monitoring project for, among others, OpenShift, and also OpenStack, Kubernetes, and a lot of other things. We found it very useful; it's very powerful and very simple to integrate with OpenShift. OpenShift comes with a predefined integration with Prometheus, and we then use the Prometheus data. So we don't use the native APIs; we actually use Prometheus to get to the data.
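For context, Prometheus serves query results over an HTTP API in a standard JSON shape. A minimal sketch of flattening one instant-query response into per-pod values might look like the following; the pod names, timestamp, and values are made up for illustration:

```python
import json

# A trimmed example of the JSON shape Prometheus's /api/v1/query endpoint
# returns for a per-pod CPU query such as:
#   sum by (pod) (rate(container_cpu_usage_seconds_total[5m]))
sample_response = json.loads("""
{
  "status": "success",
  "data": {
    "resultType": "vector",
    "result": [
      {"metric": {"pod": "web-7d9f"}, "value": [1574332800, "0.42"]},
      {"metric": {"pod": "worker-1a2b"}, "value": [1574332800, "1.87"]}
    ]
  }
}
""")

def cpu_by_pod(response):
    """Flatten a Prometheus instant-query vector into {pod: cpu_cores}."""
    return {
        r["metric"]["pod"]: float(r["value"][1])  # value is [timestamp, string]
        for r in response["data"]["result"]
    }

print(cpu_by_pod(sample_response))
```

This is not how Vityl's integration is implemented; it just shows why Prometheus is simple to build on: one uniform API shape regardless of what is being monitored.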


And then there's one more question: do you use sidecar containers in your solution? No, we don't. A sidecar is basically when you put your monitoring software in a container by itself, and then you use that to instrument the nodes in the cluster, for example, or to monitor other containers. No, the way we do it, we have the native


data collection from the operating system and the metrics that provides to us, together with the API of the container engine, and then we use the different orchestration or management frameworks to get the data. We don't use sidecars at all, actually, in our solution.


Those were the two questions. I had, I believe.


Yeah, there is one other one. Sorry, I found it now.


What does Vityl Capacity Management provide that I won't get from the metering provided by the vendors, so Docker, Kubernetes, etc.? Okay. Yeah. So, what do we provide that you won't get from those? Obviously you get a single pane of glass: if you have one of those sources, or a couple of those sources, you still need to bring it together; you need to be able to correlate


that data, etc. None of that is simple; none of that happens by itself.


There's a lot of work that goes into bringing that data together, synchronizing it, and allowing you to correlate across those different sources. So basically, what we do is we have this full-stack monitoring approach, where we monitor everything from the operating system, the hypervisor or hypervisor operating system, the container engine, and the container orchestration software, all the way up, whether it's on-prem or in the cloud. I guess that's the big difference.


So it's one single source of information where we can integrate with all of those, and then of course, on top of that, we have our advanced analytical capabilities, so we can do predictions and forecasts based on what-if scenarios, etcetera, that you won't be able to do with those built-in tools.


So that's how I would position us compared to the sort of built-in metering that comes from the vendors.


Okay, that's all we had, then; I guess there are no more questions. So with that, I would like to thank you for your time. And if you have any other questions, or you want to discuss anything else in detail with me or anyone else at HelpSystems, I encourage you to reach out to us, of course, as usual. I hope I'll see you back in one of these webinars in the not-too-distant future. Thank you very much.

Manage containers with Vityl Capacity Management

Whether your organization has already begun using containers or is just looking for a capacity management tool that can manage all of hybrid IT, Vityl Capacity Management can help. Start a free 30-day trial today.

Stay up to date on what matters.