Managing Disparate Systems Beyond the Cloud: How to Monitor Power Systems in a Virtualized World

On-Demand Webinar

Windows, UNIX, Linux, AIX, Mac OS X


Have you consolidated disparate operating systems onto one or more Power servers? Do you store some or all of your data in the cloud? As your enterprise grows and implements a cloud and virtualization strategy, it is critical that your infrastructure provide seamless workload scheduling and management to support new, dynamic workloads across disparate systems.

Watch this webinar to hear T.R. Bosworth of IBM and Pat Cameron of HelpSystems discuss the best ways to manage workloads across Power platforms—on site or in the cloud. They share the current challenges and trends in managing workflows running in virtualized and cloud environments on IBM Power Systems. You’ll also learn:

  • How to effectively manage multiple platforms running on Power Systems

  • How to consolidate and centralize data on disparate systems: in the cloud, on IBM i, AIX, Linux, and beyond

  • How cross-platform automation ensures greater control of disparate environments


Hello, everyone, and welcome to today's presentation. I'm Tami Deedrik, the Managing Editor of IBM Systems Magazine, Power Systems edition, and I'm the moderator for this event. Today's webinar is sponsored by Skybot, a division of HelpSystems, and is titled “Managing Disparate Systems Beyond the Cloud: How to Monitor Power Systems in a Virtualized World.” Our featured speakers today are T.R. Bosworth and Pat Cameron. T.R. Bosworth is the worldwide Offering Manager for IBM Power Systems Virtualization and Systems Management Solutions. T.R. joined IBM Global Services in 2007 as a result of IBM's acquisition of Softek Storage Solutions. He has more than 30 years of enterprise technical experience with systems management and server and storage virtualization solutions for a variety of companies, including Fujitsu and Amdahl. Pat Cameron is the Director of Automation Technology for Skybot Software and a 15-year veteran of HelpSystems, its parent company. Her background in IT spans more than 25 years and includes implementation planning, operations, and management. At Skybot Software, Pat oversees customer relationships, gives technical product demonstrations, and fields enhancement requests for development.

Today, T.R. and Pat will discuss the best ways to manage workloads across Power platforms, on site or in the cloud. Following the presentation, we will have a brief Q&A period, so please feel free to enter your questions in the question panel on your screen at any time during the presentation. Now, without further ado, thanks again for joining us today. T.R., I'll turn the presentation over to you.

Why Businesses Need Cloud Technology

Thank you, Tami. So here's our agenda for today. We're going to talk about the business need for cloud technology, a bit about how the cloud is leading to greater efficiency, and why people are adopting cloud deployments. We're going to talk about some of the characteristics of effective cloud deployments. We're also going to talk about why it's important to think about consolidating your job schedules into a centralized job management solution. And we're going to chat a bit about some additional tools in the space of forecasting, auditing, and notification. So I want to stop for a minute now and do a short survey question. We basically want to get a picture of what platforms you're running on Power Systems now. Which of the OSes are you running, whether it's AIX, IBM i, or Linux? So if you would fill out the survey, that would help. You can also see the results when it's done.

Pat: So T.R., when you're working with your customers, do you see that they're running a combination of all of these or?

T.R.: Yeah, typically the Power servers do consolidate quite a bit of resources, either in their traditional virtualization or in a cloud deployment. So you would typically see a mixture of the various operating systems running on Power Systems. Absolutely.

Pat: Interesting.

T.R.: Sometimes people are very focused on one particular category or one particular operating system, like AIX, or IBM i, or Linux, but typically there's a mixture of operating systems running on your Power Systems virtualization. So it looks like our survey is back. Looks like the majority is AIX, but there's quite a bit of IBM i as well, and some Linux. Our go-forward strategies are very much Linux-focused, and I would expect that to accelerate in the near future.

Pat: Cool, thanks everyone.

What's Driving Cloud Deployments?

Thank you. So thanks for the feedback on the survey question. Let's get started. I want to give some basic reasons, some market facts, about what's going on in the industry and what's really driving cloud deployments. Obviously, everybody knows about the mobile revolution; typically, people are using smartphones and tablets versus what they used to do with PCs. A lot of the facts on this slide come from some of the IBM CEO studies that we've done. From these studies, there are roughly a trillion devices out there now, about a billion smartphones. The CIOs that we have surveyed really viewed cloud as critical to dealing with a lot of these changes in the IT industry. Obviously, everyone is familiar with Facebook, LinkedIn, Twitter, and the various social media explosions. All of this mobile and social activity is driving huge amounts of data growth, and it's also driving a lot of interest in doing analytics to mine that data and understand it. Roughly 69% of IT cost is server management and administration, so if you can drive down the cost of your IT in that area with cloud deployment, it's really attractive, from a cost reduction perspective, for enterprises to deploy clouds and adopt different cloud models.

And really, a lot of the overall infrastructure now with consumers relies on social networking, and this whole thrust plays well into the economics of the cloud. So this gets back to the overall idea of cloud technology: we're really improving the overall economics of IT because we're able to deliver new services and products faster in the cloud. Some of the characteristics of the cloud are repeatable deployments. Clouds typically have some sort of self-service portal, which lets you control the requesting of resources and speed up provisioning as well. You typically have a set of saved images that can be quickly deployed in response to those requests, to push out a web server and a database, as an example, very quickly. Customers are really taking advantage of, initially, private clouds, which are clouds built inside their own IT infrastructure, and also public clouds; there are plenty of public cloud offerings out there in the market. So what typically happens is people have a private cloud, and they also have some workloads running in a public cloud. That's managed as a hybrid, which is a combination of the two. Many of our managed service providers are using IBM infrastructure to build out their private and public clouds. We've helped over 10,000 clients deploy trusted cloud solutions that are built on our products and open standards as well.

The Effects of Cloud Adoption

So this slide gives you a picture of how cloud deployment really affects the people using it. From a purchasing perspective, early adopters of cloud are seeing basically 2.5x higher gross profit than their peers, and almost 2x revenue growth compared to people who aren't adopting the cloud model. In addition, 50% of enterprises will have full-blown hybrid clouds by 2017. A lot of this research has been done internally at IBM and by other analyst groups through surveys and conversations with our customers about what they're doing with cloud technology. And as you see there, 50% of new servers being purchased are for cloud deployments. So people are really embracing the cloud because they get big cost savings and better profit margins, and a lot of the early adopters are prioritizing open source cloud solutions as their platforms of choice. So this is an overall picture of what people are spending in the cloud and how it's affecting their bottom line.

So at this point, I'd like to ask another survey question. It's basically: is your data center in the cloud? There are four choices here. One is, "No, we're not doing anything with the cloud." One choice is public, one is private, and one is hybrid cloud computing. So if you could answer that question, it would be helpful, and it might be interesting for you as well. I'll take a minute and let you answer that.

Pat: It's a pretty interesting statistic that you have there, T.R. About 50% by 2017, that's coming pretty fast.

T.R.: Yes, it is. So people typically do have a private cloud inside their infrastructure now, and they're looking at utilizing public cloud offerings and having that hybrid management between a private and a public cloud. So it's very interesting that people are taking some of their work and farming it out. Some of it they keep internally, but they want to continue to manage it from a central source.

Pat: Exactly. It's a big change for IT, so I'm sure a step at a time.

Traditional IT Spending

Absolutely. So are we getting close to where the poll will be closing? Very good. Interesting. So half the people, half the respondents, answered "no cloud": their data center is not in the cloud. And then 36% have private, and 12% have hybrid. So that's very interesting. Okay, thanks for your feedback. Let's go to the next slide. We talked about what initiatives are driving cloud and why it's interesting as far as profit and overall revenue. This slide covers some information that we've gathered, both from IBM research and from IDC research, about where the traditional data center tends to spend its money. Right now, 70% of the IT budget is devoted to operations and maintenance: taking care of your systems and making sure everything's running. With the cloud data center, more than 50% of IT resources are allocated toward new projects. They're using the cloud to streamline the operations and maintenance side of things and working on deploying new workloads and new projects in the cloud. As you can see, 35% of IT staff time is spent on new projects in the traditional data center, versus 50% in the cloud data center. So maintaining older systems is de-emphasized, and the cloud data center is picking up new workloads, spending its time developing new business rather than maintaining older things.

If you look at the value delivered, this is really important, because it gives you a way to accelerate your change. Typically, your change management, test provisioning, and installation of various workloads could take months. With the cloud, you're going to a model where it's more like days or hours. It's the same idea with test provisioning: you can easily take something out of your inventory catalog and push out a new workload in minutes. This acceleration gives the business more flexibility and allows it to meet its objectives much faster because of the agility that a cloud deployment produces. You can see installation of the operating system, typically one day, is very quick with the cloud: between 30 and 60 minutes. Provisioning equipment is a real cost saving in the cloud because it's pretty much pre-provisioned and ready to go. And overall, the design and deployment of business apps is much quicker in the cloud. So based on these considerations, the responsiveness of the cloud really allows the business to act on its needs much more quickly.

Characteristics of an Effective Cloud

So here are some characteristics of an effective cloud. Obviously, clouds are built on virtualized infrastructure. Resources need to be provisioned as demand occurs. You need elasticity, so that you can scale the cloud up or down based on the workload. You need the ability to scale resources and also to pool resources. These are typical characteristics of an effective cloud, and Power Systems has those capabilities, so Power Systems is a very efficient and effective platform to build your cloud on. It's very dynamic. It's open: we have the OpenPOWER Foundation, which is completely open from the firmware all the way to the processor. It's very cost effective; you can put a lot of density, a lot of VMs, within a Power Systems cloud. It's very scalable, either scaling up with a set of servers or scaling out with a set of servers. We have an extreme amount of reliability built into the overall server design, including memory, CPU, and our I/O characteristics. And it's very secure, so it's a perfect choice for building out a secure cloud.

So we talked a little bit about the economics, the market, why people want to use clouds, a little bit about the deployments, and also a bit about Power Systems and why it makes a good cloud. Here are the various characteristics of cloud workloads. Potentially, you're going to have more systems and VMs to manage. You're going to have shorter VM life cycles, because you can push out a VM very quickly; maybe it stays around for a little while, and then it goes away. You potentially have different hosting locations: if you have a private cloud, it's probably all inside your infrastructure; if you have a hybrid cloud that uses public resources, it's somewhere else. And there's just overall faster deployment. So how will you schedule your workloads in this new environment, where you have both your traditional systems and your cloud-based systems? Obviously, you could use cron. You could use operating system job schedulers. But maybe there's another way. HelpSystems is one of our good IBM partners, and they have some solutions. At this point, I'm going to pass it over to Pat Cameron. She's going to talk to you a bit about HelpSystems and Skybot, the Skybot Scheduler. Pat.
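To make the cron option above concrete: cron fires jobs purely on a time expression of five fields, with no notion of cross-system dependencies. The sketch below is a minimal, hypothetical matcher for that five-field format (it is not Skybot code, and it only handles `*`, `*/n`, and comma lists, not ranges).

```python
# Minimal sketch of what cron-style, time-only scheduling amounts to.
# Function names and supported syntax are illustrative assumptions.

def field_matches(field: str, value: int) -> bool:
    """Match one crontab field ('*', '5', '1,15', '*/10') against a value."""
    if field == "*":
        return True
    if field.startswith("*/"):
        return value % int(field[2:]) == 0
    return value in {int(part) for part in field.split(",")}

def cron_due(expr: str, minute: int, hour: int, dom: int, month: int, dow: int) -> bool:
    """Return True if a five-field cron expression fires at the given time."""
    fields = expr.split()
    return all(field_matches(f, v)
               for f, v in zip(fields, (minute, hour, dom, month, dow)))

# '30 2 * * 1' = 02:30 every Monday
print(cron_due("30 2 * * 1", minute=30, hour=2, dom=10, month=6, dow=1))   # True
print(cron_due("*/15 * * * *", minute=45, hour=9, dom=1, month=1, dow=3))  # True
```

Notice what is missing: nothing in this model can say "run only after a job on another VM finished", which is exactly the gap an enterprise scheduler fills.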

Introducing HelpSystems

Great. Well, thanks, T.R., for that great information about moving to the cloud. I love some of your statistics. As T.R. said, we've been a partner with IBM, as you can see, for over 20 years. HelpSystems as a business started back in the System/38 days, if any of you remember back that far. Unfortunately, I do. And then we moved to the AS/400, and now to Power Systems, no matter what operating system you're running. So you can see we have a good relationship with IBM, and have had for many years, and we're part of their early release program so that we can make sure all our products are tested to run compatibly with IBM. A little bit about HelpSystems. Like I said, HelpSystems has been around for about 30 years, and we have a number of products in our portfolio. We concentrate on systems and network management. Any of you in the IBM i world—I see we had a few of those on our survey—might be familiar with Robot. We may have some customers out there. I'm sure that you've seen a lot of these logos and literature throughout the years. So we have a group of products for systems management and network management, business intelligence, and also security and compliance. We've been adding to our portfolio just about every month lately. We've really been growing.

The Benefits of Enterprise Job Scheduling

I think we might have another poll coming up right now. Great. So, are you using an enterprise scheduler now? As T.R. mentioned, if and when you do go into the cloud, private or hybrid or public, you'll probably have multiple VMs to manage across multiple operating systems. So do you have an enterprise scheduler that sits on top of that now? He mentioned cron and built-in schedulers like the IBM i job schedule entries (WRKJOBSCDE). I'm just wondering how many of you out there have an enterprise scheduler now that you can bring to the cloud with you, if and when you move over that way. Like I said, we have a number of products that do scheduling and enterprise scheduling, depending on your environment. We're going to be talking about Skybot Scheduler today. Skybot is compatible with any of the operating systems that run on Power. I've got a couple of slides that will introduce you to Skybot, and then I'm going to go online and show you Skybot live. So I can see it's about 40/60: we've got some people that are using an enterprise scheduler and some that are not yet. So maybe you can pick up some good ideas from today's information. Thank you for your feedback.

Why would you want an enterprise scheduler? Whether your data center is in the cloud, on traditional systems, or a hybrid as mentioned, these are a few of the reasons you might need an enterprise scheduler to automate those business processes. Automation means fewer errors, faster run times, and documentation for audit purposes. In my previous life, I was an operations manager, and one of the things we missed was transparency into our schedules. It's difficult to look across multiple partitions, VMs, or different systems and see what's running, or what might be dependent on a job, a file, or some type of resource on another system. Those dependencies can include all different types of events, such as job completions, file transfers, or job failures. Maybe I've got some type of error recovery I want to run. So how do you handle those types of dependencies now? One of the statistics that T.R. brought up was that in a traditional environment, 70% of your IT budget goes to operations and maintenance, and some of that could be writing scripts and programs to manage those dependencies. So one of the important reasons for having an enterprise scheduler is to free up your staff to work on new projects, instead of working on day-to-day tasks and building your own scheduler.

Pain-Points of Manual Scheduling

So, do any of these look familiar? Have you ever missed an SLA? That can be very painful. How do you manage all those disparate systems? If you've got cron running across multiple VMs, how do you manage those systems and those schedules? It doesn't really matter whether they're in the cloud or on premises: how much time do you spend managing those processes across platforms? SLAs need notification. You shouldn't have to sit there monitoring and watching those jobs run. There should be a way to have the computer monitor them for you and let you know when there's an exception. HelpSystems has always talked about managing systems by exception: you shouldn't have to watch them, but if you do get some kind of error or delay, the computer lets you know right away so that you've got time to fix it, and so that you're not doing re-runs at month-end because jobs ran out of order when some prerequisite wasn't available.

Sometimes your audits can get painful: looking for the documentation that you need, and that exception reporting. So when you do look at an enterprise scheduler, whether it's Skybot or any other scheduler out there, make sure the documentation is there for you, so that you've got some kind of central repository and it's very easy to hand that audit information directly to your auditors, instead of having to go gather it up. Another thing to keep in mind is the functional requirements you have for an enterprise scheduler. Before you buy one, before you look at one, determine your requirements first. Along with your business and budget requirements, you need to determine your functional requirements. These are such things as hardware requirements: am I going to be able to run this scheduler in a cloud? Security requirements matter for MSPs: if I'm running a multi-tenant type of environment, how am I going to separate those jobs by client? You want to make sure the scheduler you purchase can support all of those needs. Then, what types of schedules are there? Time-based schedules, just about anybody can do that. But what if you've got dependencies? What if you've got exceptions? Maybe you run some of your jobs on fiscal periods instead of calendar months. Become familiar with those types of things; somebody in your operations area knows all of that information, so make sure you can get it all in one place. And then, what are your audit requirements? What do you need to report on? What types of exceptions? Again, make sure that you can get that information quickly and easily from your enterprise scheduler.

Looking at Skybot Scheduler

So we're going to take a look at Skybot Scheduler. I'll talk a little bit about what it can do, and then we'll go online so that I can show you. T.R. talked about how moving from individual, on-premises systems to the cloud is going to be a money saver because of the management and operations savings. In addition, an enterprise scheduler is going to help drive down costs even more, because you won't have the day-to-day operations management that's required when you're running a production environment. Enterprise scheduling gives you control over your schedules and your systems across the enterprise, instead of just at the individual VM level. Skybot Scheduler can do some monitoring as well, and notify you of events such as failures, errors, delays in the schedule, or prerequisites that weren't met. Again, we don't want you to have to go hunting for those problems; we'll notify you immediately if there's some type of error. One of the things we've added to our scheduling capabilities is a built-in file transfer function. We've noticed in the more open systems environment that there's a lot of file movement going on: files coming in from clients or vendors, files going out to customers, and so on. We wanted to make that easy for you too, so that you don't have to do a lot of scripting or write a lot of programs to manage those files. I'll show you an example of that, too.

Role-Based Security in Skybot

We include role-based security in Skybot. You can set up different groups of people that have, say, view-only access to the schedule in general. For those MSPs that have multiple clients, you can exclude groups of people from any of the objects within Skybot and give other groups access to change or just view them. Our security is very granular. We'll interface with an LDAP or Active Directory server as well, so that you don't have to manage users specifically for Skybot: you can create a group over in your LDAP server, and we'll map that group over to Skybot for you. And then there's auditing and reporting. Again, it's difficult to report across a bunch of disparate systems. Skybot allows you to pull all of that together and report on things such as audits, any changes, job history, and all of your exceptions. Forecasting: what's going to run next weekend? What's going to run over the holidays, if there's a holiday coming up? Skybot lets you pull all that information into one place.

Skybot Scheduler's Architecture

So, a little bit about the architecture of Skybot. Skybot uses a hub-and-spoke type of architecture. We have a host server, and Skybot can be hosted on either AIX or Linux. Once you install the software (it's about a ten-minute install), we install an HTTP server so that you can access it through a browser. The entire user interface is browser-based, so there's no installation of any kind of client on your workstation; you can access it from any browser. And we use a PostgreSQL database as our back end. Those three pieces are bundled together and installed on the host server, and then you install an agent on each of the VMs or servers. Again, it doesn't matter if they're in the cloud or on premises, as long as we've got a port available from the agent back to that central server. We just use IP for that communication, and the communication is secured: everything is encrypted with TLS as it goes across the line. So the central server knows the status of anything that's running on any of those agents. As soon as a process finishes over on an AIX system, for example, the agent sends its status back to the central server, which can then trigger the next job in the process. As an example, maybe I've got a suite of jobs that runs first thing in the morning. As soon as that suite completes, I want to run an ETL process, and then, when that's finished, do a file transfer over to my data warehouse.
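The hub-and-spoke flow just described (agent reports completion, hub triggers successors) can be sketched as a tiny event-driven dependency map. This is an illustrative toy, not Skybot's implementation; the class, method, and job names are all assumptions.

```python
# Toy sketch of hub-and-spoke event chaining: when an agent reports a job
# complete, the central server launches whatever depends on it.

from collections import defaultdict

class CentralServer:
    def __init__(self):
        self.successors = defaultdict(list)  # completed job -> dependent jobs
        self.started = []                    # record of jobs the hub launched

    def add_dependency(self, job: str, runs_after: str):
        """Declare that `job` should run after `runs_after` completes."""
        self.successors[runs_after].append(job)

    def agent_reports_complete(self, job: str):
        """Called when an agent tells the hub a job finished successfully."""
        for nxt in self.successors[job]:
            self.started.append(nxt)
            # a real server would dispatch nxt to its agent here

hub = CentralServer()
hub.add_dependency("etl_process", runs_after="morning_suite")
hub.add_dependency("warehouse_transfer", runs_after="etl_process")

hub.agent_reports_complete("morning_suite")
print(hub.started)  # ['etl_process']
```

The point of the sketch is that the dependency logic lives in one place (the hub), regardless of which VM each job actually runs on.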

Any of these jobs can be running on any of the servers or VMs within the Skybot network, which makes it very easy and simple to set up those dependent jobs. We keep our products pretty generic: we can run anything that has a command line interface, call any program on an IBM i, or run any kind of script, executable, or Web services request as well. But we do have a few interfaces into some of the more popular ERP systems that are out there. So we like to look at Skybot as the hub of your operations. It can manage any of those jobs in any of those applications, whether they're in the cloud or, again, locally on premises. We can interface with those and make your operations run very smoothly. We also have an easy-to-use interface; we wanted to make it easy. Skybot was introduced in about 2009, so it's been out for about five or six years, but we look at it as just the latest generation of our schedulers. Our first scheduler was released in 1982, so it's been a while. Skybot was all written in-house by our developers here; we're located in Minnesota, outside of Minneapolis. It was new code, and we didn't bring anything over from our legacy systems. We wanted it to be light and easy, so we kept that in mind while we were developing it, and as we're enhancing it.

Job Scheduling From a Central Console

But the most important thing I think it gives our customers is that central console. It gives me a central view of the jobs and tasks that are running across all my servers or VMs. It allows you to create jobs across multiple servers, and history is kept across all your servers. We can create flowcharts, which make great documentation for where your dependencies are; I've got one I'll show you in just a second. We've included SMTP, so you can send out email or text messages on any kind of delay or error. We also include SNMP, so we can send a trap if there's a problem. A lot of companies use that SNMP trap to automatically open a ticket, so we'll interface with your help desk ticketing software, either with email or SNMP. We monitor the jobs for failures, and make sure that they complete normally. We look for exit codes and make sure we have a successful completion before we go on to the next step. So we'll monitor all of those different statuses. We can also set up monitors for an overrun. Maybe you've got a job that tends to get into a loop, and I want to know it's a problem if it runs over ten minutes, and stop it. We can actually cancel the job, and/or we can just notify you. We've got a number of monitors that keep an eye on your systems for you. And then we can also monitor for different events that occur on your systems. Probably the most popular one is the file add: as soon as I see a file come into my FTP server, I want to trigger a process to handle that file.
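The exit-code checking described above (only move to the next step after a successful completion) is the same gating any scheduler does under the hood. Here is a hedged sketch of that idea using Python's `subprocess`; the command list and return shape are made up for illustration, not taken from Skybot.

```python
# Sketch of exit-code gating: run each step as a command and only proceed
# to the next when the previous one exits 0. Commands are placeholders.

import subprocess
import sys

def run_chain(commands):
    """Run argv lists in order; stop and report the first failure."""
    for cmd in commands:
        result = subprocess.run(cmd)
        if result.returncode != 0:
            return ("failed", cmd, result.returncode)  # notify/trap here
    return ("ok", None, 0)

chain = [
    [sys.executable, "-c", "print('step 1')"],
    [sys.executable, "-c", "print('step 2')"],
]
status, failed_cmd, code = run_chain(chain)
print(status)  # ok
```

A nonzero exit from any step stops the chain, which is where a product like Skybot would fire its email or SNMP notification instead of silently continuing.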

We can do this at the file or directory level; we've got a lot of different monitors for that. We can also monitor processes starting or ending. So if you've got a database that needs to be up and running, some type of never-ending job sitting in the background, we can monitor those processes too. If one of them ends, we can notify you, and we can also trigger a job that tries to restart that process. So like I said, we want to keep an eye on your systems for their exceptions and take care of them for you. And then I always like to point out the security features that we have. We do have role-based security, as I said, so that access can be limited to specific agents, specific jobs, or other options. Again, our MSPs use that feature to limit what their clients can see within the Skybot network. The authority can be whatever it needs to be: change, execute only, view only, or excluded from some of the more sensitive jobs running on your system. That role can be based on whatever your needs are. Like I said, we interface with LDAP or Active Directory so you don't have to maintain a bunch of Skybot users.
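The "file add" event mentioned above boils down to noticing new files in a watched directory and firing a job in response. The sketch below shows one simple polling approach; the class name and polling design are assumptions for illustration (a real product may use OS-level file notifications instead).

```python
# Illustrative file-add monitor: detect files that appear in a watched
# directory, the way a scheduler's "file added" event would trigger a job.

import os

class FileAddMonitor:
    def __init__(self, directory: str):
        self.directory = directory
        self.seen = set(os.listdir(directory))  # baseline snapshot

    def poll(self):
        """Return files that appeared since the last poll."""
        current = set(os.listdir(self.directory))
        new_files = sorted(current - self.seen)
        self.seen = current
        return new_files

# usage: call monitor.poll() on a timer; each returned file name would
# trigger the processing job for that file
```

Polling like this is easy to reason about but adds latency equal to the poll interval; event-driven watches trade that latency for platform-specific plumbing.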

Navigating Skybot's Interface

So I'm going to go online here quickly and show you Skybot. I've just brought up my desktop, and you should be able to see my dashboard. Hang on, I'll bring up my chat window in case I get a message from anyone. So this is the dashboard that we have for Skybot. This is the interface; as you can see, it's browser-based. You would give your users the URL to log in, and they do have to have a valid login in order to sign on to Skybot. Everything is managed through these drop-down menu options that we have here. This is where jobs are set up, and you can create individual jobs or suites; I'll show you some examples. We also have a command line interface, and a Web services interface as well. So if you do want to do some batch updating or trigger some Skybot jobs from another application, we make it easy for you to do that in a number of ways. We have a number of objects you can create once and use multiple times. Calendars are important: holiday calendars, fiscal calendars. You define your FTP servers here, and Web service definitions. So here's where you define different objects that might be used by multiple jobs. And then, down in this section, we've got our third-party integrations. We have a number of them, and we're adding more all the time; that's where some of our development is going. One of the first things I want to show you, for those of you using AIX or Linux and scheduling jobs with cron, is that we have a quick way to take those jobs and bring them all into Skybot.

Scheduling Cron Jobs in Skybot

So this is a crontab that I've got over on one of my AIX servers. When you install the agent, we ship a shell script that you can run. What it does is take a copy of that crontab file and copy it over to the Skybot server. Now, we don't change it; we don't do anything with that cron file. Those jobs are going to continue to run until you delete them from the file or do whatever you want to do with them. But we can import them into Skybot, and in doing that, we can bring them in on hold. So again, you don't have to flip a switch and all of a sudden start running your jobs in Skybot. You can bring them into the system and then, depending on your project and its timing, decide when to release them within Skybot. I'm going to give them a prefix, and then I'll show you what happens. I can just click and select all of these. These are all the entries in that crontab file, with their schedules. I just run a bunch of echo commands, but this would be where your shell scripts are.
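The import step just described essentially reads each crontab line and turns it into one scheduler job (brought in on hold). A rough sketch of that parsing idea, under the standard five-fields-plus-command crontab layout; Skybot's actual import script may differ:

```python
# Rough sketch of crontab import: split each entry into its five schedule
# fields plus the command, producing one held job record per line.

def parse_crontab(text: str):
    jobs = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        fields = line.split(None, 5)  # minute hour dom month dow command
        schedule, command = " ".join(fields[:5]), fields[5]
        jobs.append({"schedule": schedule, "command": command, "held": True})
    return jobs

crontab = """\
# nightly backup
0 2 * * * /usr/local/bin/backup.sh
*/10 * * * * echo heartbeat
"""
for job in parse_crontab(crontab):
    print(job["schedule"], "->", job["command"])
```

The `"held": True` flag mirrors the on-hold import described above: nothing runs until you release it, so the original cron jobs keep running untouched in the meantime.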

Monitoring Jobs with Skybot

Then, the environment type: we do need to log into that server when we run these scripts. So bear with me just a second. So you'll see a list of what you want to import. I'm just going to click import here. Yeah, I'm really sure. And so what Skybot does is it then goes out and reads that file, and it creates individual Skybot jobs for each of those entries. So here I've got my jobs created. I'm going to go ahead and click here, and that's going to show me. Now, I've got a bunch of individual Skybot jobs. I've just named them one through nine. You can see the next time they're scheduled to run over here. We'll bring in that schedule. But these guys are all on hold, so they're not going to run. So I'll just show you, right-clicking always gives you a drop-down list of options. You can run a job, hold it for a certain amount of time, look at the history, etc. But I'll just show you quickly. So I would schedule that job; this is the agent that it's running on, one of my AIX servers. It's using the standard calendar, but I can edit that if I've got a particular calendar I wanted to use. And then, we just use the cron expression as a schedule type. You guys are familiar with that, so you can just leave it that way. Or you can use any of the other scheduling options we've got with Skybot. I won't get into that now.

Then, these are the commands. So we put our descriptor here. This is the entry that was in that crontab file. So this is just a descriptor. We can go ahead and delete that, and then we're just going to run this echo command. So it's very easy to move those jobs. We'll capture standard out, create a job log out of those, save it with the job, and archive that for you. Just to show you a couple of other things. Now that you've got these jobs in Skybot, I'll just show you our monitors. So here we can add some of our features to this job now. Here's our overrun, so you can either put a maximum duration, or, if you've got an SLA, you can make sure that it's finished by a certain time. And then, this is our notification. You can send a trap and/or I can send an email to whoever needs to know. Same thing for an under-run. If your backup only runs a minute and it normally runs thirty, that's a bad thing. Even if you don't get an error, the backup might not have completed.
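The overrun, under-run, and SLA checks just described boil down to comparing a run's duration and finish time against thresholds. Here's a minimal sketch, with illustrative parameter names rather than Skybot's actual monitor fields:

```python
from datetime import datetime, timedelta

def check_duration_monitors(started, ended, max_minutes=None, min_minutes=None,
                            must_finish_by=None):
    """Return monitor alerts for a finished job run.

    Mirrors the overrun / under-run / SLA ideas from the demo; the
    threshold names here are assumptions made for illustration.
    """
    alerts = []
    duration = ended - started
    if max_minutes is not None and duration > timedelta(minutes=max_minutes):
        alerts.append("overrun")      # ran longer than expected
    if min_minutes is not None and duration < timedelta(minutes=min_minutes):
        alerts.append("underrun")     # e.g. a 30-minute backup finishing in 1
    if must_finish_by is not None and ended > must_finish_by:
        alerts.append("sla-missed")   # finished after the SLA deadline
    return alerts
```

Any non-empty result would feed the notification step (SNMP trap and/or email) described above.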

Then again, a late start. So this job is scheduled at three o'clock. It may not run if it has a prerequisite and that prerequisite is not met. So you want to know about that. So you just want to set those monitors up so that you know ahead of time and you don't miss any of those SLAs. The other notifications that we have here are statuses. So these are all the statuses that a job gets when it runs in Skybot. Submitted to a queue, maybe skipped based on a condition that you've set. Here's a normal completion and a failed. For any of these, I can notify a different group of people, and then we can attach that job log along with it. So again, we want you to find out about those errors before they grow to be big problems. So these are pretty simple time-based cron jobs, but let's take a look at a job flow that might be a little more complicated.
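The status notifications can be pictured as a table mapping each job status to its recipients and whether the job log gets attached. A small sketch, with made-up addresses and a simplified status list:

```python
# Map each job status to who should hear about it and whether the job
# log is attached -- a sketch of the notification rules from the demo
# (recipient addresses and the status names are illustrative).
NOTIFICATIONS = {
    "completed": {"recipients": ["ops@example.com"], "attach_log": False},
    "failed":    {"recipients": ["ops@example.com", "oncall@example.com"],
                  "attach_log": True},
    "skipped":   {"recipients": ["scheduler-admin@example.com"],
                  "attach_log": False},
}

def notify(job_name, status):
    """Return the notification that would be sent for a status change."""
    rule = NOTIFICATIONS.get(status)
    if rule is None:
        return None  # no one asked to hear about this status
    return {
        "to": rule["recipients"],
        "subject": f"Job {job_name} ended with status: {status}",
        "attach_log": rule["attach_log"],
    }
```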

Running Jobs on Different Servers

So what I've got here is a flow chart that shows a number of jobs that are running on different servers. Some of them might be in a private cloud that we've got. Some of them might be out in the public cloud. Some of them might be just stand-alone servers. So it doesn't really matter where any of these VMs or servers are located. We can very easily set up those prerequisites across all of these servers. So here I've got an IBM i job. This is dependent on a file, actually, that comes into the IFS on that system. I'll just show you how we can set up those file dependencies. So this is the name of my IBM i. And again, if somebody needs to know when that file comes in, we can let them know. So this guy just sits there and wakes up every five seconds to see if there's a new file in this directory path on the IFS on that IBM i. We can also monitor for files to be removed, or changed, or a size threshold. Again, we can do that at the directory level. But this is probably the most common. Now, if your files have some type of end-of-file marker, we can scan for that marker, and then we know the download's complete if you're getting a download from another server. Mine don't. So what I can do is just tell Skybot, "When you see this new file, don't do anything until it hasn't changed in, let's say, three seconds." Then I know that that download is complete. The whole file is there and it's ready to go.

I also can use a wild card in here. This file has a date appended to it every day, so the file name changes, but I can use that. So we just set that file monitor up. And then the job that I want to react to it is this IBM i job. The way that we set those prerequisites up: again, go into the job; you always have the same list. You can either work from a list or you can work from this flow chart. I just build a list of prerequisites. So this one happens to be that IFS file. If that file arrives at any time, I want this job to run. I can add other prerequisites as well. We've got a number of objects that you can use as prerequisites, other suites or suite members. I would just pick from a list of jobs that I've got and set that up. Again, I can react to any status. I can group these together. So I can say, "I need both of these events to occur, or one or the other." So all of these jobs are across multiple servers. As soon as this finishes, then it goes and runs a job over on my AIX server.
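That grouping of prerequisites, "both of these events, or one or the other", can be modeled as AND across groups and OR within a group. A sketch under that assumption (the event names are made up):

```python
def prerequisites_met(groups, events):
    """Evaluate grouped prerequisites against the events seen so far.

    `groups` is a list of prerequisite groups: every group must be
    satisfied (AND across groups), and a group is satisfied when any
    one of its members has occurred (OR within a group). This is one
    plausible reading of the grouping described in the demo.
    """
    return all(any(member in events for member in group) for group in groups)

# Example: the job needs the IFS file AND either job A or job B to finish.
groups = [{"ifs-file-arrived"}, {"job-a-completed", "job-b-completed"}]
```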

Manipulating Your Jobs

I'm going to run a job that creates that file over here for me. So I'll just show you in history. So you can manipulate your jobs as well. So if you want to run a job outside of its schedule, you certainly can do that. If I didn't want to run any of the downstream jobs, I could flag it to not perform reactivity. I'm going to go ahead and run it. We'll go back in a minute and look at job history to see those run. So if I click on my job history, you can see I ran them earlier today. But now, I can see that this job that creates the file is running. It's already finished, so it's triggered the next job. We're going to keep history for you. If somebody does run a job, we're going to let you know who did it. So here's the command that it ran, and I'm the one that did it. So we've got good history on that, and again, good information for your auditors. Now the CRM order job is running over on my AIX server. So this is my IBM i. I ran the jobs on that. Now it's triggering the next job on my AIX server. It ran a couple of jobs there, and now it's kicking off the jobs on my Power Linux.

So again, it doesn't matter where those jobs are running; you can just trigger them one after another. It'll be two more minutes here until we go to questions, but I wanted to show you a couple of things. So we have the agents that are registered to Skybot. As soon as you install the software, it will link over to the Skybot server, so it knows where the agents are. We have an option here for agent groups. So if you have, for example, a cluster of servers for disaster recovery, what you can do is create a group. Now you can schedule the job that you want to run on this group. It will always try to run on the first one in the group. But if it's down (you can see this one is down, so right now it's failed), then my job will run on the one that's available.
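That agent-group failover rule, prefer the first agent in the group and fall back to the next one that's up, is simple to model. A sketch, with made-up server names:

```python
def pick_agent(agent_group, is_up):
    """Choose the agent a job should run on.

    Models the agent-group behavior from the demo: walk the group in
    order and return the first agent that is currently up.
    """
    for agent in agent_group:
        if is_up(agent):
            return agent
    return None  # every node in the group is down

# Example: the primary node has failed, so the job runs on the second one.
status = {"prod-aix-1": False, "prod-aix-2": True}
```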

So this is done automatically in the background. Just install the software on each of the nodes, and it will pick the one that's available. So I like to talk a little bit about disaster recovery. We also have our audit history. So we're going to track everything that happens within Skybot. So if we look at the audit history now, you can see here all of those jobs that I just created. So when I did that import and brought all those cron jobs into Skybot, it created a bunch of records for me for all of the different objects that were created. So here's my cron job number nine. I did it, and this is when. We have every field in the job setup that was created. We do have reports for that audit history and job history. We have a good morning report that will show you your exceptions. I always tell people, "If you've got a lot of information on your good morning report, it's not a very good morning," because those are your schedule exceptions. But all of our reports can be scheduled. Again, it's across all of your systems. It's not just per system. It's across your entire enterprise.

Forecasting with Skybot Scheduler

Then, the last thing I want to show you is the forecasting. So we have forecast models that you can create over certain time periods, say, if I want to see everything that's running for the next three days. What I've got here is what's scheduled to run this weekend. So I just picked Friday, Saturday, and Sunday. I generate that forecast. What Skybot does when we generate the forecast is it looks up the schedules, looks up the reactivity, all the history, and pulls that all together into a file. And now over here, you can see I've got that group of jobs that I've run, and I've got a few other IBM i jobs as well, all the different agents. This is the time that the first job is scheduled to run. Even though these don't have a time, because they're reactive to a previous instance of a job, we're going to determine when that job should run. If I click over here on this reactivity tab, it's going to show me how those jobs are related.
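The forecasting idea, time-scheduled jobs use their own start times while reactive jobs inherit the predicted finish of the job they react to, can be sketched like this. The field names and the estimated-runtime approach are illustrative assumptions, not Skybot's actual forecast model:

```python
from datetime import datetime, timedelta

def forecast(jobs, window_start, window_end):
    """Predict when each job in a chain should run inside a window.

    Jobs with a `scheduled_at` time use it directly; reactive jobs
    (those with an `after` predecessor) start when the predecessor is
    predicted to finish, estimated from its typical runtime.
    """
    by_name = {job["name"]: job for job in jobs}
    predicted = {}
    # First pass: jobs with an explicit scheduled time.
    for job in jobs:
        if job.get("scheduled_at") is not None:
            predicted[job["name"]] = job["scheduled_at"]
    # Repeated passes resolve reactive chains of any depth.
    for _ in range(len(jobs)):
        for job in jobs:
            after = job.get("after")
            if after in predicted and job["name"] not in predicted:
                predecessor = by_name[after]
                predicted[job["name"]] = predicted[after] + timedelta(
                    minutes=predecessor.get("runtime_minutes", 0))
    return {name: when for name, when in predicted.items()
            if window_start <= when <= window_end}
```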

So that's Skybot in a nutshell. It should help you be able to manage all of those Power Systems, whether they're in the cloud, on premises, a private cloud, a public cloud, a hybrid cloud. Hopefully, we can help you manage your jobs wherever they're located. Now, I think we have one more poll before we start taking questions. So are you planning for a cloud implementation? Maybe this year, or next year, or not yet. I should probably have a few more options here. So based on what you've heard today, thank you, T.R., for the information on why business needs that cloud technology, and the time and money savings it can provide. Lots of flexibility, quickly being able to spin up resources if you need them. Then, hopefully, some information on how you can manage that workload with an enterprise scheduler that's going to be sitting over all of your systems. So we should get our results in just a second. All right, we have 38% that are planning for a cloud implementation this year, that's awesome, 19% next year, and 44% not yet or no. Hopefully, it's only not yet. All right, Tami. I'm going to pass control to you, and we can see about any questions that we have.

Conclusion and Questions

Tami: That's great. Thank you so much. I'd like to take just a few minutes to answer some questions now. Again, if you do have a question, you can use the Q&A panel on the right side of your screen to send it in. Okay, first question. If I have servers in different locations, would this scheduler work in my environment?

Pat: It certainly will, as long as the central server has an IP connection to the agent, wherever it is. So we do need a port open between those two servers, between the host server and the agent. If there's a firewall, we're going to need to have a port open between the two.

Tami: Got it. So does the partition, either AIX or Linux, that supports the Skybot enterprise scheduler have to be dedicated to that app only?

Pat: It does not. The scheduler does not take up a lot of resources. You can absolutely run other applications on that server. The scheduler sits there and waits for something to trigger, either by a time or an event. It doesn't take a lot of resources on its own. So the short answer is no. Sorry about that.

Tami: Okay. Can you tell me how Skybot is licensed? Is it by job, by user, by server?

Pat: Skybot is licensed by server, or by agent. So there's a license for the central server, for the host, and then there is a license for each of the agents, Linux or AIX, or IBM i, whichever. So you license those. We do have a high availability option for Skybot; that would be licensed as well. Then, there'd be a license fee for each of the interfaces. So if there was an SAP interface involved, or Informatica, or one of those applications that we've got an integration with, there's a fee. But there's no limit to the number of users. There's no limit to the number of jobs. There's really no limit to the number of agents. It's going to depend on the resource power of that server.

Tami: All right, great. Thank you. Well, thank you, T.R. and Pat, for sharing your expertise with us today. Just a quick note that we will be sending out a link early next week to the recording of today's presentation to everyone who's on the call, as well as anyone who registered for the event but for whatever reason couldn't make it. So that concludes our webinar. I want to thank everyone for attending, and have a great day.