Build an Integrated Job Scheduling System that Works for Your Entire Enterprise

On-Demand Webinar

IBM i

 

Do you have independent teams managing different platforms, but also have job dependencies within your shared schedule? If your operating systems are not working together, you are dealing with inefficiencies that can cost you time and money, not to mention the risk of human error. Automate Schedule, the cross-platform job scheduling software, now has an interface for Robot/SCHEDULE, so you can include IBM i jobs in your enterprise schedule. 

Join us! In less than an hour, we’ll show you how to schedule jobs across multiple platforms and applications, without needing to share system authority. We will work with the following interfaces:

• Robot/SCHEDULE
• SAP
• Informatica
• SQL Server
• Oracle E-business Suite
• Additional interfaces, which are easy to add

Pat Cameron: Welcome, everyone. Thank you for joining us today. My name is Pat Cameron. I am the Director of Automation Technology here at Skybot Software. Today's webinar is about building an integrated job scheduling system. I think most of you out there are probably Robot customers. What we're going to show you today is a distributed systems scheduler that interfaces with Robot Schedule, so that you can have one scheduler and schedule jobs across your entire enterprise. I'm here today with Dennis Grimm. Hello Dennis, welcome.

Dennis Grimm: Hi Pat, hi everybody.

Pat Cameron: Our office is located here right outside of Minneapolis in the deep freeze of Minnesota, but Dennis, I'm afraid you're in the snow belt of Minnesota. I think you guys have gotten even more snow than we have up here. It seems every time it comes through, at least it slows down when it goes through Rochester, doesn't it?

Dennis Grimm: Yeah, a little bit. It's starting to pile up quite a bit.

Pat Cameron: I bet it is. I’m working on my 15th year with HelpSystems. I worked with Robot the first 10 or 11 years; I did training and implementation. I'm really familiar with Robot and have been working with Skybot over the past 4 years. Dennis works with both the products, too. You work with everything don't you?

Dennis Grimm: I try to dabble here and there.

Pat Cameron: There you go. We wear a lot of hats. What we're going to be talking about today is enterprise scheduling. Also, a couple of housekeeping things. I will be recording this and we'll have a link on our website later on when we get it back from WebEx, as well as probably a follow-up email from us. So, if you want to share this with anyone else, or if we talk about something you miss, you can certainly take a look at the link when you have time to do that. First we've got a presentation; we're going to talk about Skybot Scheduler and how it interfaces with a number of applications as well as Robot. I probably have about 15 minutes of slides and then we're going to go online and show you Skybot Scheduler live. We do have a number of Robot customers that are using Skybot along with Robot for a number of reasons. We'll kind of talk about those as we go.

A little introduction to Skybot Scheduler. Skybot is a scheduler for Windows, Linux, and various Unix platforms. I was an operations manager in the years before I joined HelpSystems, and I know that data centers are just getting more and more complicated. You've got a lot of different business applications you're running, a lot of different platforms, and what we're trying to do with the Skybot Scheduler is bring automation to those distributed systems like we did with the IBM i. What you can do with the Skybot Scheduler is automate the jobs on those distributed systems, and it doesn't matter what the platform or operating system is, or what application it's running.

With Skybot, we can set up dependencies across those applications. Just about every application you can buy now comes with its own scheduler: if you buy Windows, it has Windows Task Scheduler. Linux has a cron scheduler you can use, but they're all kind of individual on those individual servers. What Skybot allows you to do is centralize the management of those jobs. It gives you history all from one place, and job setup from one place. We've got built-in notifications just like we do with Robot. So if you've got job monitors or are monitoring jobs for a late start or an overrun, we can now do that with jobs that are running on Windows and Linux as well. Creating a cross-system job stream is really easy. We'll show you how to do that today and show you how to react to Robot jobs.

We’ve got specific interfaces to a number of popular enterprise applications, one of them being SAP. We have a huge customer base for SAP; we’ve got a number of customers running that on Windows. Oracle E-Business Suite is another one that’s really popular. So we want to make it easy to run those jobs, because what happens is that even if you have SAP and it’s running most of your ERP systems, you probably have backups that you run or other applications and file movements going on. When you have jobs that are going on outside of SAP, you want to be able to schedule those in one big job stream. Skybot Scheduler is kind of a hub for those business processes.

We've got a couple different interfaces to Informatica: Informatica PowerCenter and Informatica Cloud, for those of you that are running that as your ETL system. Then, Microsoft SQL Server. A lot of people may have some home-grown applications as well that they're running with SQL Server. We want to make it easy to include those SQL Server jobs within your job streams. Like I said, a lot of those applications come with their own scheduler, but they're not really robust. You can't schedule jobs on other servers in there. The value of Skybot Scheduler is the ability to centralize that management and create those event-driven job streams.

We also interface with Robot. Robot is a great scheduler for the IBM i. A lot of times there may be dependencies between jobs that run on the i: as soon as a job finishes, I want to run something on one of my Windows servers, or vice versa. We can do that with our interface between Skybot Scheduler and Robot. Skybot's main focus is enterprise scheduling: that's what it does best. We also do monitoring. So, we can monitor jobs for failures and other statuses as well as monitor for files.

One of the things that we found in this distributed world is that, many times, what's triggering a job stream to run may not be another job but a file that's coming in from a customer, a vendor, or a client. As soon as that file gets put on one of my servers—maybe on my FTP server—I want to wake up and run a job stream to process that file. We can monitor for file events, directory events, and other processes that are starting and stopping. If you've got some critical services that you wanted to monitor on your servers, we can monitor for those as well and notify you if there's a problem. If a job fails or if a job runs late, we want you to know right away. The HelpSystems philosophy has always been to be able to manage your systems by your exceptions. You should be able to set up your monitoring and automation and let it run. You shouldn't have to check in a number of times a day and make sure that things are running okay or that your month-end processes are running. You want to be notified if there's some type of exception, but if everything is running okay, you want to have enough trust in your scheduler that it's going to run things on time. We'll notify you if there are any kind of delays or failures.
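A file-event trigger like the one described here is what many shops otherwise script by hand. As a rough illustration only (this is not Skybot's implementation, and the directory and file names are hypothetical), a home-grown watcher might look like this:

```python
import os
import time

def wait_for_file(directory, filename, poll_seconds=30, timeout=None):
    """Poll a directory until the expected file appears; return its full path.

    Raises TimeoutError if a timeout is given and the file never shows up.
    """
    path = os.path.join(directory, filename)
    waited = 0
    while not os.path.exists(path):
        if timeout is not None and waited >= timeout:
            raise TimeoutError(f"{path} did not appear within {timeout}s")
        time.sleep(poll_seconds)
        waited += poll_seconds
    return path

# Hypothetical usage: block until the bank file lands, then hand it
# to the next job in the stream.
# path = wait_for_file("/data/incoming", "transactions.csv")
# subprocess.run(["process_file.sh", path], check=True)
```

A scheduler with built-in file-event monitoring replaces loops like this with a configured rule, plus the notification and history tracking the loop lacks.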

Like I said, in the distributed world, file movement is huge. We've got a lot of transaction-heavy customers and financial customers who are doing a lot of file movement. We built in a file transfer function so you don't have to write any scripting to do FTPs, file pulls, or pushes to or from your FTP servers. Those of you that are familiar with Robot Schedule Enterprise know that it has some of those same functions, and it can schedule from the IBM i. What we found from the customers that are running Robot is that sometimes they don't want their Windows group to be able to schedule their jobs through the IBM i. If you do have a bunch of file transfers to set up with Schedule Enterprise, you'd have to go in through Robot, which is great: you have all those dependencies and can run those jobs on the other servers or set up FTPs. But if you have a separate group that doesn't have access to the IBM i, Skybot Scheduler would be a good solution. That group can set up their jobs and their FTPs without having to have a user profile on the IBM i.
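For a sense of what the built-in transfer function saves you from writing, here is the kind of small one-off FTP upload script shops typically maintain by hand. This is a generic Python sketch, not anything from the product; the host, credentials, and paths are placeholders:

```python
from ftplib import FTP

def push_file(host, user, password, local_path, remote_name):
    """Upload one file over FTP -- the sort of one-off script a
    built-in transfer function replaces."""
    with FTP(host) as ftp:
        ftp.login(user=user, passwd=password)
        with open(local_path, "rb") as f:
            # STOR sends the local file to the server under remote_name.
            ftp.storbinary(f"STOR {remote_name}", f)

# Hypothetical usage (placeholder host and credentials):
# push_file("ftp.example.com", "batchuser", "secret",
#           "/data/out/invoices.csv", "invoices.csv")
```

Multiply scripts like this across dozens of servers and transfer partners and the maintenance cost adds up, which is the argument for centralizing transfers in the scheduler.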

We do have good security built in to Skybot Scheduler. It will interface with Active Directory, so the people who will be managing jobs or setting up monitoring can just log in with their network login. Then, you can create a group in your Active Directory server and map it to a role in Skybot, and it takes care of setting up those users. We have 30 years of experience in scheduling and systems management, so we tried to do a lot of things within the Skybot product so that you won't have to do a lot of maintenance.

We also do a great job of auditing. Just like with Robot, we'll audit all the changes to the schedule—all the objects that get changed. Then, you can schedule a report that goes right to your auditor, who can keep track of all those changes. Between our audit history report, our job history report, and our security report, I think most of your SOX requirements would be taken care of, as well as other audit requirements and regulations you might be governed by.

In the distributed world, you've got a lot of different types of tasks you want to schedule. What we can do in Skybot is schedule a SQL Server job to run a package I've got, and when that completes, maybe it's going to create a file that I want to FTP to another server—maybe to a client or a customer. I could have a job that finishes on my SQL Server that then triggers a file transfer, and then that could trigger an SAP NetWeaver job. We can also run process chains in SAP; that's a recent enhancement we just added to the product. Along with running ABAP programs, you can also schedule process chains using the Skybot Scheduler interface into SAP. Again, these could all be taking place on different servers, but they could all be within one job stream if you needed them to be. Then, you could trigger an Informatica workflow. Tasks or workflows that are created either in Informatica PowerCenter or Informatica Cloud can be part of that job stream as well.

We can also set up Robot reactive jobs. So, once all those processes are done, I might want to trigger a job over on my Robot system to run. I can set up that reactivity very easily between those other systems and Robot. One of the other options that we have with Skybot Scheduler is a high availability option. We use kind of a hub-and-spoke architecture for Skybot. Your central server can be on a Windows, Linux, or AIX system, whichever one is in your data center that you want to use. It does not have to be a standalone dedicated system; other applications can be on there. You've got your central server and then you have an agent that's installed on each of the VMs or servers that you want to manage. They all communicate back to the central server. They send their statuses when they're done running, whether they were successful or not.

You can create a standby server, where you install the Skybot software, and then instead of having its own database, you replicate the database from the production server over to your standby server. That replication takes place in real time, so your history, job setup, and rules are all up to date. If the communication is broken—if the server goes down or if you want to take it down for maintenance—you can bring the standby server up in production mode, and the agents that were formerly communicating with the original central server will automatically start sending their statuses over to the standby server, which is now in production mode. Some customers will do a role swap every few months to make sure they can run on either system. It's an easy swap to go from one server to another. It takes a few minutes, and you can be assured that your jobs are going to keep running.

Any jobs or scripts that are running on these agents will continue to run and then report their status codes to your new server when they complete. Here we have an example of a job stream. I think we're going to take a look at this one live on one of our demo servers. Here I've got a file event that's triggering the whole process. As soon as I see this file in one of my directories on one of my servers—it happens to be an AIX server—I want to trigger a job to run over on my SQL Server. The point of this job flow is just to show you that this file is coming in on an AIX server and will trigger a job on one of my Windows servers—the SQL job.

When that completes, I'm going to run an Oracle job on one of my Linux servers. When both of those are completed, I'll run SAP, then down to Informatica, and then I'm going to run a backup when it's all done. When that backup completes, I'm going to trigger a job over on Avalon, which is our Robot system. When that backup is complete, it's going to come back to one of my Skybot servers. It's very easy to set up these dependencies or reactivity across different applications or servers. We've found that job streams run a lot quicker because you don't have to schedule this SQL job hoping that the new file is there. You can be assured that this job won't run until that file appears, and will only run when that file is there. You'll have a lot fewer errors because jobs won't try to run before their data is ready.

A lot of times people will try to schedule them at a certain time because that's when that file usually gets there, but maybe there was some kind of delay at the bank and it wasn't created on time. This job will always run when that file is available and only run when that file is available. There is a little setup to do for each of these interfaces. For the Robot interface, we didn't make any changes to the Robot user interface, so those of you who are Robot users won't notice anything different on your system. You would need to update to the latest version in order to get the interface to work, but there aren't any other changes. We didn't want to upset our Robot client base by putting changes in there.

We made it really easy to have that reactivity between a Robot job and a Skybot job. It's the same reactivity that you use between Robot and a user job. In Robot, a prerequisite can be another Robot job, a remote Robot job on the network, or a user job. We've just defined the Skybot jobs as user jobs to Robot, so nothing changes in your setup on the Robot side. When a job finishes, completion codes are sent over to the Skybot system or sent from Skybot to Robot, and we're going to talk a little bit about that in our demo. Dennis has got the scoop on how that communication works.

This is the Skybot interface, and we can set up reactivity to a Robot job. This is my react-to-Robot backup; this is the job that is going to react. Here's where I build my list of prerequisites, and I have an option here to "add a prerequisite." It could be another Skybot job or event on the Skybot server. We've got a button here for a Robot Schedule prerequisite. You would just select the Robot system and select from a list either an individual job or a group job, and pull that into Skybot. Then, down here, you can see that when those jobs run, we send those status codes over to Skybot. So, this is my react-to backup job running in Robot Schedule: we send the submitted code, then it's active and running, then it completes, and this is the remote server it came from. We have that history over on the Skybot server.

The SAP interface adds some functionality. CCMS is the schedule control and management system that is included with SAP, but we add additional functionality as far as notifications and different types of prerequisites. You're kind of limited there, so we've increased the types of prerequisites or start functions that you can use. You can define your SAP system to Skybot and very easily create jobs that will run over on those SAP servers, and now you can include those SAP jobs along with jobs from other applications, one of those applications being Informatica. A number of our customers are using both Informatica and SAP. You define your Informatica system, and the drop-down boxes show the workflows that you can run from Informatica within Skybot. It's the same thing with Informatica Cloud tasks; all we need to know is where that server is.

We'll use web services to communicate with those servers, then we can run those jobs over on the remote servers for you. It's the same thing with Microsoft SQL Server. We need to know where that server is: we need to have an agent on there so we can run those jobs. Then you create a SQL Server Agent job and we'll run that. You can schedule some SQL Server jobs on their own, but if you want to schedule those jobs in with other applications, you would need a scheduler like Skybot.

Cron, like I said earlier, is a scheduler that you can use on Linux or Unix; it's part of the operating system. A lot of customers use cron on those servers. It's free and has a lot of good scheduling options, but it's a little bit cryptic if you're not a programmer; it isn't the easiest thing to schedule jobs in cron. So what we've done is create an import: we've got an option for you to import a crontab file into Skybot, and we'll create individual jobs for each of its entries.

This is one of my crontab files and I just do a bunch of echo commands, but now what I can do is import those jobs into Skybot and set them up as Skybot jobs. I can use this same schedule type; this one runs every day at 3:00, I think that's what 015 is. Now I can add notifications to it and I can add dependencies to it very easily. Month-end jobs are a little bit hard to schedule with cron as well. Now you've got all the different options with Skybot.
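The "cryptic" part of cron is its five positional time fields. As a purely illustrative sketch (this is not Skybot's importer), a few lines of Python show what each field in a crontab entry means:

```python
# Field order in a crontab entry:
# minute  hour  day-of-month  month  day-of-week  command
FIELDS = ["minute", "hour", "day of month", "month", "day of week"]

def explain_cron(entry):
    """Split one crontab line into its five time fields plus the command."""
    parts = entry.split(None, 5)
    schedule = dict(zip(FIELDS, parts[:5]))
    schedule["command"] = parts[5]
    return schedule

# "0 3 * * *" means 3:00 AM every day -- readable once decoded,
# but easy to get wrong when writing it by hand.
print(explain_cron("0 3 * * * /home/ops/backup.sh"))
```

An importer that turns each entry into a named job with an explicit schedule removes the need to decode these fields at all.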

So that's how you'd move your jobs from cron to Skybot. Now let's take a look at Skybot. We'll do a little overview and just show you some of the screens. Then, we'll talk specifically about that interface between Skybot and Robot Schedule and point out a few of the nice things about our application interfaces as well. So, I'm going to bring up my copy of Skybot, and the first thing I'm going to do is share with my friend Dennis here. All right, I'm going to give you control for a minute, Dennis, and then I'm going to take it back. So, we may be fighting over the mouse before this is over.

Now, I know we're always fighting over something. So I've shared my desktop. If you want to send a chat message, if you have any questions while I'm doing this, if you just move your mouse up to the top of the screen you'll get a little drop down box from WebEx. Just click on that chat window and you can bring that up—it just sits in the corner of your desktop. If you have any questions, send a chat to me or Dennis, or both.

This is the interface for Skybot Scheduler. As you can see, it's browser based, so you don't have to install any kind of client on your workstation. Here are all the various history screens. Here's our job setup that we'll be taking a look at, and a number of different scheduling objects. Calendars are just like they are in Robot. Date lists are kind of like date objects, and here are your FTP servers. This is where you define objects that you want to set up once and use multiple times. SAP's got its own little chunk here because it's so big. This is just to show you that we have some analysis tools here. We've got some great reports that you can run, we've got commands for all our reporting, and system administration as far as setting up security and system settings. This is our cute little dashboard that tells you what's going on. Just to show you what the interface looks like, we're going to bring that up. This is the same flow chart I had on my slide, and this is a number of jobs that are running one after another on different applications. I'm going to let Dennis talk about what we're doing here.

Dennis Grimm: Okay, thanks, Pat. As Pat said, we're at our flow chart, and the other thing I'd like to bring up also, since we are based off of a browser, is our tab feature. So, for each of the different things that you can do in Skybot, you can open its own tab. It's easy to get from one part of the product to the other just by going to the different tab that you might already have open.

As Pat said, you know we have our flow chart showing how jobs run. There are different types of jobs—SQL, SAP—running on different platforms in specific orders that they need to run in. The other thing with job flow we can do is get into the job and edit it. If we need to change something specific in this particular job, we can do that right from this screen and not have to go back to any other screens. It’s a very nice flow we have going on here for each of these that we're running.

So, for SAP, SQL, and Informatica, we need to define those systems ahead of time so we can actually run jobs against them. For SAP, we do that here in the SAP NetWeaver section; the others are under "scheduling objects." We have a spot for each of the different applications: our Informatica setup, our SQL setup, our system definitions. We'll take a look at the Informatica Cloud setup. If we're going to be running some jobs against the Informatica Cloud system, we need to define in Skybot where that system is located. We just need to define it once on the system.

So, here we've put in all the information we need to access our particular Informatica Cloud server. Once this is defined, we can use this inside of a Skybot job. What's nice here is that if we ever do have to change something in here, possibly our user ID that we're using to connect or the IP address, we can just come to this one location, update it here and it globally updates any of our Skybot jobs that were using that information. We don't have to go into these hundreds of jobs and update them, we can just go into one spot, update it, and it will change it for every job using it.

So, that's the Informatica setup and it's going to be the same for looking at the SAP setup. Let’s take a look at that. It's going to be the same type of information that we're taking a look at. For setup, you just need to give the name of your application server, instance numbers, system IDs, and all the general information that anybody that works with SAP will know. So, we can use it within Skybot. That's our SAP and we can take a look at how we define our SQL systems also. In here, we can bring up one of those. Again, we're just going to put in the information that we need to access that particular SQL instance, so we can run the SSIS packages on that system. That's how we define those within Skybot.

Then, with that type of information, to actually put this into some Skybot jobs, we can go back to our flow here. We'll take the SQL one and edit it to find out where inside of Skybot we actually put that information. Here's a Skybot job setup. This is just standard information: the job name, what system we're going to be running it against, etc. We've kept it simple, so that any job can run any type of program. You just need to tell us what type it's going to be running. When we go down to our commands, you can see we're doing an FTP transfer. Like Pat was saying earlier, we have that built into Skybot, so we can do a quick FTP transfer. Then, within that, we're doing an SQL-type job. Here's the little drop-down where we chose the definition that we had previously set up on the screen you saw. You just choose the one you want to run against, put in the name of the agent you want to run it on and the SSIS package, and save that. Then we'll be running that agent job via Skybot. We make it easy to add any other type by just clicking on the little caret down here. This shows all the extra definitions we can do. If we need to run an Informatica Cloud task, a workflow, an Oracle concurrent request, or SAP-type jobs, they're all handled by adding this command in. Add in the information you need, save it, and then we'll run those jobs.

Pat Cameron: Something that you brought up is nice. If you’ve got schedulers or some kind of production control area, they don't need to learn all different kinds of interfaces, they can just schedule those all from Skybot.

Dennis Grimm: Correct, you don't have to go learn Informatica's UI and SAP's UI. You just have to know Skybot's UI, and that's all you need to get into.

Pat Cameron: Beautiful.

Dennis Grimm: Alright, so those are our different application interfaces. Now, let's go ahead and get into connecting with the IBM i and Robot Schedule. For that, under "scheduling objects," you'll find a remote servers listing. I've got that set up here. Here's the list of the different remote servers that we have on the system. These are each of the IBM i systems running Robot Schedule that we want to work with. In this particular instance, I'm going to be working with this training system.

The way we communicate between the two is by defining what system we want to go to. Then we define the information and make the connection over on the IBM i side by creating a data queue on there. Then, Skybot talks to the IBM i and Robot through the data queue, and Robot Schedule goes through the data queue to talk to Skybot Scheduler. Everything is handled through a data queue on the IBM i, including the statuses going back and forth between the systems. All we need to do within Skybot to define these systems is put in the IBM i server, then the user name and password we want to do the communications through. Then it's just a simple matter of testing the communications and making sure it can talk to the IBM i system. We just got the little status down here that the connection was good.
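Conceptually, the data-queue handshake is a shared message queue: one scheduler posts a job-status message, and the other reads it to decide whether a reactive job can run. This Python sketch mimics that pattern with an in-process queue purely as an analogy; it is not the actual IBM i data queue API, and the job name is hypothetical:

```python
import queue
import threading

# Stand-in for the IBM i data queue: Skybot posts job-status messages,
# and the Robot Schedule side reads them to release reactive jobs.
data_queue = queue.Queue()

def skybot_side(job_name, status):
    """Post a completion status onto the shared queue."""
    data_queue.put({"job": job_name, "status": status})

def robot_side(prereq_job, results):
    """Wait for the prerequisite's status; release the reactive job on success."""
    msg = data_queue.get(timeout=5)
    if msg["job"] == prereq_job and msg["status"] == "COMPLETED":
        results.append(f"reactive job released by {prereq_job}")

results = []
listener = threading.Thread(target=robot_side, args=("BACKUP_DB", results))
listener.start()
skybot_side("BACKUP_DB", "COMPLETED")  # the Skybot job finishes
listener.join()
print(results)
```

The point of the pattern is decoupling: neither scheduler calls into the other directly, so each side only needs to read and write the queue.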

Once we have that in place, we just need to register the system. Once we register the system, that's when we'll be able to actually run jobs between the two for reactivity. You can see we have a couple here that are not registered, so we couldn't run any reactivity with those right now, but this one down here is registered. To register, you just right-click and say yes, I really do want to register this system to work with that IBM i system. It's as simple as that.

Alright, so now to set up reactivity between the two, we're mostly going to be setting it up on the Skybot side and then just a little bit on the Robot side. It depends on what type of reactivity we're doing: if it's a Robot job reacting off of a Skybot job, we do a little bit on Skybot and a little bit on Robot. If we're making a Skybot job reactive off a Robot job, there's just a little portion that we need to do in Skybot. To have Skybot be a prereq to a Robot job, we want to kick off a Robot job when a Skybot job has finished. We can pick our training system and have the Skybot event be our remote prereq. We currently have one called backup DB. When this job called backup DB runs in Skybot and completes, we're going to kick off a job within Robot Schedule.

In here, we just kind of set it up: dependency, backup DB. We do have an extra field in here for a user job override. We have this because we're going to be tracking user jobs. If there's any chance that you might have a job called backup DB on your IBM i that's already running, you can come in here and do a user job name override and give this any name that would be very specific to this particular Skybot job. Once we get that set up and save it, we can go into Robot Schedule. Here we have a job called Robot backup; this is the job we want to run after this particular job in Skybot has finished.

We'll just go ahead and open up this job. If we go to our reactivity, you'll see that all we're doing is having a user-type job, and we've got the backup DB name in here. Whenever the job in Skybot runs, we're going to pass that information into the data queue on the IBM i. It's basically saying that this job is running, so are there any jobs that are using it as a prereq? Robot will grab the information and say hey, my Skybot job ran, so now I can go ahead and run. This job in Robot Schedule will go ahead and kick off. That's the chain that we have going on between the two systems.

Pat Cameron: Excellent.

Dennis Grimm: Let me cancel out of here. Let's go to our flows. So, when we look at our job flow, here are our Skybot jobs. It says hey, when this is done, go run my Robot backup job. Then when that Robot backup job finishes, run this job back on my Skybot system. We can take a look at how that is set up. Just by going into the prerequisites for this particular job, you can see it's set up just by clicking one button. So if I need to add a Robot job in here from our training system, I click on this and it gives us a list of all the jobs that are already set up in Robot Schedule on that system. You just pick what you want from the list and it adds it in here. Now it's all set up, so this job will react whenever that job runs over on Robot Schedule.

Pat Cameron: Nice.

Dennis Grimm: You can run things right from here also, if you need to do so. I'm going to start here: we'll run this backup database job on the system. Then we can watch the reactivity if we go to our history here within Skybot.

Pat Cameron: Here we have history from all different agents. You can see all the different servers in our Skybot network under the agent column.

Dennis Grimm: There are all the jobs running in one spot, so you don't have to go from system to system to see what's been running. It's all in one application for you. It's a 15-second job, so we'll just let that go. So you can see that our backup database job did complete. Then we can check on the Robot side here to show that it ran, and when it ran and completed, it kicked off a react-to-Robot backup job. We see that it was a reactive job and that it ran on the system. If you take a look at this, there's the job that just got kicked off a minute or so ago. There's our reactivity on the chained system.

Pat Cameron: We can see how quickly those react from one to another. Of course, our jobs don't run very long, but the reactivity is immediate. If you want to look at that reactive record, you can just double-click on that reactive Robot job. It'll show you that it's reactive, and then down in the detail it shows you the initiation code is reactive, and it says "reacted to job backup remote server training." So, we'll keep that in history too: exactly what caused that job and where it came from.

Dennis Grimm: Yeah, tons of info in there. Just quick and easy.

Pat Cameron: Excellent. A couple questions have come in; they might have just come to me, I think. Frank's asking, "does Skybot have a green screen app or is it strictly GUI?" It's strictly browser based, Frank. There's no green screen because it's not really running on the IBM i; it's running across multiple servers. The application is browser based. You don't have to install any graphical interface or any kind of a client on your workstation. It's all accessed through the browser. Don's asking about Windows 2012, and I believe yes, we are certified for 2012?

Dennis Grimm: Yes, we're certified for 2012.

Pat Cameron: All right, perfect. We need to make sure to get that on our website, if it's not.

Dennis Grimm: Yeah, we support 2003 and up, and we did certify against 2012.

Pat Cameron: I just have a couple of other things, then. If we go back to our job flow diagram, as far as notification goes, you can add status notifications if a job fails. Here we've got all the statuses. We have kind of a queuing function: each agent has a job queue, and you can limit the number of jobs in that queue. You may skip a job based on some condition that you’ve set, so a job might go to that skipped status, or to running, completed, and then failed. What we’re doing here is sending out notifications to a list of email addresses. We use SMTP or SNMP and send the job log right along with that notification in the email. We capture standard out and standard error, create a job log out of that, and send it off. We want to make sure you have good troubleshooting tools.
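The capture-and-notify pattern Pat describes can be sketched in a few lines of shell. This is a minimal illustration of the idea, not Skybot's actual implementation: the job command, log path, status file, and email address below are all made up, and the mail command is commented out so the sketch runs anywhere.

```shell
#!/bin/sh
# Minimal sketch of the capture-and-notify pattern described above.
# The job command, file paths, and address are illustrative only.
JOBLOG=/tmp/backup_database.log

# Run the "job" (a stand-in that prints something and then fails),
# capturing both standard out and standard error into one job log:
if /bin/sh -c 'echo "backing up..."; echo "disk full" >&2; exit 1' > "$JOBLOG" 2>&1
then
    STATUS=completed
else
    STATUS=failed
    # On failure, send the job log along with the notification, e.g.:
    # mailx -s "Job failed: backup_database" ops@example.com < "$JOBLOG"
fi
echo "status=$STATUS" | tee /tmp/backup_database.status
```

The key detail is the `> "$JOBLOG" 2>&1` redirection, which merges standard error into the same file as standard out so the notification carries the complete job log.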

Also, for job monitors, like Dennis said, we can monitor job overruns, underruns, and late starts. So that same type of functionality that we have in Robot, we have with these jobs over on your Skybot server. I think we've looked at all the interfaces here, so we're going to show the imports. If you are using cron on either a Linux or a Unix system and you want to move to Skybot, we have an import center with a couple of different types of jobs that you can import. The Skybot import would be if I'm moving from, let’s say, a test environment to production: I can export all my jobs and other objects and import them into my other server. For cron, I can export those crontab files. You run an export command over on your server and put a copy of that file on our Skybot server to bring it up. You can bring them all in on hold; it's not like you have to throw a switch and stop running them in cron and start in Skybot all in one swoop. We can bring them into Skybot on hold, then decide to release them and delete them from the crontab file, or simply comment them out. I can bring all of them in by just clicking on there.
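As a rough sketch of that workflow, exporting a crontab and later commenting out the migrated entries might look like this from the shell. The file paths and sample entries are illustrative, the live `crontab` commands are commented out so the sketch is self-contained, and the actual import happens on the Skybot side; also note this uses GNU sed (BSD sed needs `-i ''`).

```shell
#!/bin/sh
# 1) Export the live crontab to a file (commented out here so the sketch
#    runs anywhere), then copy that file to the Skybot server and import it:
# crontab -l > /tmp/crontab_export.txt

# Stand-in for an exported crontab file:
cat > /tmp/crontab_export.txt <<'EOF'
0 3 * * * /usr/local/bin/nightly_backup.sh
30 6 * * 1 /usr/local/bin/weekly_report.sh
EOF

# 2) Once the imported jobs are released in Skybot, comment out the old
#    entries instead of deleting them, so they are easy to restore:
sed -i 's/^[^#]/# &/' /tmp/crontab_export.txt
cat /tmp/crontab_export.txt

# 3) Optionally reload the commented-out schedule:
# crontab /tmp/crontab_export.txt
```

Commenting out rather than deleting is the safety net mentioned above: if an imported job misbehaves in Skybot, the original cron entry can be uncommented and reloaded in seconds.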

I just ran a bunch of echo commands, but yours would be a script, for sure. Or I can select which of these entries I want to create a Skybot job out of. Let me import those. What we’ll do is take those individual entries in the crontab file and create an individual Skybot job out of each of them. Now you’ve got them over in Skybot. These are on hold and have schedules, and they’ll show you the next time they’re scheduled to run; there’s that 3 o’clock job. Now you've got all the other options as far as adding prerequisites, job monitors, status notifications, etc., and a bunch of other options as well. So, I think that is our story for the day. Anything else you can think of, Dennis?

Dennis Grimm: There's a lot in there. We could go on for hours.

Pat Cameron: We could go on for hours, exactly. But I think that's as far as we'll go for today.

Dennis Grimm: Yes, I think we've hit everything for today.

Pat Cameron: Alright, let me take a look. We've got a couple questions that came in. Do we have a built-in for an N4? We do not. We don't have an interface for that, but if it's got a command line interface, typically what we can do is schedule those jobs through that command line interface. That would be something we could test. Brian wants to know if I have examples of using the web service integration. I don't right now, but we are going to do a webinar in a couple of weeks on the new features in 3.5, and web services is something we're going to cover then. I'll send you some information on that, Brian. We've got a RESTful interface to web services, so we can receive web services requests. And recently we added the ability to consume web services, so we can also send web services requests out to web servers. Greg wants to know what level of Robot/SCHEDULE you need to have this interface with Skybot. I believe it was released in August. Is it 11?

Dennis Grimm: 11.16 of Robot/SCHEDULE and then, I believe, 3.4 of Skybot.

Pat Cameron: Of Skybot, yes. Cool, any other questions? Alright, I don't see any coming in. Thank you for joining us today; I appreciate you taking the time. Oh, Greg wants to know how it's licensed. We can talk about that a little bit. The licensing for Skybot works this way: the central server is licensed, and then each agent is licensed. The cost depends on the agent's operating system, but there’s a central server cost and then a cost for each individual agent and interface. I know some of our competitors out there license by the number of jobs that you're running. We didn't want to do that; we wanted to make it easy, and it is a pretty simple licensing program that we have. I'm sure somebody will follow up with you, Greg. Alright, well, thanks for joining us. Thanks, Dennis, that was great.

Dennis Grimm: Thank you Pat, and thank you everybody for joining.

Pat Cameron: We are going to wrap it up for today and hopefully we'll see you next time. Thanks everyone. Have a good one.