On-Demand Webinar

3 Steps to Integrated Business Process Scheduling

Windows, UNIX, Linux


You use cron or the Windows Task Scheduler for scheduling on your Windows, AIX, UNIX, and Linux systems, but those programs' limitations frustrate you. You have difficulty coordinating schedules among multiple servers, or you're tired of creating complex scripts to deal with dependencies in your scheduled processes.

In this webinar, we show you how to:

  • Schedule a job based on a system event
  • Set up cross-server dependencies
  • Monitor for file changes and trigger corresponding processes

You'll also discover:

  • Simplified server management with a central console
  • Security settings and auditing capabilities within Automate Schedule
  • The reassuring power of instant notification


Pat: Thank you for joining us today for our webinar on “Three Steps to Integrated Business Process Scheduling”. Today we're going to be talking about how you can integrate different applications, different servers into your enterprise scheduler by using Skybot Scheduler. My name is Pat Cameron, I'm Director of Automation Technology here at Skybot Software. I have been in the IT, Operations, and Administration business for more years than I'd like to admit to, but the last 14, I have been here at HelpSystems and Skybot Software. I work with customers, helping them automate their systems and set up monitoring and enterprise job scheduling across all different types of platforms, all different types of applications. So it's a great fun job because we make people's lives a little bit easier. I'm here today with Dennis Grimm. Dennis, hello, how are you today?

[Pause] Oops, I'm playing with your mute, sorry about that. Are you here?

Dennis: I'm here, Pat. Thank you very much. Welcome, everybody.

Pat: Sorry about that. Dennis works with customers, too. How many years have you been with HelpSystems, Dennis?

Dennis: I've been here for six years, and I've been in IT for close to 20 now.

What is Integrated Scheduling?

Pat: All right, so you've got a couple of long-timers here, lots of experience in Operations. So what we're going to do today is, I've got a few slides I'm going to go through. Talk a little bit about what we mean when we talk about integrated scheduling. What are some of the problems that you might encounter trying to integrate your schedules if you don't have an enterprise scheduler? And then three steps to cross-system reactivity. We are going to go online. Dennis is going to do all the work today, and go online and show you within Skybot Scheduler how we can easily set up jobs that are dependent on one another across multiple applications, across multiple servers. So we're going to show you a live demo of Skybot when I get done talking. Over on the right-hand side of your screen, if you have any questions for us throughout the presentation today, go ahead and type your questions over in the chat. I think there are a few welcome messages out there; I'll send one out to everyone so that you can see where that chat window is. You can just type it in the window and then send it off to all panelists or to everyone. We will certainly try to get your questions answered today. If we don't have an answer, we can find someone that does have an answer.

Talking about integrated scheduling, what do we mean by integrated scheduling? If you run any kind of an IT shop and you talk about business processes, no doubt you're running multiple applications that probably have to interact with other applications on other servers or other platforms. Sometimes that integration is just files that are going back and forth from one application to another. Sometimes you're actually triggering jobs one after another from different applications. We view the enterprise scheduler as the hub of that operations area: one place that you can go to easily set up those dependencies across other applications and trigger jobs, maybe based on the starting or ending of another task, or based on the creation of or a change in a file—that seems to be something that's driving a lot of schedules these days. You want to be able to do that without having to create a bunch of complicated scripts and pay developers the big bucks to do that scripting. The other thing that happens when you do that kind of dependency scheduling with scripts is that it's not very transparent: you can't really see what's going on. You have to dig into that script to see what those dependencies are. So we want to keep things transparent as well.

Business Processes Running Across Multiple Servers

I've got a couple of examples of business processes that might run across multiple servers. The one that's probably near and dear to all of us is payroll. When you look at a payroll process, there are a lot of different pieces that need to occur in order to get that payroll to run, and to run at the right time, and to run correctly. You might have timesheets that are coming in from developers, sales reps, anybody that's keeping track of their time, so maybe you can bill it to customers at the end of the month. So you've got to get those timesheets in. They could be emailed or they could be FTP'd to an FTP server, and then you need a way to pick up all of that information, get it compiled, and pass it on over to the actual payroll processor. Once that payroll is processed, you need to make sure that that money gets deposited into bank accounts; it could be for hundreds or thousands of employees. In my previous life I was an Operations Manager at a hospital here in Minnesota, where we are located, and payroll was always at the top of our list. We wanted to make sure that that payroll ran correctly and ran on time, because everybody was depending on it.

You want to make sure that you can set up those jobs across different servers so they run without any kind of delays, without any kinds of errors, and then eventually maybe email out to customers that will be billed for that time. That's a payroll process. The other type of process that's very common these days is order entry. From your website, you've got customers that are putting orders into your website, placing those orders, and that might trigger a couple of different processes that need to run. In this example, we've got two concurrent streams of jobs that need to run. One of them needs to happen right away. I need to get that order over to the distribution center. If it's an item—if anyone cares, I'm waiting for the movie Charlotte's Web to be delivered so I can share it with my grandkids this weekend, it's an oldie but a goodie—somebody had to take that movie off the shelf and make sure that it got up to the loading dock so I could get it in just a couple days. The other piece of that goes maybe to the ERP system. We need to do some inventory updates, sales updates. These might be done during the day in real time, they might be done at night, off hours, if there are any off hours these days. And then we've got some report distribution that needs to occur.

Central Control Over Business Processes

So what we've got is we need something in the middle that can manage all of these processes and make sure that they run on time, even when they are running on two different schedules. So that would be an enterprise scheduler that would be able to manage all of those processes. So what are you using for scheduling in your environment? I've talked with a lot of customers and prospective customers that are using individual schedulers that are associated with certain applications, maybe operating systems. There are a number of free schedulers out there, and we'll take a look at some of those, and the advantages and disadvantages of using individual schedulers and using the free ones. One of the most common schedulers out there, if you're using any kind of Linux or UNIX systems, is cron. Cron is part of the operating system; it's pretty much built in. It's free. It's pretty good at scheduling tasks that need to run at a certain time on certain days. A couple of things about cron, though: it's kind of cryptic. You almost have to have a programming background in order to enter jobs into a crontab file. You lose a lot of transparency, because you might have a lot of different crontab files out there, different people setting up different jobs to run, and nobody's got the big picture.
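To make that cryptic syntax concrete, here is the kind of crontab entry we're talking about (a minimal sketch; the script path and log file are hypothetical). The five leading fields are positional (minute, hour, day of month, month, day of week), each entry is a single command, and nothing on the line tells you what the job depends on:

```shell
# min  hour  day-of-month  month  day-of-week   command (one per entry)
30 2 * * 1-5 /opt/batch/payroll_extract.sh >> /var/log/payroll.log 2>&1
```

Reading "2:30 a.m., Monday through Friday" out of `30 2 * * 1-5` is exactly the kind of thing that takes a programming background to get right.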

Free Schedulers

Nobody has any kind of control over when those jobs are running, to make sure that they don't run when they might interfere with one another. It's not a very intuitive scheduler. You can only have one command per entry in the crontab file, so it's hard to keep track of when jobs are running. You don't have any kind of a way to schedule dependencies across servers; that would have to be done within the script. I know a lot of times customers will schedule something on one server, hoping that the prerequisite process has completed on the other one, kind of trying to time it just right, but sometimes that timing doesn't work so well. So what you end up with with cron are multiple schedules that might be processing at the same time, no dependency checking, and again, you're only able to schedule on one server. So again, great for very simple, time-based tasks; not so great if you've got any kind of complexities across your servers and applications.
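The cross-server "time it just right" workaround usually amounts to two unrelated crontab entries on two servers with a guessed gap between them. A sketch, with hypothetical script names; note that nothing here actually verifies the extract finished before the load starts:

```shell
# Server A's crontab: the extract finishes... hopefully... within an hour
0 1 * * * /opt/batch/extract_orders.sh

# Server B's crontab: the load starts an hour later and just hopes
0 2 * * * /opt/batch/load_orders.sh
```

If the extract runs long one night, the load runs against stale or partial data, and neither server knows anything went wrong.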

Windows Task Scheduler, that's another one that's free. Built into the Windows operating system, it does a nice job of creating a basic schedule for something that's got to run at a certain time. You can also build some dependencies through the Windows Task Scheduler jobs, but those dependencies have to be on that one server, so you don't have any kind of cross-system or cross-application dependency scheduling. So again, you're back to that single system, not a lot of control, no way to see the big picture. Some of the other schedulers that are out there (and they're pretty common) come with the applications. Just about every application that you buy these days will have its own scheduler built within it. Microsoft SQL Server has its own scheduler; it's separate from the Windows Task Scheduler, and it's just for SQL jobs. That's both its advantage and its disadvantage: it's included, but it's just for SQL jobs. So again, it doesn't have any kind of dependency handling if you've got multiple servers. It's great for scheduling SSIS packages to run on certain days of the week, certain days of the month, etcetera.

Again, it's one server, so if you only have one server and you run SQL on that server, you're good to go with the SQL Server scheduler. It's difficult to find out if a job completed normally so it can go on to the next step. There's really no way to build those types of dependencies in there. So it can cause some big problems: if your prerequisite runs long, or it runs late, and it isn't completed when you expect it to be, things run out of order. It can really give you some headaches. Another example that we have—and we're going to show a couple of examples of applications that we have interfaces to in Skybot Scheduler—is SAP. SAP is a huge, complicated financial accounting, distribution, and manufacturing package. We have a number of customers that use SAP. It does have a built-in scheduler: the CCMS (Computing Center Management System) scheduler component. Again, it works fine for scheduling SAP ABAP steps or ABAP programs to run at certain times, but it really isn't very robust for any kind of complicated scheduling.

That makes sense, because the people at SAP, the developers there, they are experts in financial software. They are not experts in automation and job scheduling, so they don't want to spend their time and their money building a scheduling tool. So they give you one that's basic, but they also interface to third parties, because we're the ones that are experts in automation and job scheduling. So the CCMS Scheduler, again, is great for scheduling SAP, scheduling on one server, but doesn't do the job if you've got multiple applications and you're running schedules across multiple servers. So those are some of the other individual schedulers that are out there. They will do the job if it's a single schedule and a single application on a single server. But how can we schedule jobs that have dependencies on other servers or other applications? It seems that a lot of times what's driving the schedules these days are maybe other processes outside of your application—incoming files, orders, etc., file changes—and it's difficult to schedule those dependencies, especially if they are on other servers or other applications, without having some very complicated scripting that needs to go on.

Integrating Different Job Schedules

With a solution like Skybot Scheduler, it's very easy to schedule jobs across those different servers. Skybot Scheduler uses a hub-and-spoke architecture, so we have a central server where the database is stored. Then we have agents on all the individual servers, and we can just link all of those jobs together. So we're going to show you how we do that in just a minute. So this is an example of SQL, and then an Oracle job over on the Linux server, and then back to SAP on another Windows server. Informatica is another interface—a lot of customers use Informatica for their warehousing applications. And then when it's all finished, I want to run a backup on another server. So we want an easy way to be able to set those jobs up and make sure that the prerequisites complete before their successors run, because you don't want things to run out of order. So instead of writing a big huge script that will cause those jobs to run on multiple servers, if you can break it down into smaller pieces, the other advantage that you get is if there's a failure in one of these pieces, we're not going to run the jobs that are downstream of it, and we have a great restart point.

We don't have to rewrite that script to take out the two pieces that were successful and just run the rest. You want to have a good restart point in case there are any types of errors as well. One of the other problems you can have is waiting for those files to arrive. A lot of times, that's what's driving the schedules. In my payroll example, I don't want to run the payroll until those timesheets are all in and they are all compiled. If they are late, I want to be notified of that, because I want that payroll to run on time. A lot of times a job or a job stream is dependent on a file coming from another server—maybe an FTP server—or on a file changing because of another process on one of your servers, and those scripts can get very complex if you have to include all of those different types of resource dependencies in your job stream. Also, if you've got a script that's out there looping and waiting for those files and waiting for those dependencies, it can add a lot of unnecessary overhead to your system. It also can affect that restart point. Again, things are going to happen, and we want to make sure that we have a good way to restart if we need to.
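A script that loops and waits for a file, as described above, tends to look like this minimal sketch (the function name, paths, and timeout are hypothetical, not Skybot code). It occupies a process for the entire wait and offers no clean restart point if a later step fails:

```shell
#!/bin/sh
# wait_for_file PATH TIMEOUT_SECS
# Poll once per second until PATH exists; fail once the deadline passes.
# The shell sits occupied for the whole wait -- the overhead Pat describes.
wait_for_file() {
  file=$1
  deadline=$(( $(date +%s) + $2 ))
  while [ ! -f "$file" ]; do
    if [ "$(date +%s)" -ge "$deadline" ]; then
      echo "timed out waiting for $file" >&2
      return 1
    fi
    sleep 1
  done
  return 0
}

# Hypothetical usage: block the payroll step until the timesheet file lands.
# wait_for_file /home/pat/timesheets.csv 3600 && run_payroll_step
```

An event monitor inverts this: instead of each job burning a process polling for its file, one monitor watches and triggers the job stream when the file actually arrives.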

So with Skybot Scheduler, we have a lot of different types of objects that can be prerequisites for a job stream. In this little example, we've got a job that runs—that's one of the steps—and then we have a file that comes in. So we've got a little file watcher—or event monitor, as we call them—out there that's just sitting and waiting for that file to come in. We're going to run the next step when the prior step is completed and that file comes in. You can build a lot of complex dependencies, depending on what it is that your job needs. You don't have to worry about this job even trying to run until this file comes in. You can also put some monitors on it to make sure that it starts by a certain time or ends by a certain time. We want to make sure, if you've got service level agreements, that you're able to keep those.

If you've got a bunch of individual schedulers on your different servers, and you want to schedule some kind of downtime, or you want to see what's happening across your servers, there's no easy way to do that. I've talked to customers who have a bunch of spreadsheets where they manually document when jobs are running and check them off from there, but they don't have any way to see the big picture. You might want to just see what's running, what's scheduled to run this afternoon, or you might want to schedule some downtime on one of your servers. Maybe you've got to do some maintenance on one of them, so you want to know when that server is going to be available to take down. It's very difficult to see that if you've got dependencies; you're not going to see it if you're just looking at the schedule on the one server.

Understanding the Big Picture

With an enterprise scheduler, one of the things that you've got to look for is the ability to see that big picture. In Skybot we've got an activity monitor that shows you what's happening right now—these are jobs that are forecasted for the next 24 hours—any jobs that might be queued, and then any jobs that are running right now. You want a quick view of what's happening right now, what's happening in the next couple of hours. We also have the ability to forecast what's going to happen next month, what's going to happen next year. If you've got some holidays built into your schedule, you want to see what's scheduled around those holidays. You want a fast way to do that. That's another type of a forecast that you can build. So any enterprise job scheduler needs to include the ability to forecast and also the ability to see quickly what's happening right now.

One of the bigger problems is how do I get notified if there's an error? I don't want to come in in the morning and find out that nothing ran last night because there was an error on one of my programs that failed and none of the successors ran, or one of my servers went down. We need a quick way to be notified. We need to make sure that we notify the right people, and we need to make sure that we notify them in the right way. So your enterprise scheduler needs to have a way to monitor jobs for errors, overruns, or late starts. If you've got any service-level agreements, we want to stay on top of those. Again, we don't want you to be surprised when you come in in the morning and find out that nothing ran. We want to notify you immediately. At HelpSystems and Skybot Software, our philosophy is to be able to monitor your systems and manage them by exception. If everything is running, great. Don't bother me. But if there is some kind of a problem, I need to know about it, and I need to know about it right away. Skybot also includes a couple of ways that we can interface with your help desk ticketing software. So maybe we can notify someone and automatically open up a ticket so it gets into your ticket processing, so that problem is going to get solved quickly.

So, those are some of the problems, and we're going to talk about the three steps that you can follow within Skybot to get those jobs created, get them up and running, and get them running at the right time. This is an example. Same example of the job flow. Dennis is going to create one kind of like this, using the same type of examples. We've got a file watcher out there. As soon as that file comes in, it's going to trigger this SQL job to run. We've got another little loop here: I'm going to run something on an Oracle server and I'm going to run an SAP job. You can have and/or logic. When that finishes, I want my Informatica workflow to run, and then I want to do a backup at the end. We are going to show you how easy it is to set those jobs up, and we'll talk a little bit about the interfaces and how we interface to some of the common applications that are out there to create the jobs that will actually process those programs. We'll show you how to create those links, those dependencies, and a few job monitors so you can monitor for late starts and errors. Then you can create a flowchart that will show you those dependencies, so you have a nice visual of what is going to run when. If we have time, we'll go ahead and run those jobs so you can see what they look like when they run.

I'm going to make Dennis the presenter and put him on the spot.

Dennis: Thank you, Pat.

Pat: You're welcome, anytime.

Dennis: I'll open up my sharing here.

Pat: Do you want me to share? Do you want to use my desktop? You can go up to Share at the very top of the screen, and you should have the option to share your desktop.

Dennis: Got it.

Pat: Oh, beautiful. That looks familiar.

Setting Up Integrated Jobs in Skybot

Dennis: This is Skybot Scheduler. Currently we are looking at our job screen. I do have it filtered there. For our little demo session, we are tagging everything with an "integrated" tag, so we are currently just showing any jobs that have that particular tag on the system. The first thing we want to go over is defining our application servers. How do we interact with SAP? How do we interact with Informatica? How do we interact with the SQL server? For those, we have to make the definitions for them so we can go talk to them, and then schedule jobs on them. So for SAP, we have a little section here—the SAP NetWeaver section—and we have the system definitions section that we would go to. I have that set up over here. So here is one of the SAP systems that we have defined on the system that we want to be able to run the ABAP-type job against. I'll open this up and we'll take a look at what's inside. In here, we are setting up system definitions. We just need to give it a name that makes sense as to what SAP system we're connecting to, and you can give a description of whatever makes sense for you.

The tags, so you can search for it later on—we can put tag info in there. Down here, we actually pick the application server name that's being used for SAP; ours is called Animal. Instance number—ours is just 00—if you have different instance numbers for different system definitions, you just put that information in there. Your system ID, and if you have a router string, you can put that in here. Down below we have the default system environment settings, so we are just using our client code 001. English language. Looks like we are going to be signing in as Pat when we are running these. We can do a polling setup for SAP jobs. Currently we are just going to keep that disabled for right now. You can set up intervals for polling out there. Audit levels and SLD registrations—if you need those, we have those built in so you can do those also. That's how we define our SAP information, so we'll be able to create an SAP job. When we create that job, this is the information behind the scenes that it's going to use to connect and run the jobs.

Pat: This is a one-time setup, isn't it?

Dennis: Yes. You just have to do this once, and then we keep reusing this information later. We can also do the SQL servers. We go under Scheduling Objects and we have the SQL server definitions here. We can take a look at those; we have a few of them already set up out there. We'll open one up and take a look at how those are defined. We are giving it a name—a name that makes sense as to what SQL server we are actually connecting to. The description, and again, we can use tags. If your SQL server has a certain server instance name, we can put it in here. The one that we're going against does not have an instance, just using the default, so we can leave this blank. Then we can use a trusted connection to the SQL server, or use a username and password. It all depends on how your SQL server is set up as to what type of connection you're going to do. We're going to be doing the username-password connection here, so we don't have the trusted one, and we're going to be logging on as the SA user on that system. Again, we just set this up once, and then we'll be able to set up multiple jobs using this information on the system.

Scheduling Objects and Agent Event Monitors

Then we also have the Informatica, so again under Scheduling Objects we have our Informatica system definitions. We will bring this up to take a look at what info it has in it. Again, we just give it the name and description and any tags that we need to. Then down here in the actual system definition information, this is the info that we need to reach the web servers and the Informatica servers. The server name that's out there. The port that's set up to listen for the information coming in, and whether it uses HTTPS or not. Your repository domain names, repository name, and then the user that you want to sign on as. Then if you're using a security domain name, then you can put that in here also, so we know what security domain name you're using. So that's the simple setup to define these business systems so we can easily create the jobs to go run against them.

The other thing that we will be setting up in here is the Agent Event Monitor, so when something happens—a file comes in—that's what we're going to trigger off of, a file arrival. So under here, we have different Agent Event Monitors. We can do a manual event: you can actually write something into a .bat file or script—that you create—that would actually kick off a Skybot job. Or we'll do a file event—which is what our current one here is—when a file arrives in a certain spot that we're monitoring, I want to be able to kick off the job. So that's what the file event does. A directory event just watches a whole directory. Has my directory grown? Has it gone past a certain threshold? Has the date or timestamp changed on it? Something like that. Then the last one we have is a process event, so we can actually monitor the processes down on some agents, and if we see a process end we can trigger off that and actually go ahead and restart the process that ended on the system. So that's how our Agent Event Monitors work.
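The process event Dennis describes boils down to a liveness check plus a restart action. As a rough illustration only (Skybot's monitor watches processes on the agent itself; this hypothetical helper just tests a PID), the underlying idea is:

```shell
#!/bin/sh
# process_alive PID: true if the process with that PID is still running.
# Signal 0 performs the existence check without actually sending a signal.
# A sketch of the liveness test behind a "process event", nothing more.
process_alive() {
  kill -0 "$1" 2>/dev/null
}

# Hypothetical reaction: restart a daemon if its recorded PID is gone.
# if ! process_alive "$(cat /var/run/mydaemon.pid)"; then
#   /etc/init.d/mydaemon start
# fi
```

The event monitor does this continuously on the agent and can either alert someone or trigger a restart job when the process disappears.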

We'll take a look at this one right now to see how we set up a file-added event on the system. I bring this one up, and again, we're giving it a name on this system—whatever you want—with a description, and we have tagged it with our integrated tag. Currently the status is enabled. You can disable these if you don't want them to run; maybe there's a certain time coming up when I don't want anything to trigger. So you can disable these and then re-enable them easily enough if you need to, without deleting and recreating. The agent group is the system that we actually want to monitor for this file on. Currently we are going to be monitoring something on our Ether system. We can set the time zones and the days to keep the history. If it triggers, we can send email, so when this actually does trigger it will go ahead and send Pat an email right now.

Dennis: I know, you get lots of stuff.

Pat: Mm-hmm.

Monitoring for Files With Skybot Scheduler

Dennis: We can set "event valid from," so if we only want this to monitor for the file and trigger during the normal business day, we can put in here: hey, just watch this from 9 a.m. to 5 p.m. That's the only time I want this to trigger and cause jobs to react, so if it comes in at 10 p.m., I don't want anything to react off of that. We can set the times that these monitors actually work. Then down here is where we are actually monitoring. So we are actually monitoring for a file added, and we have different options in here too: file removed, file changed, or file threshold, different options there, but right now we are just looking for a file added in this directory. Again, it's one of Pat's. Pat gets picked on a lot.

Pat: Yes, she does.

Dennis: If we ever see a daily_tx_file-and-then-whatever inside of /home/Pat—if I ever see something coming in there with that name at the beginning, and it has not changed for two seconds (a little option that we have)—then go ahead and trigger: I saw a file in here, there's something I want to go do. So that's how we have our setup for Agent Event Monitors, for file types in particular. So there's our applications setup. That's our event monitor setup. So now we can go in and we can go ahead and create some jobs. We'll put those together in a reactive chain, show you how that chain works, and then we can run the chain.
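That "has not changed for two seconds" setting is a stability check: sample the file, wait, and sample again, so a file that's still being FTP'd doesn't trigger the job early. A hypothetical shell equivalent of that quiet-period test:

```shell
#!/bin/sh
# is_stable FILE SECS: true if FILE's size is unchanged after SECS seconds.
# A sketch of the quiet-period test a file event monitor performs before
# firing, so a half-written transfer doesn't trigger the chain too soon.
is_stable() {
  size1=$(wc -c < "$1")
  sleep "$2"
  size2=$(wc -c < "$1")
  [ "$size1" -eq "$size2" ]
}
```

If the size changed during the interval, the monitor simply waits and checks again before triggering the reactive jobs.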

Pat: Cool.

Dennis: So, we'll go over here to our jobs, currently we have the one job out there. We're going to first create an SQL job on this system, we'll start with that. So we click on the Create Job button, and again, we just go ahead and give it a name out there. We'll just call it SQL.

Pat: Sometimes that's the hardest part, picking the name.

Dennis: The Demo Job. Right. Picking out names, I know. You can spend 10-15 minutes going, "What's the best name for this?"

Pat: What's the best name? Exactly.

Dennis: We're going to run this against our SQL system, which for us is called Test SQL. We'll give it a tag of integrated. That's how quick it is. We can put in tags, and you can make tags on the fly, or you can add them predefined if you want. You can do it either way. We're going to leave this unscheduled because we want a file event to kick this off. So we're not going to put a schedule on this one. We'll come down here to our agent environment, and this is just saying who's signing on to this job, to sign on to that system. We have some already pre-set-up info out there. So we're going to go... user Windows, and when I do that, again you can see that the user name and the working path and all that stuff is already filled in, and we're going to pick on Pat again. We're going to run this job under her. So there's where we'll sign on, and then we just come down to commands. Here's where we add what business type of job we want to put into this. We're going to be doing the SQL one for right now, so let's click on the little caret next to Add, and we have SQL server job. So we click on that, and we need to pick a SQL server definition.

This is where we have already defined our SQL server. If I click on this, it's just the list that's out there. We're going to take the Test SQL one, and then we need the job name—the SSIS job name that's on the system. In this case we just have one that's called Demo SQL; that's what we want to go ahead and run. You can check whether you want verbose logging or not; we'll just check it for this one. Now that job is set up. We've currently got the job set up, and we're going to be running this SQL job down on our SQL system. So we'll go ahead and say save a job log. We'll click Save on that, and now we've just set up our SQL job. That job is now set; we can go ahead and run this. We could go back and put it on a schedule, or we could just right-click and run it now if we needed to. But what we want to do is, now that I have this job set up, I want it to run after that file has arrived in Pat's directory on that AIX system that we have. So we can do that just by going ahead and doing a right-click on the job; we can go to edit job, and then here we have a lot of different options that we can do.

Creating Jobs and Viewing Job Flows

For right now, what we want to do is the prerequisite, saying something else needs to run before this job can run. So we are going to go ahead and add the prerequisite, and here are all the different options that we also have in here. In this particular case, we want to pick the Agent Event Monitor. The monitor name was Sales, so we'll go ahead and we'll pick that one, and it occurs anytime; that's when I want it to run. So now we have that in here, and we can add more of these as needed. You know, if we needed five or six different jobs or file events or something to run before this job, we can just keep adding them into this list. For now, that's all I need, so I'll go ahead and save that. So now I can see we still have the two jobs, but this one is now marked as a reactive-type job. So we've got that set up and ready to go out there. So now we're just going to go create our other two jobs: we'll create an SAP job, and we'll create an Informatica job. We'll tie all those together, then we'll take a look at how we tied those all together and what that looks like in a job flow.

So we'll go ahead and create a job again. This time we'll create our SAP job. This one is going to run against a different Agent, so we'll pick our Skybot Agent. We'll tag this one, too, and we'll keep it unscheduled because we're going to make it all into a reactive chain. Then we'll go down to our Agent Environment to say who's going to run this; we have a pre-defined one out there.

Pat: There she is again.

Dennis: Yep, it's always Pat. We have our user on there, so now we can go back down to our commands, and we're going to run the SAP type. If we click the little caret to add again, we have a few different SAP setups to choose from, but for this one we're going to do a NetWeaver job setup. Here are just the three things that you need to fill out: the system definition, which we went over, so we have our SAP system defined; the system environment that we want to use, and we'll just pick the default for this run; and the job definition, which is the steps you want to run. We'll pick this ABAP step here. So we'll go ahead and save that, and the commands for that job are now set up. We'll tag that too and save it. Did I spell my—Oh, I did, didn't I, Pat?

Pat: Maybe integrated isn't right.

Pat: I think you've got another one out there, though, don't you?

Dennis: No, I think I missed a D.

Pat: There you go.

Dennis: Let's go, so there it is. Let's fix that really quick, so it shows up in the list.

Pat: Love those tags.

Dennis: Boy, I'm having a hard time spelling integrated today. There we go.

Pat: Bingo.

Dennis: So now there's our SAP job. It's currently not reactive, and we want to tie this together: I want the SAP job to run after our SQL job has completed. So we'll go to the SAP job, do the edit job, go to prereqs, and add a prereq. This time we want a job to complete, so we'll leave the type as Job, come over here, and choose our SQL Demo job. Click OK and save, and now it's reactive, so we've got those tied together. We'll just go create the last job here, an Informatica setup job. We'll run this against another system that we have out there. Make sure I get my tag right this time.

Pat: There you go.

Dennis: Always helps. Again, we're going to leave this one unscheduled. So we'll go down to our environment type; we already have one pre-defined out there. Guess what? It's Pat again.

Pat: Guess what? I've got to get some service accounts up there.

Dennis: We love you, Pat. We love you.

Dennis: So we've got that user in place.

Dennis: So come here, into our commands, and here we'll have the Informatica workflow. Again, it's just a set of pull-downs that we come in and select. We need to choose which of the system definitions we saw earlier we want to use, so we pick the PC Demo system. Then the integration service name you want to use; we'll pick that one. Then any folder name you have out there; we'll just pick New Folder 0. Then the actual workflow you want to run, and we're just going to stick with the Demo workflow. If there's a certain task instance path you want to run as part of this, you can put that information in here, and if you have an operating system profile or anything else to fill in, you can make those changes. But that's the main info we need to run the Informatica workflow. So we'll go ahead and save that, we'll tag it, and we'll save this, and hopefully it will show up here. There we go. We have our Informatica job out there. It's not reactive yet, so let's go ahead and make it run after our SAP job.
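Under the hood, the chain being built here (SQL, then SAP, then Informatica) is the classic "run each step only if the previous one succeeded" pattern that people otherwise script by hand. A rough Python sketch, with echo commands standing in for the real job invocations; the step names are illustrative only:

```python
import subprocess

def run_chain(steps):
    """Run (name, command) pairs in order; stop at the first failure.

    Each "job" is just a shell command here; a nonzero exit code halts
    the chain, the way a failed prerequisite stops a reactive job.
    Returns the name of the failed step, or None if all succeeded.
    """
    for name, cmd in steps:
        if subprocess.run(cmd, shell=True).returncode != 0:
            return name
    return None

# Placeholder commands standing in for the three demo jobs.
failed = run_chain([
    ("sql", "echo run SQL job"),
    ("sap", "echo run SAP job"),
    ("informatica", "echo run Informatica workflow"),
])
```

A scheduler's version adds everything this sketch lacks, per-step logging, notifications, and an audit trail, without a script to maintain.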

Pat: I just love that we're scheduling jobs in three different applications using the same interface. I would think that for a production control area, this would be really helpful. They could schedule any of those applications straight from Skybot and wouldn't have to learn all the different schedulers.

Dennis: Yeah, all in one spot, multiple platforms. It's not even just different applications, it's different platforms that we're going to: AIX, Linux, Windows. We've got it all and it all looks the same.

Let's see, let's go in and add one more thing to our SQL job here before I do a job flow. Let's say I need to be notified if something goes wrong with our SQL job: it fails out there, or it's running too long. How can I get notified of that? First we'll do a job monitor: if this one runs too long, I want to be notified. So we can take a look at our job run and say it should not run more than one hour; if it does, notify me somehow. This is where we can send out an SNMP trap, or send to a notification list, which is a list with a pre-defined set of users in it. So if you have a certain list of users that should be notified when this triggers, we can use that, or we can just pick individual users on the system. I'll email myself and, guess what, we'll pick on Pat too. Now Pat and I will get an email if this job runs too long. And we can do custom emails for this, so you can set up a custom subject and custom text with pre-defined variables specific to this job.

Setting Up Custom Notification

If it does fail, somebody can get an email, and that email can carry some useful information: what steps to take, or who to call, on a per-job basis in the system. And if the job does run too long, we have the option to kill it; it shouldn't take this long, so just end it out there. That's how we can set up job monitors, and we also have underruns and late starts that we can monitor for. So we'll save that. Now the other thing is: what if this job fails? I want to be notified if the job fails. So we do a right-click, go to edit, and then we have status notifications. There are all these different statuses we can notify you on: when the job gets submitted, if it was skipped, when it runs, when it completes. The biggest one is failed; if a job fails, somebody really needs to know about it. So we'll edit that one, and it has pretty much the same notification options we just saw.

So we can send out the traps, and we can send along a copy of the job log, and this time maybe I'll take the list. So in here, if this job fails, send this note out to the help desk. There might be five people in that help desk notification list, and all of those users will get it. Again, you can use the advanced email options in there too. So we'll save that. Now for the SQL job, we have some extra pieces on there: notify me if this job fails, and if it runs too long, let me know as well. That's how you set those up. So we have our three jobs out there, running three different business processes. How can we visualize that better? Before we run it, we can build what's called a Job Flow Diagram. We'll bring that up and create a new one here. This all gets started by a file arriving; that's the big thing kicking everything off, so we can start there, with that particular file-arrival event we created. We'll drag that over and start with this.
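Reproducing just the overrun monitor and failure notification described above in a plain script takes a timeout plus a notification hook wrapped around every job. A hedged Python sketch of the concept; here `notify` only collects messages, where a real setup would send email or an SNMP trap:

```python
import subprocess

def run_with_monitor(cmd, max_seconds, notify):
    """Run `cmd`; call `notify` if it fails or overruns `max_seconds`.

    Mirrors the two checks set up in the demo: kill the job if it runs
    too long, and alert someone if it ends in a failed status.
    """
    try:
        result = subprocess.run(cmd, shell=True, timeout=max_seconds)
    except subprocess.TimeoutExpired:
        notify(f"job exceeded {max_seconds}s and was killed")
        return False
    if result.returncode != 0:
        notify(f"job failed with exit code {result.returncode}")
        return False
    return True
```

Multiply this wrapper by every job on every server and the appeal of configuring monitors and notification lists centrally is clear.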

Here's what kicks everything off. When I hover over this, a little green arrow pops up. We can click on it, and it starts building out how this job is actually set up. So click on that, and there's our SQL job, right there: when the event fires, that's the job that gets triggered. If I ask whether anything gets triggered after this, I just click the button, and yes, there's our SAP job. I'll move these up a little so we have some room. If I keep going, is there anything after the SAP job? There's our Informatica job. And anything after the Informatica job? If I click on that again, no, we get no more dependent jobs. So there's the stream currently built out for this job; you can visualize exactly how your jobs are going to run on the system.
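Conceptually, the flow diagram is walking a dependency graph: each click on a green arrow expands the jobs triggered by the current one. A small Python sketch of that traversal, using hypothetical job names matching the demo:

```python
def downstream(job, deps):
    """Return every job reachable from `job` via reactive dependencies.

    `deps` maps each job to the jobs it triggers, like expanding the
    green arrows in the flow diagram one level at a time.
    """
    seen, stack, order = set(), [job], []
    while stack:
        current = stack.pop()
        for nxt in deps.get(current, []):
            if nxt not in seen:
                seen.add(nxt)
                order.append(nxt)
                stack.append(nxt)
    return order

# The chain from the demo: a file event triggers SQL, then SAP, then Informatica.
deps = {"file_event": ["sql"], "sql": ["sap"], "sap": ["informatica"]}
chain = downstream("file_event", deps)
```

The diagram renders exactly this reachable set, which is why clicking past the last job in the chain reports no more dependents.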

Pat: Excellent. Good documentation.

Documentation and Job History

Dennis: It's very nice, and it gives you some details: what system each job is actually going to run on, and what platform. It makes it very easy to see. Let's go back to our jobs; we'll go ahead and run this and watch the chain. I'm going to kick off this create-sales-file job. What that actually does is create that file inside /home/pat on that system, her daily text file. We'll be able to watch the Agent Event history, and we'll see a line come in showing the trigger: hey, I saw that file arrive. Once that file arrives, we can come back here into our job history and watch our three other jobs kick off from there. So we'll go ahead and create the sales file; we can just run this right now. There's the command that we're using to create that file in Pat's directory, so we'll run that.

We'll come to our Agent Event Monitor history. Actually, we'll go to the job history first, and we'll see the create-sales-file job kicked off. Then in the Agent Event history, you'll see that the Agent Event Monitor did trigger; there's the file name that came in. You can see it has today's date in it, which is just using a variable. We can actually use this information in other jobs: we can pull it out and reuse it if needed, say to take that exact same file and move it somewhere else. Back in our regular job history, you can see that our SQL job did kick off and run. You can also see the initiation code. I kicked off the first job as a user, by hand; the system didn't do it. The initiation code tells you how each job got kicked off, and this one ran because it was reactive. It's nice to see those out there too. So I'll just refresh this quickly and see what else we've got, and there's our whole chain. It started off running the create-sales-file job, which triggered the file event; that kicked off our SQL job; when that finished, it kicked off our SAP job; and finally that kicked off our Informatica job on the system.

That's how easy it is to chain all these different jobs together: easy to create, and easy to get the process flow out there. With all of this going on, what happens with changes? How do you track who did what and when? Everything that we did has been audited, so we can go into our audit history on this system, and here's everything we've done so far today. I created this SQL job, and if we take a look at parts of it, it tells you all the steps I actually took to create that job. If we needed to, we could actually recreate a job based on all the information in here. It tells you who created it, when they created it, and when they edited it. Here's that SAP update; most likely that's going to be my tag fix.

Pat: There it is.

Dennis: I'm the one. Tells you exactly when I did it, what the original value was, and what I changed it to. Everything on the system goes into auditing on there.

Pat: Excellent, and we can report on that as well. It's one of our built-in reports.

Dennis: Yep, lots of reports on there.


Pat: Excellent. Great. Well thanks, Dennis.

Dennis: You're welcome, Pat.

Pat: Nice job. I'll take back control, if I can. Any questions from our demo today? One of the things I've learned just watching today is how you can schedule jobs in different applications from a single interface. It's really going to help that learning curve, and it makes it easy to have event-driven scheduling without having to program for it, without writing any scripts. You get a central view, so Skybot Scheduler can be the hub of your operations, and you can manage all of your production jobs, and your other environments as well, test and development, from a central location. We make it easy to use. Personally, I love the interface. I'm an old green-screener from way back, but I do like our interface; I think it makes things easy to navigate.

Dennis: Very modern, also. Very nice.

Pat: I think so. Yep, very nice. So that is our webinar for today. We're still showing a few minutes until the top of the hour, and I don't want to run over too much, but if anyone has any questions, you're welcome to send them over in the chat window on the right-hand side of the screen. If you don't have any questions, you're certainly free to go and enjoy the rest of your day. A follow-up email will be coming with a link to this presentation, in case you want to share it with anyone else in your company who wasn't able to join us today, and there might be some other goodies in that email as well. So thank you all for joining us. Dennis and I will hang out here for a few minutes to see if there are any questions; if not, you're free to go and have a great day.


