Are you running multiple systems on Power? Do you have islands of automation throughout your IT systems? As your enterprise builds and implements an infrastructure strategy, it's critical that you provide seamless workload scheduling and centralized management to support all of its workloads. It's also critical that your IT department has a solution to help them manage workloads across many platforms and respond to the needs of your growing business.
In this video you'll discover the best ways to manage workloads across multiple platforms—on Power or in the cloud. Get insights into current challenges and trends in the automation and management of workflows running in all environments on IBM Power Systems.
You'll also learn how to:
- Effectively manage multiple platforms running on Power Systems
- Consolidate and centralize data on disparate systems: IBM i, AIX, Linux, and beyond
- Use cross-platform automation to ensure greater control of disparate environments
Janine: Hello, everyone, and welcome to today's presentation. My name is Janine Donnelly. I am the Manager of Webinars for IBM Systems Magazine and I will be the moderator for today's event.
Today's webinar, entitled “Manage Workloads Across Multiple Platforms on Power or in the Cloud,” is sponsored by HelpSystems.
Our featured speakers today are Pat Cameron and Chuck Losinski. Pat is Director of Automation Technology for Skybot Software and a 16-year veteran of HelpSystems. Her background in IT spans over 25 years and includes implementation planning, operations, and management. At Skybot Software, Pat oversees customer relationships, gives technical product demonstrations, and fields enhancement requests for development. She's written numerous articles on job scheduling and other workload automation topics.
Chuck has also been with HelpSystems for 16 years. He has over 30 years of experience in IT, including over 25 years on the IBM i platform. Chuck's background covers a wide variety of technical responsibilities including system implementation, programming, operations, and support. He's certified as an IBM System Administrator and as a HelpSystems Robot Certified Specialist. Currently he's the Co-President of the local IBM user group QUSER.
Today Pat and Chuck will discuss the best ways to automate and manage workflows running in all environments, whether on IBM i, AIX, Linux, or beyond. With our introductions complete, Pat, I will turn the presentation over to you.
Pat: All right. Thank you, Janine. Thank you very much for having us today. And thank all of you for joining us. And thanks, Chuck. It's always a good time when you and I can do a WebEx together. So today, we're going to be talking about automation, like Janine said: centralized scheduling across multiple platforms. Whether you're running in the cloud, some type of a hybrid, or on premise, it's important to be able to automate and centralize.
So we're going to talk a little bit about why consolidating your job schedules is important and how it's going to change your operations. I was an operations manager in a previous life at a hospital and I thought that our schedules were too complicated to automate, but I have found out since joining HelpSystems that I was wrong.
So then we're going to be talking about some of the solutions that we have from HelpSystems that will allow you to automate and consolidate across all of your VMs, all of your platforms, and all of your Power Systems as well.
So why an enterprise scheduler? Why automation? So whether your data center is in the cloud or running on traditional systems or some type of a hybrid, these are a few of the reasons why you would need an enterprise scheduler so that you can automate your production business processes:
Automation provides fewer errors, faster run times, and documentation for audit purposes. Many businesses run multiple applications across multiple servers and multiple platforms. So when deciding on the best applications for a specific business need, the business unit doesn't take into account the type of hardware or the platform that it's on. That's kind of up to IT to handle.
So with the ability to run in the cloud as well, you might certainly have multiple operating systems. Every customer I've been at in the last 16 years has had multiple operating systems. So because of this, you need a centralized scheduler or automation tool that can sit on top of all of them.
Do you have some cross-system dependencies that you have to manage? These dependencies can include a lot of different types of events. Some of those are other job completions, some are file transfers, or file completions, file creations and changes, as well as job failures. Maybe you need to trigger some type of an error recovery automatically.
How do you handle these types of dependencies now? I think we found that a lot of times, those dependencies are handled manually. Somebody needs to watch for that file, or unfortunately you might be using your expensive and rare developer's time to write some scripts that will handle some of those resource dependencies for you.
This isn't the best use of their valuable time and it should be used for improving and maintaining your business applications, not your scheduler. One of the most important reasons for having the enterprise scheduler is to free up your staff, so that they can work on new projects instead of the day-to-day tasks. We've got computers that can automate those day-to-day tasks for you.
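The hand-rolled watcher scripts Pat describes usually boil down to a polling loop like this minimal Python sketch (the function name and timings are illustrative, not from any HelpSystems product). This is the code an enterprise scheduler's built-in file-event trigger replaces:

```python
import os
import time

def wait_for_file(path, timeout_s=60, poll_s=5):
    """Poll until a file appears or the timeout expires.

    This is the kind of one-off watcher a developer writes by hand;
    a scheduler's file-event trigger replaces both the polling and
    the "someone has to notice the failure" part.
    """
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        if os.path.exists(path):
            return True  # file arrived; the downstream job can run
        time.sleep(poll_s)
    return False  # gave up; someone has to notice and follow up
```

Multiply this by every file dependency on every server and the maintenance cost becomes clear.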
So how do you manage all of those systems? Does it take up a lot of time during your day? Whether they're in the cloud or on premise, how much of your time is spent managing them or managing those processes across those different platforms? These are just a few of the reasons why you need an enterprise scheduler--a centralized scheduler--so that you can automate those business processes.
Without an enterprise scheduler, it's difficult to avoid errors. And if there's no central scheduler, jobs or tasks might run before their prereqs are finished. They might run in the wrong order because of timing issues across those multiple servers, sometimes causing downtime that can make you miss an SLA. You might need to do a restore because you've got to rerun last night's batch. Your business people consider that downtime because they're not able to access their application when they need to.
And then there's always a need for process documentation and exception reporting. And if you've got a bunch of disparate systems, it's very difficult to pull that all together. So with some type of a centralized scheduler, you've got also a centralized documentation repository that's going to help you with that documentation that you need for your auditors.
Regulators always require some type of audit and exception reporting. And you don't want to have to monitor those schedules for delays or errors. That's why we've got computers, to do that for us.
Once you make the decision that you need an enterprise scheduler or consolidation of all of your schedules, some of the things that you need to keep in mind, in addition to your business requirements for your applications and your budget requirements certainly, you need to determine your functional requirements. And these are things like hardware requirements or security requirements that you might have. What types of schedules are you running? Do you have daily, weekly, monthly? Holidays, fiscal periods affect the schedule, etc.
So you need to do a lot of research to determine what those schedulers consist of and how those schedulers are running. What are those types of dependencies? Somebody knows those and sometimes it's difficult to find the person that knows what those dependencies are. There's a lot of tribal knowledge out there that you might have to research, so that you can get your requirements set up before you even start looking at any of the schedulers that are out there.
What are your audit requirements? What are your reporting requirements for your IT processes? So determine all that and make sure that they're documented well, so that you can make sure that the scheduler that you purchase will meet all of those requirements.
So here at HelpSystems we have a number of different scheduling solutions for you. And we're going to take a look at kind of three different groups today. And the first one that we're going to look at is Robot, Robot SCHEDULE and Robot SCHEDULE Enterprise. And Chuck is going to kind of take us through that and show us how the Robot products work.
Chuck: Excellent. Thanks, Pat. So if you do have an IBM i partition in your Power Systems configuration, you really want to consider a scheduling tool for that unique environment and the powerful work management features that the IBM i uses. But additionally, beyond your IBM i-based scheduling tool, you want to look outside at your Linux and AIX Power Systems partitions, or possibly the Windows servers that Pat mentioned everybody seems to have outside of the IBM i, which most likely are part of your application solution. So if you haven't taken a look at the Robot scheduling solution lately, I hope you're going to be impressed with what you see today and the value that it brings to the table.
So the Robot scheduling solution has been available on the System/38 (prior to the AS/400), the AS/400, the iSeries, and the IBM i. So since 1982, Robot has been scheduling and automating processes in those environments. But over the years, the tool has morphed into more than just a batch scheduler. We've built interfaces to some very popular ERP applications that are available. And we also have a tool called Robot REPLAY, which allows those legacy applications that run in the infamous green screen to be automated and then added to workflows as part of a batch job stream.
Now, for scheduling multi-platform from the IBM i server, we're running something called the Enterprise Server in Robot SCHEDULE. And the Enterprise Server is Java technology in a server built around a subsystem on the IBM i. And that's talking to each of the agents that are running a JVM. So we're taking advantage of the new Java interface that we've created, as well as the background communications in Java. Everything that's happening here is all encrypted between the IBM i and your external servers. And the technology has been available from HelpSystems for almost a decade now.
So one of the questions that you'll want to consider is, why would you choose Robot, the scheduling solution, on your IBM i? Well, first of all, possibly most of your workload is on your IBM i in your data center. So it just makes sense that you're going to choose, at the very least, the base Robot scheduling tool for your IBM i.
Maybe you're in banking or finance, manufacturing or something like that, and the IBM i is really the cornerstone of your business. So the ERP application may be based on IBM i. There may be some ancillary servers processing transactions. Maybe hosting a website outside of the IBM i, but still feeding the transactional data back to the IBM i.
So your IBM i may be centric to your data center. You may have those AIX or Linux partitions external to the IBM i. Of course, you'll probably have some file servers external to the i and maybe you need to transfer data files back and forth between those platforms. So that's something else to consider. Maybe you already own Robot SCHEDULE, but just aren't aware of the fact that Robot SCHEDULE also does multi-platform: Windows, Linux, and AIX. So Robot SCHEDULE from your IBM i could be your enterprise scheduling and automation solution.
So what does Robot do? So first of all, your Robot on the IBM i is tracking all of the changes that are taking place to your job streams and to your jobs. So if there's a change made to a dependency, if there's a change made to a command inside of a job, all of that is tracked for auditing purposes.
Do you want to know how your jobs relate to one another, what the dependencies are, and see it from a visual standpoint? That is available. And we've recently released a web interface for the Robot SCHEDULE tool as well. So you can now manage your IBM i as well as your multi-platform workloads using mobile technology.
Many built-in reports are available from within Robot SCHEDULE, so you can report on your IBM i workload. You can report on your agent workload. We've also got a number of inquiry screens that we will show you, so that you can keep your finger on the pulse of what's going on with your workflow on IBM i.
Maybe you're concerned about some down time that's going to occur sometime in the future. Today is Wednesday, maybe you've got some maintenance that's going to take place on Saturday and you need to know the jobs or the processes that are going to be executing between 6 a.m. and noon on Saturday. Robot has the ability to forecast what is going to be processing in the future, so that you can take action. So that you know exactly what to run or rerun based on that downtime.
Pat mentioned that there are workflows that are very difficult to automate. And so what we've done with the Robot tools is we built in a scripting language that allows you to be very flexible with your automation. And that scripting language can check system resources and check other objects on the system and it can actually make a decision whether or not your process should run or possibly be delayed. So that increases the viability of your automation.
And of course, we don't typically run everything on a timed schedule. Much of our scheduling is event-based. So built into the Robot scheduling solution are event triggers. So, for instance, if an FTP process takes place and places a file into the integrated file system, a library, or possibly even a directory external to your IBM i server, that can trigger a process within Robot, which can in turn trigger a batch process, an interactive process, or another process running on another server.
And then of course, there's service-level agreements. Whether they are actual contractual agreements or just implied service-level agreements, they are a fact of life. So built into the tool there are options for tracking job overrun, job underrun, and late start. Notification and reporting are tied to those, so you will know if, for instance, your day-end process does not complete on time.
Now the other factor that you want to consider is, you may have multiple IBM i partitions. So there is a component that can be added into the solution that allows all those partitions to be displayed in a single interface.
So what I'm saying is that we can take a look at the job streams and the job flows running on one partition, and maybe another partition that you may also have on the same footprint or another footprint in your data center.
So not only will you have visibility into all of your IBM i partitions, but you'll also have visibility into your Windows, Linux and AIX environments, all from a single graphical interface.
And as Pat mentioned, if you do have legacy applications that are green screen based and have interactive processes that must be executed from a command line, whether they're menu based or command driven, Robot has an additional component that can be added into the scheduling solution called Robot REPLAY.
And what Robot REPLAY does is it basically records the interactive steps that you take to process a particular feature in your application software. So whether it's a day-end process or possibly executing some kind of a file transfer or file update, that can also be recorded and then executed inside a Robot SCHEDULE batch job. So for those processes that maybe couldn't be automated before and can't be rewritten, we do have a way to automate them.
Okay, so let's take a quick look at the solution itself. We'll be in a live environment. Okay, so first of all, this is the graphical interface for Robot SCHEDULE. And you'll see that there are two IBM i partitions listed here. One is called academy and one is called listener. I can get into the job schedules of either of those systems through this single interface. Now by expanding the Wisdom system, you can see, first of all, that I've got many jobs on this particular system. Now these jobs are batch jobs. In some cases they're interactive processes, and in other cases they're actually jobs that execute on agents.
Now there's a built-in sort that is available within the job scheduler that allows you to look at just your workloads for each agent. So for instance, on this particular server, these are the only jobs running, versus if I look at the all-jobs sort, where you see lots of activity. Now as mentioned, we are targeting just certain servers and those are included in the Enterprise Server branch here in the graphical user interface. So these are the servers we're talking to. It's a combination of AIX, Windows, and Linux servers.
Now if one of these servers for some reason becomes unavailable, one of the things that we can do is offline notifications. So we could place some message in the message queue. We can send a Robot alert message, for instance, to Pat Cameron saying, "Hey, Pat. This server is no longer available and it's a critical piece of the automation equation. Here's something you could do about that."
Okay, another great way to look at this activity is through our job flow diagram or what we call a job blueprint. So this shows you all the dependencies that are part of the Robot automation process. In this case, this is a nightly accounting job stream. You can see we start out with a daily backup and we trigger a nightly process. That process had multiple steps. We also have another process here that's color-coded slightly in a different way, and these are jobs that are actually running outside of the IBM i. So we're targeting both batch processes, internal as well as processes external.
So first of all, let's take a quick look at a job or a process that's running external to the IBM i. Whether it's running internal or external, all of these processes have a name and some kind of a description. We also have the ability to add additional documentation to these processes. So there's a notes field. We also have a job text area where you can more completely document when the job was created, who created it, what's the purpose, even what are the rerun procedures, if that's necessary.
As far as scheduling goes, a lot of different scheduling options. I won't go into the details of all the scheduling options, but I guarantee you that they're flexible enough to meet your needs. Plus, we have something built in called reactivity, and reactivity means we're going to react to some kind of event.
In this case, we're going to be reacting to an agent event. So this is a point of sale file that's arriving in a Windows directory on one of our servers. It could be arriving in an AIX or Linux directory as well, and that's going to trigger the process. And what process is it going to trigger? That's in the command entry area. So in this case, we're triggering a batch file. Okay, and that batch file is going to be executed on the server that it's located on. We've actually noted on the job what server that process is going to run on.
Also built into this is a file browser. So we can actually navigate directly to the directories inside of the target server. We can find the batch file or the executable or the script that needs to be executed and copy that data directly into the process. We're also executing a file transfer process and we're even adding variable information, so that we're actually changing the name of the data file as we move it. So when that data file gets placed into the appropriate directory back on the IBM i, we're actually renaming it, adding the date to the file name. So here's the point of sale file being moved into a directory in the IBM i and we're appending the date and the time to that file name.
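The rename-on-transfer step Chuck describes, appending the date and time to the file name as it lands on the IBM i, amounts to something like this Python sketch (the function name and stamp format are illustrative, not Robot's actual implementation):

```python
import datetime
import os

def stamp_filename(name, now=None):
    """Append a date-time stamp to a file name, keeping the extension.

    e.g. pointofsale.txt -> pointofsale_20150902_1138.txt
    (the YYYYMMDD_HHMM format is an assumption for illustration)
    """
    now = now or datetime.datetime.now()
    root, ext = os.path.splitext(name)
    return f"{root}_{now:%Y%m%d_%H%M}{ext}"
```

Stamping the name this way keeps each day's transfer distinct, so reruns never silently overwrite yesterday's file.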
As mentioned, we do event-based processing as well. And that's the process up in the upper right-hand corner here. And we can see from the agent event history that this event was actually triggered on September 2nd at 11:38, just prior to this event, and the file name was called pointofsalewednesday.txt. So that was my file trigger that triggered both of my processes, both on the i as well as off of the i.
As mentioned, security has to be a component of the solutions. So as part of the Robot tools, there is security built in so you can lock the tool down very carefully. And also there's monitoring technology built into the tool. The monitoring technology will both send out exception notification, and with the schedule activity monitor, we show you all the jobs that will be executing in the next 24 hours. We show you those jobs that are currently running or waiting to run or might be waiting on a message.
So here we have a job that has a message associated with it, and we could drill into it to see exactly what the problem is. So here's our completed list for the last one to two days and it's pretty easy to see here what jobs have not completed. Normally red is bad, green is good. And finally, the Robot tools are moving into the browser world and this is the same schedule activity monitor viewed through our web interface.
As you can see from this interface, you can see all of your jobs. You can see all of your group jobs, completion history, event history, reactivity chains, or dependencies. And you can drill down into these processes very easily. So for instance, if you wanted to see how your dependencies were associated with this particular job, it's very easy to do inside the web interface. And imagine doing this on a tablet or a phone; it works exactly the same way. Okay, Pat, I'm going to turn it over to you.
Pat: Cool, cool. Thank you, sir. I appreciate it.
Chuck: You're welcome.
Pat: So I'm going to talk about another scheduling solution that we have, Skybot Scheduler. The main difference... so all that stuff that Chuck said, it all works within Skybot as well, as far as all the different types of scheduling, reporting, etc.
The main difference between these two products is that Robot is hosted on the IBM i. Skybot is hosted either on Windows, Linux, or AIX. So you can use either of those as your host system for Skybot. And then you can trigger jobs on all of the other platforms that are available. We do have an agent for the i, Linux, various flavors of Unix, AIX, as well as Windows.
So all of those platforms might be hosting some of your business applications, and in order to bring all of those together into one scheduling package, you can do that with Skybot. And we've got customers that are using Robot on the i and using Skybot on maybe a Windows network that's, you know, totally separate from the i. There aren't a lot of dependencies between those systems, and if there are, we can handle those. But we do have customers that are using both of our products. It just depends on where it fits in the best.
So Skybot is an enterprise scheduler. Like I said, it's got all of those scheduling options as far as dependencies and reactivity, monitoring for events. So you can do that all centrally from one location, which is the point of all of this. We've got notification built in as well. Chuck showed a couple of ways that you can notify an error or a delay and we have that same type of notification.
So we can send an email or a text message. We also can send an SNMP trap, and that way we can interface with your help desk ticketing software as well. We can automatically open up a ticket when there's an error and someone can start troubleshooting right away. Skybot includes role-based security. One of the differences again between Robot and Skybot is that for Robot you'll need a user profile on the IBM i in order to do the scheduling, because it's hosted on that i.
We can interface with Active Directory. So you can build different roles, create a group over on your AD server and map that group over to Skybot. And then users that are scheduling jobs or monitoring or looking or running reports can just log in with their network login. And we have all those auditing and reporting options that are available in Robot as well.
So this is just kind of a picture of it. Architecture looks the same. Actually we're using that same Java technology for our agents that are running over on the VMs or the partitions where these jobs are processing. We have a central server. All of this communication between the server and the agents is encrypted, so you can very easily set up a job suite that runs in the morning. And as soon as that finishes, trigger your ETL process, maybe, over on your i. And when that's finished, do some file transfers to move those files either out to a client or a vendor or to another server within your organization.
We do have interfaces into some of the popular applications out there: ERP packages such as SAP, and ETL tools such as Informatica. So we've got some built-in functions, but we also kept Skybot generic. We'll allow you to run anything that has some type of a command line interface, anything that can run in batch mode or as a web service. We've got a REST API that you can use for a web service request, if you need to include that in your workload as well.
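To make the web-service idea concrete, a caller might build a request against a scheduler's REST API roughly like this Python sketch. The `/api/jobs/<name>/run` route, the payload, and the Bearer auth header are all hypothetical; consult the product's REST API documentation for the real endpoints:

```python
import json
import urllib.request

def build_run_job_request(base_url, job_name, token):
    """Build a POST request asking a scheduler's REST API to run a job.

    Endpoint path and auth scheme are illustrative assumptions,
    not documented Skybot routes.
    """
    body = json.dumps({"initiatedBy": "webservice"}).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/api/jobs/{job_name}/run",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
        method="POST",
    )
```

The request would then be sent with `urllib.request.urlopen(...)`, letting any application that can make an HTTP call kick off a scheduled job.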
So you can create a job flow. Jobs that are running across multiple servers might be triggered by a file event like the one that Chuck showed. So you can manage all of those systems. You can create jobs on multiple servers from one place. You don't need to log into those servers. You can set up objects that you have on your Skybot server and use those objects to have the right authentication and the right credentials to be able to run those tasks over on those other servers or VMs. And so the production control people don't have to know passwords, user profiles, etc. They can just pick from a drop-down list.
Notification, like I said, is built into the product, and we have the same options for notifying on statuses, failures or successes, and monitoring for late starts and overruns. These were put into Robot years ago to keep an eye on SLAs and make sure that you don't miss one of your service-level agreements. So we have that same technology in Skybot as well. We want to make sure that you know well ahead of time if something is running late, so that the problem can be solved before you miss that SLA.
And then we can also notify on event statuses. Somebody needs to know when that file gets added to a directory. We can notify them with email or a text message. And we can do some monitoring of services or daemons that are running on those servers and VMs. If a service ends unexpectedly, we can notify the system administrator and we can also trigger a process to try to restart it. So we can kind of keep an eye on your servers as well.
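The check-and-restart idea Pat describes reduces to a cycle like this Python sketch. The `systemctl` default assumes a Linux host and is an illustrative assumption; on AIX or Windows the two callables would be swapped for the platform equivalents:

```python
import subprocess

def ensure_service(name, is_running=None, restart=None, notify=print):
    """One check cycle: is the service up, and if not, try a restart.

    Defaults shell out to systemctl (assumed Linux environment).
    A scheduler's monitor runs a cycle like this continuously and
    escalates to the administrator when the restart fails.
    """
    if is_running is None:
        is_running = lambda n: subprocess.run(
            ["systemctl", "is-active", "--quiet", n]).returncode == 0
    if restart is None:
        restart = lambda n: subprocess.run(["systemctl", "restart", n])

    if is_running(name):
        return "running"
    notify(f"service {name} is down; attempting restart")
    restart(name)
    return "restarted" if is_running(name) else "down"
```

The point of buying rather than building is that the product wires this cycle into the same notification and ticketing paths as the job schedule itself.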
Like I said, we've got role-based security built into Skybot, and we do interface with Active Directory or an LDAP server. So you can have different groups of people that have different types of access to the jobs that are running on your servers. Some may be able to change them.
Your operations area or the help desk might be able to run a job, restart one after a failure, or put some jobs on hold: those types of execute functions. Business services may be able to just view the job or look at history. And then, if you do have some critical jobs that people should not have access to, you can certainly exclude them. So you can be very specific, right down to the object level, as to what type of access people have.
So I'm going to pop online now and just show you a quick demo of Skybot. So the Skybot interface is a browser. We don't have any kind of client that you need to install on your workstation. So this is the interface and this is my dashboard. We can keep statistics for jobs that are running for the past week or jobs that are running over the past 16 hours, and I can always click on any of these data points that will take me into the history records, and I can see the detail for whatever is behind that data point.
And here I can see jobs that completed successfully, failures, and always from any of these lists I can download the log. We'll capture the job log from an IBM i job. We'll also capture standard output from any other server and see what the problem is and then restart that job from here.
So kind of what you're looking at here… these are the job names over here. These are all the different agents that these jobs are running on. So as you can see from here, I'm able to manage jobs across all of those different servers. What I have here is a job flow. Kind of looks like the job flow that Robot has. Again, it's the same functionality. We took all of the good stuff. Skybot was only released about six years ago, so it's kind of our latest generation of scheduler. But we took all of that good stuff and all of that functionality from Robot and we put it into Skybot, then we added some other stuff as well.
So here I'm waiting for a file to arrive over on the IFS, and so I can monitor for that file. And here I've just got a test file that's going into my directory on my IBM i. And as soon as that file comes in, then it's going to trigger a job that reacts on the i. It's also going to trigger jobs that exist on my cloud servers that I've got within my data center. An AIX server is going to trigger a job here. When that one finishes, another SAP job is running on AIX, and then I've got a Windows job over here, a JD Edwards job. And when they finish, then I've got some PowerLinux VMs in IBM's cloud and I'm going to trigger some jobs over there. So it really doesn't matter where anything is processing. We can link them all together and manage them from one place.
Just to show you a little bit about the monitor: so here I can set up those monitors for late starts and overruns. This job needs to be completed by 6 a.m. If it’s not, I'm going to send a trap to my enterprise monitor, open up a ticket, and I'm also going to notify the system administrator. So that's for an overrun or late completion, and then we've got a late start as well. If that file that I'm waiting for doesn't come in by 5:00, then I'm going to have a problem. So I can be notified of that as well. So we want to be able to keep an eye on those jobs to make sure that they're running and there aren't any gaps in the schedule.
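The late-start and overrun checks described here boil down to comparing one run's timestamps against two thresholds, roughly like this Python sketch (the function and threshold names are illustrative, not the product's):

```python
from datetime import datetime

def sla_flags(started_at, finished_at, latest_start, deadline):
    """Evaluate one job run against two SLA thresholds.

    Returns the exceptions a monitor would notify on: a late start
    (began after latest_start) and/or an overrun (not finished by
    deadline). None means the event hasn't happened yet.
    """
    flags = []
    if started_at is None or started_at > latest_start:
        flags.append("late start")
    if finished_at is None or finished_at > deadline:
        flags.append("overrun")
    return flags
```

A live monitor evaluates these thresholds continuously, which is how it can raise the alarm before the deadline actually passes.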
We also have a schedule activity monitor. It's laid out a little bit differently than the one in Robot. This one is laid out here. I've got all my jobs that are forecast for the upcoming 24 hours. And up here in this preference tab you'll see that this screen is refreshed every five minutes. So Skybot is constantly updating that forecast. So if you're adding jobs to the schedule, it's going to be updated immediately. We don't have that concept of having to load the schedule. The schedule is always up-to-date and jobs will always run immediately once they get scheduled.
So you can determine what you want to see on the screen. This is the time that it is right now and this is what refreshes as the timeline goes across the screen. And as you can see here, I've got some jobs that were forecast earlier today, it looks like noonish, and they were missed. And so, Skybot will tell me when a job was expected to start and the fact that it didn't.
So it's kind of easy to see here that I've got these jobs that didn't get run. I can right click on it and typically the reason the job wouldn't run when it was expected is because it had some type of a prerequisite. So my SAP inventory job is waiting for the CRM orders to be completed and it has a status of none.
So typically that job runs at noon, but it didn't today. It may run later or I can just check it off the list. They didn't run today, but that's okay. So you have a number of different options here. Also you can see the jobs that are active and running, and then this graph tells you when they started based on history of how long they normally run. So anyone looking at this can see when they are waiting for a report or some kind of a month-end job that takes a long time. They'll be able to see based on history how long this job normally runs.
And also, we can get in and look at the job log even while the jobs are active, so we can see what step it's on. It might be a multi-step program, so you can see what step it's on and how much processing it has left to do. And then over here we've got recently completed jobs. These are essentially all my exceptions.
Failed jobs; unanticipated jobs, meaning something ran outside of its normal schedule, so I can follow up and see why it ran. From any of these lists I can always download the log, make a change if I need to, and then restart the job from here. So this is a great screen for operations or a help desk, anyone who needs to be managing those jobs.
We also have different roles that you can create, and I've got a few of them here. Just to show you quickly: here I've got a help desk role, and you can create as many as you need. Here's my link back over to Active Directory. I've got a group on that AD server called Skybot Helpdesk, and anyone that I put into that role, or into that group, will have this type of access to the Skybot Scheduler. And you can set the authorities for exactly what it is that they need access to. Here you can see they can run reports, but they are excluded from creating any objects. And down here at the object level, I can even set it so that, overall, they have view access to jobs, but there are some specific jobs that they can execute. So you can be very specific. If I want to add some other jobs and change the access, I can just pick from a list and make that change.
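The role model described above, where a role maps to an AD group and carries both an overall grant and per-job exceptions, can be sketched in a few lines of Python. The role name, job names, and data layout here are invented for illustration and are not Skybot's internal data model:

```python
# A role: an overall default permission set, plus per-job overrides.
# "Skybot Helpdesk" mirrors the AD group name used in the demo.
ROLES = {
    "Skybot Helpdesk": {
        "can_create_objects": False,
        "can_run_reports": True,
        # object-level grants: view everything, execute only specific jobs
        "jobs": {"default": {"view"}, "CRM-ORDERS": {"view", "execute"}},
    },
}

def allowed(role: str, job: str, action: str) -> bool:
    """Check a job-level action: per-job grant wins, else the default."""
    grants = ROLES[role]["jobs"]
    return action in grants.get(job, grants["default"])
```

So a help desk user could view any job but execute only the ones explicitly granted.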
And then the other thing that I wanted to show is the audit history. For reporting, we've got lots of reports built into Skybot, and you can schedule any of these. There are commands for them, so you can schedule them to be in your inbox in the morning if you need to. And as you can see here, we're keeping track of all the different actions that are occurring and what type of action each one is.
And I was doing some work earlier this morning with some agent groups for a customer that I was doing some testing with, and we can see that I created a new group called All Windows. This is the date and the time, and then these were the field values that I changed. So we keep track of all that information, and you determine how much history you want to keep for all of our different types of objects.
One last thing I just wanted to show you is one of our reports. For all of our products, we have what we call the Good Morning Report. But I tell people that if you get a full Good Morning Report, you're not going to have a very good morning, because it's a summary of all your exceptions. At HelpSystems, we always talk about managing your systems by exception. So here I've got abnormal jobs, late starts, and overruns. And this report has detail, so I can go find that job and its job log based on the run number, the date, and the time, and find out what happened with that job.
Here are my monitor events: late starts, overruns, and underruns, and this is what we did about them. So again, you get the details so you can go find the problem and solve it before it gets to be a big problem. And these are my offline agents. So hopefully you don't have a lot of exceptions going on in your schedule.
So that's kind of a quick view of Skybot Scheduler. But as you can see, it's very robust and has a lot of functionality as far as the scheduling options. And even though it's kind of new, it seems like it's been around for a long time.
Now the other scheduler that we want to talk about is AutoMate. AutoMate is a business process automation tool. It's Windows-based, there are a couple of different versions of it, and you can run it on a Windows server. You can run an individual copy of it, or you can run it with agents as well, but it runs on Windows. What it does is allow you to automate those business processes that you currently have to do manually. It has built-in drag-and-drop actions, so you don't have to do any scripting. And I've got a few examples that I'll show you.
I've been working with the AutoMate tool for only about a year, but everybody I know needs it. What it allows you to do is drag and drop individual actions: FTP, log into a website, monitor email, trigger something based on an email that comes in from a specific user or has some text in the subject line, and then perform some type of action based on that.
So the reason you would need AutoMate on its own is if Windows is your primary application server and you don't have a lot of automation needs on the other platforms. But if you do, Skybot has an interface to AutoMate. So you can take those Windows tasks, without having to write any code, automate them, and run them in a Skybot job.
So I'm going to go online and show you that, because I think that's the best way to see AutoMate. So let me bring it up. This is the managed tasks screen. These are all the different tasks that I've got set up within AutoMate. It can create users in Active Directory, encrypt and decrypt files, and so on. I've got an example here.
So this is a task in AutoMate, and what you do is record all the keystrokes. It's kind of like Robot REPLAY: you record all the different keystrokes, and over on the left-hand side we've got all the different types of actions that you can perform within AutoMate. For this one, I'm doing terminal emulation and logging into the 400, to IBM i actually. So you drag the action over onto the screen, it opens up a template, and you just fill in the fields. Then, behind the scenes, it writes the code that's needed to perform that action.
So here, the first thing I'm doing is connecting to Wisdom, which is that same server that Chuck was on, and then choosing the type of emulation that you want to do. And I've got some delays in here just so people can see it. So I'm entering text, pressing the Enter key, and what I'm going to do with this is bring up a menu on a green screen and run a report.
So once I've got all of the different steps that I need, I'll go ahead and run this and just show you what it looks like when it runs. So this is something that can be scheduled within Skybot actually, to be triggered as part of a larger job stream. So it's entering all of the options for the menu and it goes into select the report that is going to run, and then it signs off when it's completed.
So I create this little task, and then I can go over to my job flow diagram over here. And I can run this from any of these servers because it's going to go use that terminal emulation to go log into the IBM i. So if it needs to be part of the CRM order step, here are the commands that I'm running and I already put it in there. I'm doing a file push, so I'm doing an FTP.
As soon as that finishes, I'll add it in again. I'm going to add in an AutoMate task. And so I select the AutoMate server and now I've got a list of all those tasks that I've got created out there. And I'm going to select that IBM i emulation, and that's going to run as the next step in this Skybot job. So that's how we can integrate with Skybot and AutoMate.
A couple other tasks that we've got: one of the things that AutoMate can do is populate your Active Directory with new employees, to bring those new employees online. It can read an Excel spreadsheet. So it's going to open the spreadsheet that I've got my new hires listed on, read through that spreadsheet for all the information that's in there, and create a data set, kind of a local database within the AutoMate product. Then I'm going to have it loop through that data set, generate a temporary password for each person, and then create an Active Directory user: first name, last name, full name, here's their login credential, and here's their temporary password, which they're going to have to change the next time they log in.
So what you can do is give this to your system administrator or HR. They can create a spreadsheet of all the new employees and determine what group they need to be in and what access they need. I'm going to run this. It's going to fail here, because I've been having some access problems with the Active Directory server, but you can see what it's trying to do.
So it opened up that spreadsheet and read through it, and I've got the first and last names, phone, address, department, etc. Then it tries to create an Active Directory user here. If I look at the variables that this job is using, here's where I've got this data set; it's taking all of that information from the spreadsheet into this local data set, and then it's going to populate those Active Directory entries for me.
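The new-hire flow just demonstrated (read a spreadsheet, loop through it, generate a temporary password, build the Active Directory user record) can be sketched in plain Python. The column names, login format, and record layout are assumptions for illustration; AutoMate does this with drag-and-drop actions rather than code:

```python
import csv
import io
import secrets
import string

def temp_password(length: int = 12) -> str:
    """Generate a random temporary password the user must change."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

def build_ad_users(csv_text: str) -> list[dict]:
    """Read a new-hires sheet (as CSV) and build one AD record per row."""
    users = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        login = (row["first"][0] + row["last"]).lower()  # e.g. alovelace
        users.append({
            "full_name": f'{row["first"]} {row["last"]}',
            "login": login,
            "department": row["department"],
            "temp_password": temp_password(),
            "must_change_password": True,  # force change at next login
        })
    return users

if __name__ == "__main__":
    sheet = "first,last,department\nAda,Lovelace,IT\nAlan,Turing,HR\n"
    for u in build_ad_users(sheet):
        print(u["login"], u["department"])
```

A real run would hand each record to an AD create-user step instead of printing it.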
And then, just one more: monitoring email. We've had requests over the years to be able to trigger a job based on an email coming in. So we have a number of different email options that you can use over here: sending messages, getting messages. What I'm doing is monitoring an inbox. So here's my Exchange server; I'm monitoring my inbox, and I've got a filter.
So I'm just monitoring the inbox and the filter is… I want to bring in anything that's got “subscribe” in the subject line. I'm going to create a data set and loop through that, and then I'm going to pull all those emails out and put them in a spreadsheet, and then email that spreadsheet on to marketing or accounting or whoever it is that needs to have that information.
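That filter-and-forward flow is easy to picture in code: keep only messages whose subject contains the keyword, then write the matches to a CSV to pass along. A hedged sketch, assuming a simple message structure; the real AutoMate trigger watches an Exchange inbox rather than an in-memory list:

```python
import csv
import io

def matches_filter(message: dict, keyword: str = "subscribe") -> bool:
    """Case-insensitive subject-line filter, like the demo's inbox trigger."""
    return keyword.lower() in message["subject"].lower()

def messages_to_csv(messages: list[dict]) -> str:
    """Write the matching messages to CSV text for forwarding on."""
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=["sender", "subject"])
    writer.writeheader()
    for m in messages:
        if matches_filter(m):
            writer.writerow({"sender": m["sender"], "subject": m["subject"]})
    return out.getvalue()
```

The last step in the demo, emailing the CSV to marketing or accounting, would just attach this text to an outgoing message.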
So it read through my email and pulled out all of that information. I don't have my email open. Shoot. Sorry about that. And then it will email that spreadsheet, or CSV file, or whatever text file it is, directly to you. So here's the email, and here is the CSV file that it created. And I tested this from my other email account.
So keep in mind those business processes that you think may not be automatable; they absolutely can be automated with the AutoMate product. And then, along with Robot SCHEDULE and/or Skybot, you can automate all of your processes across your business.
So thank you for joining us. We have a few minutes for questions. I don't know if any questions came in.
Janine: First question. Yes, sorry, we have some great questions in the queue. "If I don't have any IBM partitions, what solutions would I choose?"
Pat: I would say--and correct me if I'm wrong, Chuck--but if you don't have any IBM partitions, that Skybot would probably be a good solution for you. Robot SCHEDULE requires an IBM i because that's where it's hosted. But if you don't have any IBM products, then Skybot would be a solution for you.
Janine: "Does the Skybot Scheduler IBM i agent interface with the IBM i Advanced Job Scheduler?" Does that make sense, Pat?
Pat: It does make sense. I'm certainly aware of the IBM i Advanced Job Scheduler. What we would do is convert those jobs that are in the Advanced Job Scheduler to Skybot jobs. We have a couple of different conversion tools that we can use. You can export that schedule for us, probably into a save file from the i or some type of text file. Then we'll take it, run it through our conversion, convert those jobs to Skybot jobs, and replace the Advanced Job Scheduler. Sorry.
Janine: Here's one that's kind of along the same line. "If you are currently running on Robot and you may have a need to move to Skybot, but you're keeping your IBM i, is there a conversion tool available that would migrate your Robot jobs to Skybot?"
Pat: There is a conversion tool. I'm just working with another customer that's looking at doing that. They're not moving off the i immediately. It's probably going to be in the next few years, but they're going to move some of their jobs. And for customers that are doing that, we have dependencies between Skybot jobs and Robot jobs.
And a lot of customers run it that way: they'll put Skybot on their Windows and Linux, and then they'll keep Robot on the i. We can set up the dependencies between them very easily. To move from Robot to Skybot, we would have you save the Robot library; send us a save file with the Robot library in it, we'll convert it and send it back to you, and then work with you to import those jobs into Skybot.
Janine: Great. Okay. I'm going to decipher this line. I think the first part is just a statement, “So you don't use the default job code or advanced job code anymore.” And then it says, "Robot is installed on i system." And I think you've established that, correct? That's for IBM i. Okay. Then it says, "Do you need a client for i?" I don't know if that makes sense or it's certainly something...
Chuck: Maybe I'll answer that one.
Chuck: Yes, if you're going to use Robot SCHEDULE on i, Robot SCHEDULE itself runs from the i, so there's no agent required. There are agents required for the Windows, Linux, and AIX environments when you're using Robot as an enterprise scheduler. On the Skybot side, if you do want to execute an IBM i job without interfacing with Robot on the IBM i, there is an i agent available for Skybot.
Janine: Thanks, Chuck. I think you deciphered that better than I could. Okay. "I already have Robot SCHEDULE installed on IBM i. Should I have Robot Enterprise or Skybot?"
Pat: Well, you could add either one. I guess to me, and again, Chuck, you can correct me if I'm wrong, the main difference is this: if you add Robot SCHEDULE Enterprise with an agent so you can schedule on Windows, the scheduling is all done through Robot. So whoever is scheduling those Windows jobs needs to have access to the IBM i. They need access to Robot SCHEDULE, so they need to have a user profile and log in through the i.
If you had Skybot, then it's hosted on Linux or AIX or Windows. So they wouldn't need a user profile on the i in order to schedule jobs on those other platforms. Like I said, we could still react to jobs over on Robot. So if you've got dependencies back and forth, you can do that. It's not an easy decision. I guess we'd like to have a discussion with a customer when they're trying to decide what's the best way to go, because it kind of depends on their environment. What do you think, Chuck?
Chuck: Yeah, generally if your core server is your IBM i server, that's where you would choose to put your scheduler.
Chuck: And I'm speaking generally.
Janine: Excuse me. And Pat, do you have a slide with contact information on it?
Pat: Oh, I'm sorry about that. I sure do, right there.
Janine: Great, great. Okay. Here's one, "Can I import my cron or cron on Windows Task Scheduler jobs into Skybot?"
Pat: Yes, you can. We have an import for both of those. There's a shell script that you run once you install a Skybot agent on your Linux or AIX server. I think the shell script is called something like "import to Skybot," and what it does is read those crontab files, make a copy of them, and put the copy on the Skybot server. It leaves the crontabs right where they are, so those jobs are going to keep running until you decide to comment them out or delete them.
And then we have an import center, and we'll import those jobs and their schedules into Skybot within a minute. We import them on hold, so that you can decide when to start running them in Skybot and stop running them in cron. Same thing with the Windows Task Scheduler: there's a command that you can use to export to a CSV file, and then we'll take that and format it for the import center.
So yes, we do, especially for those two. They're nice, free little schedulers, but each one lives on just one machine, and there are no cross-system dependencies. So we want to make it easy for you to move.
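Any crontab importer, Skybot's included, has to parse the same material: five time fields plus a command per entry, skipping comments and blanks. The Skybot import script itself isn't public, so this Python sketch only shows the shape of the data being converted; the `held` flag mirrors Pat's point that jobs are imported on hold:

```python
def parse_crontab(text: str) -> list[dict]:
    """Parse crontab text into job records an importer could consume."""
    jobs = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments, as any importer must
        fields = line.split(None, 5)
        if len(fields) < 6:
            continue  # not a complete entry
        minute, hour, dom, month, dow, command = fields
        jobs.append({
            "minute": minute, "hour": hour, "day_of_month": dom,
            "month": month, "day_of_week": dow, "command": command,
            "held": True,  # imported on hold, as described above
        })
    return jobs

if __name__ == "__main__":
    tab = "# nightly backup\n0 2 * * * /usr/local/bin/backup.sh\n"
    print(parse_crontab(tab))
```

Splitting with a maximum of five splits keeps the command intact even when it contains spaces.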
Janine: Okay. How about this one? “Where can I download a trial version installation configuration to try out Robot?”
Pat: You can go to helpsystems.com and there is a trial for Robot and Skybot and AutoMate or any of those at our website. And we have 30-day trials for all of those products.
Janine: Excellent. Okay. “If a job does not run on time, can I escalate the issue to ServiceNow, SNMP or SMTP?”
Pat: Yeah, we can use any of those. For ServiceNow, we've got a MID file that you can import into ServiceNow, and then we can open up the ticket. Yeah, absolutely.
Janine: Okay. “How are your products licensed?”
Pat: So Skybot is licensed with a license for the central server and then a license for each agent. Robot is the same: there's a license for the IBM i, and then a license for each partition or each agent. Same with AutoMate. It's simple in the sense that we don't license by job or by user or anything like that; it's pretty much by the system environment.
Janine: Okay. Well, this is a little bit of a summary question. But it may be even a good one to end on, we're coming toward the end here. It says, "Looks like you have many scheduling products, why would I choose one over the other?"
Pat: Well, and that is a very good question. And that's kind of what we wanted to talk about today. But first, you need to determine what your requirements are. Yes, we have a lot of different solutions and it's going to depend. You know, like Chuck said earlier, "Where is your core system? Where do you do your core scheduling from?"
If you do it from the i, you would want to go with Robot. If you do it from AIX or Linux or Windows, then you would go with Skybot, and AutoMate would plug into either one of those. So it kind of depends on your environment. The functionality is going to be pretty much the same. So I guess that's the big difference. What's the core of your scheduling needs, whether it's IBM i or something else?
Janine: Okay. Sounds great. That's all we have time for today. I want to thank everyone for attending today's webinar and I especially want to thank Pat and Chuck for sharing their expertise.
Later this week, we will be sending out a link to a recording of today's presentations to everyone on the call, as well as to anyone who registered for today's webinar but could not attend. That concludes our webinar. Thanks again and have a great day.