You would think that a server hosting mission-critical applications and data would get the most security attention, right? Unfortunately, that is rarely the case with IBM i.
While Security Information and Event Management (SIEM) solutions are an established way to meet compliance mandates and stay on top of suspicious activity, most enterprise SIEM solutions offer little to no coverage for IBM i servers. This means that event data—if even collected—sits abandoned and ignored.
In this webinar, IBM i security expert Robin Tatam discusses a new solution to bridge the gap between your enterprise SIEM solution and your IBM i servers. You’ll learn:
- How to begin security auditing on IBM i without generating enormous quantities of data
- 3 common mistakes that IBM i people make when auditing
- Why regulations state that access monitoring is essential
- Dangerous events that your audit data may be missing
- How Powertech SIEM Agent can help identify security threats on IBM i
Introduction to Defending IBM i Against Cyber Attacks in Real Time
Hello, everyone. Welcome to the conversation today, where we are going to be talking about defending against cyber attacks, in real time, with a focus on IBM i.
For those of you that may not know me, my name is Robin Tatam. I'm the Director of Security Technologies here at HelpSystems, and it is definitely an honor to be able to present to you today and have this conversation. Part of my responsibilities here at HelpSystems entail guiding customers in their security journey. I'm also a subject matter expert for COMMON, so I present a lot of content for that fantastic organization, as well.
I have been on the platform now for about 30 years, a little over that actually, but I'm also an auditor, as well. So, I kind of straddle that line between an IBM i tech and somebody who works extensively in security and compliance. If you have any questions that pertain to this content or to anything else, I'm more than happy to answer them. You can send them to my e-mail, or you can direct message me on Twitter, and I would be happy to help you in any way that I can.
The presentation I've built out today is fairly straightforward. We're going to talk about the necessity for collecting security events and how we accomplish that. We'll talk about some challenges that come with IBM i, and then I'd like to share with you a couple of aspects of our portfolio that would work from a solution perspective to resolve these challenges. A lot of this can be done straight in the operating system, and there are some extensions that we can help you with at very little cost on your end, as well. So, I'm excited to be able to share this with you.
IBM i Security Events: Where Do They Come From and Why Do They Matter?
The first thing we have to talk about is security-related events in general. When we take the IBM i out of the conversation momentarily and look at your IT infrastructure as a whole, we know that security-related and job-related events originate on lots of different platforms. Whether it's application servers like IBM i, end-user workstations, network equipment, firewalls, or the all-important antivirus software, just to name a few, these systems and appliances are typically capable of generating a tremendous amount of audit-related data. And in some instances, the events being logged are indicative of something that is happening right then and there, and those threats may require rapid response. So, being able to have visibility into those types of activities is incredibly important, especially in a time-sensitive manner.
From a regulatory standpoint, there's a lot of recognition that auditing is a desired function, but collecting the data is only one half of the puzzle; disseminating it is the other. When it comes to monitoring those security events, industry regulations like the Payment Card Industry (PCI) standard mandate that we have visibility into what's happening on our systems. Government legislation like Sarbanes-Oxley (SOX) here in the US, GDPR in the European Union, HIPAA, GLBA, CCPA: all of these acronyms typically come with a requirement for some level of security monitoring. The experts understand that when activities happening on the system may be deemed in violation of the rules, we need to know about them in a timely manner so that we can address them.
Simple user tracking is another option. If you aren't subject to any legislation, and your industry isn't regulated, then just knowing what's happening on your system is an important part of any security defense.
If you're doing any replication from your primary system, perhaps to a backup system, there's a very good chance that you're already collecting those events, so the HA solution can push them over to the backup system to make sure that the two partitions, or the two systems, stay in sync.
And although it's probably not the primary tool when it comes to debugging an application, especially when you're making security changes to that application, the monitoring and logging of that data is very beneficial to developers. As I said, there are other tools that they tend to use as well, but this is another capability in the tool belt that we want to take advantage of.
How to Activate the Security Audit Journal on IBM i for Security Events
The next section I want to talk about is how we actually activate this. So, this functionality within IBM i, is going to leverage what we call the Security Audit Journal. This is a custom resource. It's built specifically for recording security and server-related events. It came about in version one of the operating system, so it's been around a very long time, but a lot of people are still unaware of its presence, or at least how to set it up and configure it.
The operating system does not include a security audit journal when you first fire it up. It's your responsibility to actually create it before you can start auditing to it.
The events that are recorded to that journal are going to be based on the configuration of several different settings, so you control the flow of the data. That's important because one of the things that we often find is people turn on everything under the sun when it comes to auditing, and then they wonder why they're inundated with this data. We want to be able to fine tune that, and the operating system allows that through the configuration.
One of the recommendations I often give my clients is, when they're setting up auditing to consider creating a profile that has audit (*AUDIT) special authority. This allows the user to maintain the configuration, to handle the audit events, and it may be a distinct profile from somebody who would perhaps normally be a privileged account on the system.
We do this, especially where there is a desire for separation of duties.
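As a sketch of that recommendation, assuming a hypothetical profile name AUDADMIN (any name and additional attributes you need would be your own choice), such a dedicated auditing profile might be created like this:

```
CRTUSRPRF USRPRF(AUDADMIN)                         +
          SPCAUT(*AUDIT)                           +
          TEXT('Dedicated security audit profile')
```

Keeping *AUDIT on its own profile, rather than piling it onto an already privileged account, is what gives you the separation of duties.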
The State of IBM i Security Study is sharing with us that there are still 20% of shops that have not activated this function. That's a startling number considering that in this day and age, knowing what's happening on your system is really a core requirement.
Unfortunately, in many instances, the 80% that, on the surface, are collecting data are not doing so correctly. They're not collecting the right kinds of events, and they're not retaining that data for a sufficient period of time. So, there tends to be a bit of a breakdown in that process. While you could argue that this is a pretty good indication that people are using this function, I would offer that only a very small percentage are using it in the manner IBM intended.
Create a Dedicated Library for Journal Receivers
The default location for this audit journal data is the QGPL Library, the General Purpose Library, which is an IBM supplied library that contains some OS functions and tends to be kind of a dumping ground for people that aren't sure where to store an object. So, in my opinion, a better way to handle that is by creating a dedicated library to handle those journal receivers.
I put a command up here, it's just a standard Create Lib (CRTLIB) command. You can call that library anything you want, and this is what's going to house those receivers that store the all important audit data.
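As an example, assuming the illustrative library name AUDLIB, the command might look like this (the AUT(*EXCLUDE) parameter locks the library down from the start):

```
CRTLIB LIB(AUDLIB)                              +
       TEXT('Security audit journal receivers') +
       AUT(*EXCLUDE)
```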
By doing this, we can now secure that data more appropriately, and if we have to back it up or restore it, it's much easier to do that when we're not trying to blend with the contents of a more complex library.
The Audit Journal itself is like a funnel, if you think of it that way. It always has the name QAUDJRN and it always lives in the QSYS library. You have no control over that. However, that funnel is feeding data into the containers that will store it. Those are called journal receivers. The name of those journal receivers can be anything you want. And they can live in any library that you desire. Hence, why we create a library as the first step.
Change Security Auditing (CHGSECAUD)
You can create all the moving parts and pieces one by one. You can also set the system value controls manually or change them on an ongoing basis, but I would offer, especially if this is the first time you have done it, that you consider using the Change Security Auditing (CHGSECAUD) command to pull all of those moving parts and pieces together and simplify that task.
This is what that screen looks like. I've numbered the different parameters as 1, 2, and 3, and we're going to talk about each of them individually, so you have a good understanding of what they do and how they work.
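Pulled together on the command line, a typical invocation might look like the sketch below. The library and receiver names are illustrative, and you should verify the parameter names with the command prompt (F4) on your release:

```
CHGSECAUD QAUDCTL(*AUDLVL *OBJAUD *NOQTEMP) +
          QAUDLVL(*DFTSET)                  +
          JRNRCV(AUDLIB/AUDRCV0001)
```

This single command creates the journal receiver, creates the QAUDJRN journal if it doesn't exist, and sets the two system values in one step.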
First Parameter: QAUDCTL System Value
First one we're going to talk about is the QAUDCTL system value. As I'm sure you have probably guessed, that pertains to a system value by the name of QAUDCTL. And this system value is considered the master on/off switch for that auditing function.
The default value, as specified by IBM when you first install the operating system, is *NONE, and that is equivalent to having the auditing in the off position, and that's what we want to mitigate. We want to turn this auditing function on, so that we do have visibility to the activities on the system. We have a couple of values that will facilitate that for us. The *AUDLVL value turns on audit activity at a system level, meaning that it pertains to all users. We also have a secondary value of *OBJAUD that turns on auditing for individual objects.
That does not mean we're going to suddenly instantly start auditing every object action on the system, no. There is an attribute possessed by each individual object that says whether that audit function should be active or not for that object. But, without the system setting configured, those attributes are ignored.
Step one is to turn on *OBJAUD, that facilitates the action, and then you can control it at an individual object level, turning it on, presumably, for your sensitive objects.
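For instance, once *OBJAUD is active in QAUDCTL, you might flag a sensitive file for auditing with the Change Object Auditing (CHGOBJAUD) command. The library, object, and auditing value here are illustrative:

```
/* Log every change made to this file (use *ALL to log reads, too) */
CHGOBJAUD OBJ(PAYLIB/PAYROLL) OBJTYPE(*FILE) OBJAUD(*CHANGE)
```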
There's a third option of *NOQTEMP that tells the system that you want to ignore any activities logged based on a job's interaction with the QTEMP temporary library. If you're not familiar with QTEMP, it's a temporary library that is associated uniquely with each individual job running on the system.
You have one for your interactive job. If you submit a batch job, it has its own QTEMP. Think of it as a scratch pad that it can use during the function of the application. By specifying *NOQTEMP, we just eliminate a small amount of the noise that may be logged for activities that probably are not of major interest.
Second Parameter: Auditing Values
The second parameter was auditing values, and this corresponds to another system value, this time called QAUDLVL. There's also an overflow companion to that, QAUDLVL2. We use this value to indicate to the system what types of all-user activities we wish to track. If you're not really sure, there is a special value of *DFTSET (default set) that translates into authority failures (*AUTFAIL), object creations (*CREATE), object deletions (*DELETE), security activities (*SECURITY), and save/restore functions (*SAVRST). The interesting thing about that is that the operating system only audits restore activities. So, if you specify *SAVRST, don't expect to see everything being saved.
The QAUDLVL system value has a finite number of entries that can be specified, and over the years, IBM has maxed that out. In fact, they have exceeded it. So, there was an addition of QAUDLVL2, but we want to make sure that you understand that that secondary value, that overflow, will only be referenced if the primary, the QAUDLVL, has an indicator of *AUDLVL2. You have to specify in the primary setting that it needs to look at that secondary setting. A lot of people forget that and then wonder why those audit configurations are ignored.
My own preference is to actually define everything in the secondary or the overflow value. The way we accomplish that is through my graphic here. We set the QAUDLVL to only have one single value that indicates that the overflow should be used, and then we go to the overflow, and we specify all of our individual criteria.
Why do I prefer that? Because QAUDLVL2 was created with plenty of additional space, so you can create every various combination of all of the settings here without ever worrying, at least for any foreseeable future, that we're going to overflow that value, as well.
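As a sketch, the two system value changes might look like this (the list of criteria in QAUDLVL2 here is just an example; substitute your own):

```
CHGSYSVAL SYSVAL(QAUDLVL)  VALUE('*AUDLVL2')
CHGSYSVAL SYSVAL(QAUDLVL2) VALUE('*AUTFAIL *CREATE *DELETE *SECURITY *SAVRST')
```

The single *AUDLVL2 entry in the primary value is what tells the system to go read the overflow.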
There's lots of different categories of event types that we can track at a system wide level. I have them listed here, they will flow onto the next slide as well, but you'll see several of them have their text in italics. If that's the case, then that individual value can be broken down into subsets.
For example, the *JOBDTA (job data) entry can be broken down into several entries that begin with *JOB. So, if you find that job data is collecting more data than you're truly interested in, consider breaking it down and individually listing just the subsets you need. Of course, if you specify all of the subsets, it's moot; you might as well specify the main value.
All of the values you see here, and all of them on the next slide with the exception of the very first one, can also be specified at an individual user level. So, if you have a lot of users being constrained within an application with no command line and other things, and you go, I’m not really that interested in their activities because we know it's handled by the application, then what you can do is not activate at the system level by specifying these values, but instead, turn them on user by user. If you have a lot of users, obviously, that could be a fair amount of work. The reason we can't specify the first one, the attention event (*ATNEVT) at an individual user level is because those activities are being logged by the operating system's intrusion detection and prevention system. And that system is not specific to any particular user, it's things like denial of service, and so the attention event (*ATNEVT) doesn't pertain to any specific user, and, therefore, it makes no sense that you would specify it for a user.
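Turning auditing on user by user is done with the Change User Auditing (CHGUSRAUD) command, which itself requires *AUDIT special authority. A hedged example, with an illustrative profile name:

```
/* Track command execution and service tool use for one user */
CHGUSRAUD USRPRF(JSMITH) AUDLVL(*CMD *SERVICE)
```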
Here is the additional slide that has the extra options. I want to point out the fourth and fifth ones down, PTF Object (*PTFOBJ) and PTF Operations (*PTFOPR). These are rarely used, partly because they came about in version 7.2 of the operating system, and almost nobody noticed them in the memo to users. So, if you are doing auditing today and you have proudly done a comprehensive job of it, you may want to consider adding these two. They're not noisy. They don't generate a lot of traffic, so adding them doesn't introduce much overhead to the system.
Speaking of overhead, these don't tend to be performance concerning. So, a lot of people don't turn auditing on, because they're worried about the performance of their system. Typically, it is a non-issue.
However, if you have a lot of this data on the system and you don't actively purge it, or rather archive it (I don't want you to just purge it), then we have to consider the amount of disk space that it may consume. The technical aspect of setting up auditing is something we can do in a good 30 seconds, but we probably want to sit down and think about how long we're going to keep that data, what we're going to do with it, where we're going to archive it to, and so on. Those considerations need to come into play when we talk about setting this up.
Third Parameter: Initial Journal Receiver
The third parameter is called the Initial Journal Receiver. This is telling the system where we want that first attached receiver. As a reminder, the receiver is where the actual audit data lives. The journal itself is just a funnel that feeds it into the receiver. This parameter is going to supply the system with the name and the library location for the initial journal receiver. We recommend that you name it with a sequence number at the end (e.g. AUDRCV0001). By doing that, the system then can auto increment it. When a receiver becomes full, you can move it out of the way conceptually and move an empty receiver in its place.
That new receiver is created automatically, if you are indicating that the system handles all of the audit journal functions, which is how most people do that. So, over time, you will see (Audit Receiver) AUDRCV0002, and 3, and 4 and 5, which is why you want to have good archiving procedures and policies set.
If you are already auditing, this parameter is ignored. I highlighted that up at the top in red. So, if you are already auditing, and it's going to QGPL, and you're like, Wow, Robin's idea of putting this in its own library is a fantastic idea, no problem, you can do that, but you have to do a couple of manual steps.
You have to create a new journal receiver in your new library and then attach it to the journal. It's only a one-time thing. After you have attached the new journal and the new library to the journal, then the creation of the subsequent receivers will occur automatically in that new library. So, in essence, you're just redirecting the creation one time, and then the system will take it from there.
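Those one-time manual steps might look like this sketch, again assuming the illustrative names AUDLIB and AUDRCV0001:

```
/* Create the new receiver in the new library, then attach it */
CRTJRNRCV JRNRCV(AUDLIB/AUDRCV0001)
CHGJRN    JRN(QSYS/QAUDJRN) JRNRCV(AUDLIB/AUDRCV0001)
```

From that point forward, system-managed receivers are generated in AUDLIB automatically.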
Additional Audit-Related System Values on IBM i to be Aware of
There are a couple of additional audit related system values that I want you to be aware of. Chances are you probably won't change them, but I want you to know that they're there.
Number one is QAUDFRCLVL (auditing force level), which is in essence a caching directive. It tells the system how many audit records can be held in memory before they're flushed to disk. If your security policy is extremely strict, maybe you're a government agency or something, and it mandates that there never be a scenario where the system's audit records are lost, then you want to set that to its minimum of 1, so every record is written to disk immediately. Otherwise, the default setting of *SYS allows the operating system to optimize performance. It figures out how many records to cache and, at some point, writes them to disk, and that's an appropriate default for most people.
Another one is QAUDENDACN (auditing end action). That indicates what the server should do if auditing fails for some reason. The default setting is *NOTIFY, which sends a message to the system operator message queue (QSYSOPR). If you have created the optional system message queue (QSYSMSG), then it will send a message there. If you change that value to *PWRDWNSYS, as you might guess, this forces the system into an immediate IPL.
When the system comes back up, you have to sign on with a profile that has all object (*ALLOBJ) and audit (*AUDIT) special authorities, re-establish the auditing function, figure out what went wrong, resolve that issue, and then bring the system back out of its restricted state. It's pretty impactful. If that's a production system, and you're not being held to strict audit standards, that, of course, can be very disruptive. So, I caution you on changing that, but if you are in a situation where you never want an occurrence where audit data is lost, then the *PWRDWNSYS option is for you.
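To review or adjust these two values, the standard system value commands apply; a sketch:

```
DSPSYSVAL SYSVAL(QAUDFRCLVL)                  /* review the caching directive   */
CHGSYSVAL SYSVAL(QAUDENDACN) VALUE('*NOTIFY') /* keep the non-disruptive action */
```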
Security Auditing on IBM i: How to Process and Prioritize Critical Data
All right. So, you have turned that auditing on, and I will offer that probably many of you have. The next thing you're going to figure out quickly is that the system can generate a lot of data, and we need to talk about how we're going to do something with it.
Auditing itself is definitely a good thing. I tell everybody that they need to be doing it, but we want to be cautious about what we're auditing and who we're auditing. If we turn on everything for every user, there is a very good chance that we're going to be flooded with this information. I use the analogy of drinking from a firehose. Hopefully none of you have ever tried that, but I'm sure you get the idea.
If you have turned the audit function on and you have shuddered at the amount of data and the events that the system is logging, understand that, number one, you can fine tune that function. It is not all or nothing. That's the first thing we would encourage, but the next is to leverage automation to help you process those events.
Now on a computer, there can be millions of things happening all day, every day, so it's hard to notice what's truly important amongst those things that aren't. We may have critical problems such as possible hardware failure, or even a cyber attack, or data breach that can go totally unnoticed until it ends up being a big mess.
Looking for those events becomes like looking for the proverbial needle in an ever-changing haystack. It's time consuming, it's very prone to human error, and it's certainly not a fun task that anybody wants to voluntarily take on.
Automation can quickly analyze those log files, so you can sort through what's important and what's not, even logs that contain an incredible number of entries, and then highlight those entries that are deemed to be the most important, such as an invalid password entry.
Of course, this is the real world, and most of us have more than just one server in the data center. Multiply that initial technology challenge by hundreds, maybe even thousands, of servers, and you have a critically important task that simply nobody wants to do, or even if they wanted to, wouldn't be able to.
In proverbial terms, think of it now as a scenario that includes lots of constantly changing haystacks, and that unenviable job of trying to find potential needles in each and every one of them.
What makes things even worse is that needles may only be recognized as needles when we reconstruct them from individual parts, some found in one haystack and some found in another. And if that weren't enough, we have to be watching constantly, because these needles, or even parts of needles, could arrive at any instant.
In technology terms, imagine a single invalid sign-on attempt on just one server, maybe the IBM i. I can pretty much guarantee it's going to be deemed no big deal, right? Users enter invalid credentials all the time. But what if that invalid sign-on were to happen across each of the servers in the data center, maybe even ones running different operating systems, one right after another, in quick succession? If I'm an admin responsible for monitoring only my own server, I would probably not know that what was happening to me was, in fact, not a negligible act, but part of a much bigger, systemic issue.
We have to be able to gather event data from all of the compatible devices, analyze all of it, looking for the messages and patterns that are recognized as being truly important.
As we add more and more servers into the data center, this type of cross-platform monitoring becomes a pretty critical activity. These servers are often running different operating systems, they may be located in different parts of the world, and the security team has to have automation to help with this burdensome task. Because, if we don't, important events are going to go completely unnoticed. Remember, it's not just about seeing that something has happened; it's also about seeing it quickly enough to act before that situation becomes a real issue.
One of the most popular automation technologies out there is called a SIEM (Security Information and Event Manager), and there's several flavors of that term and each has functional nuances, but they all have the same basic premise of informing the administrators or the security staff when things are happening that people should know about.
The SIEM allows the overworked, and arguably understaffed, security team to see which red flags are popping up across their entire enterprise. That way, they can focus their valuable time and attention on the most critical issues before they work on issues that are less important.
There are a couple of ways to analyze this type of data, and I'm going to compare the two primary ones. The first is analytical reporting. This gives us post-event visibility, and it's great for non-time-sensitive events and system configurations. Think of the reports you run on your system for how many users have all object authority, or how many system value changes occurred. These are not as time sensitive as an invalid sign-on attempt, so in this instance, reports work well, as long as we're not urgently waiting for that data.
The common complaint with running reports on the system is that the operating system functionality is pretty limited in this regard. Generating a spool file is not typically ideal because you can't sort or filter it. We can send things to a database file, but again, it's not conducive to quick and easy analysis. We can't correlate events across different systems, so when I see a user logging into my IBM i, again, maybe it's no big deal, but if I see it cascade across multiple, maybe heterogeneous, systems, those challenges are there, as well. And this is not real time, so we might look at this on a Friday afternoon and discover that on Monday morning, something horrible happened. And by now, it has already become a massive issue.
The SIEM has benefits in this regard. It gives us immediate notification of events, and it's going to help bubble up those events that are deemed critical, based on the configuration of the SIEM.
It's going to give us visibility to events that are coming from multiple different, in many instances disparate, sources, so we can feed everything into a central aggregator to give us that central control. It's not without complaints. In many instances, especially in IBM i shops, we hear that the security team receiving and processing these events don't know anything about the unique IBM i. They think it's just another Windows server, or it's this “mainframe” that sits out there, and they don't really know the nuances of that.
Our goal by sending events to a SIEM is not to lose visibility to the local admins but to enhance visibility with tools that can help with the burden of processing the logs.
Sometimes SIEMs can be hard to implement, and they can be expensive. Some SIEMs charge based on the volume of data, the transaction volume that's going to them, or the amount of disk space that the logging data consumes. So, there can be significant costs associated with that.
The data coming from different sources may arrive in completely different formats. What IBM i creates is going to look very different from what Unix creates. In many instances, IBM i is not even speaking to that SIEM, so we invest in this technology, but one of the most critical servers in the data center is not even communicating with it. It's sitting in the corner, trying to handle all its own events, which, sadly, it fails to do in most instances.
I'm going to open a quick poll here. I'm interested in whether you are using a SIEM today, and if you're familiar with this technology. It may be that you don't have visibility; perhaps that SIEM is on the network, and you're not part of that team. I gave two options for yes: the difference being whether you're running a SIEM and sending IBM i events there, or running a SIEM but not sending the IBM i events there. Of course, if you don't have a SIEM, there's an answer for that, and if you don't know, you can answer that, as well. I'll share the results with you in just a moment when we're done.
My experience has been, in most instances, when we determine that a SIEM is installed, and the SIEM could be a tool like Splunk, ArcSight, LogRhythm, QRadar, lots of different technologies, the IBM i just doesn't speak to it. People don't know how to integrate the two.
It looks like right now, the winner is Not Sure. 35% of you don't know if there is a SIEM, so that’s a good question to ask one of your network administrators. In many instances that technology is handled at the network level, not at an IBM i level.
All right let me go ahead and close that out. Final responses:
- 14% of you say that you are running a SIEM and including your IBM i events.
- 25%, so one quarter of you, are running a SIEM but not sending your events there.
- 29% don't have a SIEM today.
- 32%, as I said, not sure.
I appreciate that feedback, and it definitely looks like either the IBM i is not fully integrated or the SIEM technology is not currently being leveraged, and I want to talk about that next.
The Unique Event Log Management Challenge for IBM i
So, a unique challenge for IBM i. The first thing I'm going to show you, and this is a bit of an eyeful, is a Unix log. Talk about a nightmare. These logs are impossible to understand if you're not an expert, or arguably, some kind of robot. And even if you could understand what they're telling you, the sheer volume of these log entries is really just going to be unmanageable. So, this is not something you would ever be able to address as a human.
When we get to the Windows side, you could argue that the log view is a little bit prettier. The interesting thing is that this is considered the friendly view. If you look at the data in the lower third of the screen, it's probably not telling you a whole lot of useful information unless you are a Windows admin, and you know exactly what this type of thing means. So, again, tremendous amount of volume, not easy to interpret.
IBM i is a little bit more organized. In fact, it's a lot more organized, although, again, the volume of events could number now in the hundreds or even thousands, and I'm talking about per second. So, you would never keep up with this as a human being. You see the AF entry there at the top. Let me just highlight that here. This AF entry here, this followed by the CD entries, these codes are telling us the type of entry that it is. In this instance, this is an authority failure. In this one, it's a command execution, so there's a number of different events here that are happening. This was not a production system, and even here we're seeing multiple events per second. On a production system, again, that could definitely exceed that number dramatically.
The events themselves have a payload associated with them that tells the reader probably more than they want to know. Again, you have to have some knowledge of what the system is and what it's doing in order to interpret this log and judge whether an event is important or not.
These log entries can be pulled into a database file, so they can be searched and sorted using query or other tools, but the volume of what you would probably consider as being mundane is likely to make it very hard to determine what's truly important and what's happening on your system that you need to know right now.
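For example, the Copy Audit Journal Entries (CPYAUDJRNE) command extracts entries of a given type into a database file for querying, and DSPJRN can display them interactively. A sketch, with an illustrative library name (note that CPYAUDJRNE appends the entry type to the output file name, e.g. QAUDITAF):

```
/* Extract authority failure (AF) entries to a queryable file */
CPYAUDJRNE ENTTYP(AF) OUTFILE(AUDLIB/QAUDIT)

/* Or browse AF and CD (command) entries interactively */
DSPJRN JRN(QSYS/QAUDJRN) ENTTYP(AF CD)
```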
We also have to talk about the possibility of augmenting the log. What I mean by that is that on the left-hand side, we see those native activities, I've shown you how to activate that function. But on the right-hand side, we have a grayed-out area, those PC activities. If somebody makes an FTP connection, and they run a command, or they download a file, in many instances you're not going to see a log of those entries. A file transfer is not an auditable event.
So, we have a lack of visibility to very important activities on the system, so the first thing we want to talk about is just very quickly the idea that we're going to augment that green screen native auditing function with logging of those PC connections.
There's a term we use there called exit point, maybe you have heard exit program. This is a way that the operating system can have additional functions extended to it. In this instance, we have a tool in our portfolio called Powertech Exit Point Manager for IBM i that provides all the exit programs necessary to log and control those PC activities. That's not the focus of my conversation today, but I wanted you to be aware that if you are logging, but you have no exit programs, you don't have a full picture of what's happening on your system. We probably need to talk.
Let's assume for a second that you do have both sides covered, and that audit journal is flourishing with valuable data. We now have a choice of whether we report on the events, or whether we alert on the events. And that's where we're going to go next.
Essential Tools for SIEM Integration on IBM i
I'd like to share with you some solutions that will give you better visibility, initially on the reporting side, but with the focus, of course, on alerting, because we're talking about real-time defenses in this session today.
When we get the event into the audit journal, we have a choice of reporting or alerting. If we do want to just report, one of the tools that customers have found very beneficial is called Powertech Compliance Monitor for IBM i. This will give you reporting over your static configuration settings, like your system values, your user profiles, and your public and private permissions. These are the things that don't change multiple times per second, but it also has audit journal capabilities, as well. So, it can report on those events, but it is reactive. This means that, in this instance, I run a report, and I say, over the last 48 hours, I want to see any changes to my user profiles, I want to see any system values that were modified, and then you have visibility to that. If you also struggle with retention of those entries, there's some harvesting functionality in there, as well, that can store some of your audit data in a compressed form.
So, that's a really great tool, but we're focused here on alerting. We want to know when things are happening as they happen. So we are going to focus on the alert side of the conversation, which is leveraging a utility called Powertech SIEM Agent for IBM i.
I’m guessing you can probably tell from the name the purpose of this particular agent, but this is focused not on the normal day-to-day configuration of the system, but it is focused exclusively on the audit journal events and message queue type events that the system can generate.
The purpose of this is to provide a translator. The operating system of IBM i is a little weird and proprietary, I get it. I've worked on it for a long time, and it doesn't speak the same languages as other servers in your data center. So, we have to be able to take that uniqueness of the platform and translate it into something that is more universal, that's going to be understood by a wider range of facilities, and that is the purpose of the Powertech SIEM Agent for IBM i.
By taking the sources that are noted on the left-hand side here, from the audit journal as we have discussed, from other solutions like the exit point manager, from system message queues, from the antivirus message queue, from message queues that perhaps are being written to by your applications, and even database journals, all of these input sources can be fed into the translator, and then sent out in real-time to a number of different outputs. The most common use of this is to format into the industry standard SYSLOG messaging format. It's a little bit like XML that can then be ingested quite easily into virtually any commercial SIEM solution.
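To give a feel for what that translation step produces, here is a rough Python sketch that formats an audit event as a SYSLOG-style line. To be clear, the field layout, function name, and message body below are my own illustration, not Powertech's actual output format; the PRI calculation (facility times 8 plus severity) does follow the standard syslog convention.

```python
from datetime import datetime, timezone

def to_syslog(host: str, entry_type: str, user: str, detail: str,
              severity: int = 5, facility: int = 13) -> str:
    """Format an audit event as a syslog-style line (illustrative layout).

    PRI = facility * 8 + severity per the syslog convention;
    facility 13 is conventionally 'log audit'. The timestamp here is
    zero-padded for simplicity, where classic RFC 3164 space-pads the day.
    """
    pri = facility * 8 + severity
    stamp = datetime.now(timezone.utc).strftime("%b %d %H:%M:%S")
    return f"<{pri}>{stamp} {host} QAUDJRN: type={entry_type} user={user} msg={detail}"
```

Once an event is flattened into a single line like this, virtually any SIEM can ingest it over the network, which is exactly why the SYSLOG format is the common denominator here.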
For those of you that are running a SIEM, that 25% of you listening that have a SIEM but aren't communicating your IBM i events into it, this is the way you would go. You would set the SIEM Agent to indicate which events you're interested in, get them into that standard SYSLOG format, and then communicate them directly to the SIEM to be processed, meaning IBM i now just becomes another node on the network, feeding its event logs into that environment.
For those of you that indicated you didn't have a SIEM, this can also do real-time notification to a message queue, so you don't have to invest in a SIEM, you can still get some functionality popping up in your message queues. If you have a message manager, like Halcyon or Robot, this becomes even more powerful, because now we can communicate those through to e-mail and phone texts and other things. Again, responsiveness is the magic here.
We also have the integrated file system (IFS) that is available as an output. We can create log files there. Again, it's going to generate something that's perhaps not humanly processable just based on volume, but if you have something that needs that log file being generated, even if you're just storing it for posterity, then the IFS is a great place for it.
We have multiple options, and you don't have to pick and choose. You can send to any or all of these, and you can have multiple instances of each one, and you can do so concurrently. Some people will have a SYSLOG output to their SIEM, but they will also send an alert to a message queue.
That can all be handled in different ways, so we can filter at the source, and we can set different thresholds. For example, if somebody enters an invalid password for QSECOFR, it is probably a more important event than if they just enter one for Johnny in the warehouse. So, in that instance, perhaps we send it with a higher criticality, or we send it not only to the message queue, but also to the SYSLOG server. So, we have all of the controls that we need in order to determine what goes where and when.
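The QSECOFR-versus-Johnny idea above can be sketched as a small routing function. This is a minimal illustration of source-side filtering, not how SIEM Agent is actually configured: the rule set, profile list, and destination names are all hypothetical.

```python
# Sketch of source-side filtering: assign a criticality to each event
# and decide where it should be routed. Rules and destinations are
# illustrative only.
CRITICAL_PROFILES = {"QSECOFR", "QSYS"}  # hypothetical high-value profiles

def route_event(entry_type: str, user: str):
    """Return (severity, destinations) for an audit event."""
    # An invalid password (PW) against a powerful profile is urgent:
    # send it everywhere, at high criticality.
    if entry_type == "PW" and user.upper() in CRITICAL_PROFILES:
        return "high", ["syslog", "message_queue"]
    # Other failures still go to the SIEM, at medium criticality.
    if entry_type in ("AF", "PW"):
        return "medium", ["syslog"]
    # Everything else is logged but not alerted on.
    return "low", []
```

The point is that the decision happens before the event leaves the system, so the SIEM and the operators only see what each of them actually needs.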
Now, we have recently released a new version of the SIEM Agent. So, if you have this tool, or if you were using what we used to call Interact, which was the predecessor to SIEM Agent, then I would strongly encourage you to let us know. Reach out to support, reach out to your sales manager, and let them know. Because if you're on a maintenance agreement, this is a free upgrade, and it's a great upgrade to do. We have basically rewritten the software, so that it has an amazing amount of new functionality that we have been wanting to put in here forever, and we have done it, and the developers did a fantastic job of making this incredibly simple but powerful and really, really flexible. Take advantage of that upgrade if you're running a prior version.
Now for those of you that said you don't have a SIEM or you're not sure if you have a SIEM, then understand that we can also have a conversation about that. My focus here is on SIEM Agent, up until now, but I also do want to just make you aware of Powertech Event Manager. Now if you are running an existing SIEM, don't tune out, there may be something that can be beneficial here for you.
The Powertech Event Manager at a glance is going to take a number of different data sources, disparate data sources, so it can be pulling any of those logs: your firewall, your antivirus solution, Windows workstations, IBM i. It's going to translate, normalize and enrich that data and then provide all the auditing and real-time forensics that is demanded by most modern organizations.
What are those disparate data sources? Well, as I've mentioned, of course, IBM i is supported, that's accomplished through our Powertech SIEM Agent for IBM i, but we also have the ability to see those Windows, Unix, AIX, Linux, arguably virtually any application, technology or system that logs something.
Powertech Event Manager has the power even to understand non-standard logs. So, even if your source is not a standard format, we can interpret that.
The process of adding a source is very simple. You just say add new, you give some information, and you now have the ability to send that information, or receive that information in.
When we create the asset, the item being monitored, we can indicate how critical that asset is to our infrastructure. If there is a server that really doesn't perform a deeply important task, then maybe we back burner that behind something like an e-mail server or an order server that is processing high value actions. We can also indicate if there's any type of regulations that apply. Here I have PCI and GDPR checked, but as you can see, there's a lot of acronyms. That audit community loves their acronyms, but what we're going to do is provide that capability so that we can get a better view of whether something is compliant or not.
The Powertech Event Manager, then, runs on a Windows server. It provides a lot of different views into the data and stores all those metrics and data in a SQL Server database, so that you can search back and do your analytics. Those metrics can be analyzed and compared here. For example, trending and comparisons with the previous period give us a sense of the activity behaviors that are happening. These dashboards are actually from an IBM i server, and you can see here the number of events. So, if there is suddenly an increase in the number of threats or highlighted activities, then we can be aware of that. Even if the events don't stand out as individual items by themselves, the very fact that we see a large increase may be an issue. That's the value of trending.
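The period-over-period trending described above boils down to a simple comparison. Here is a minimal sketch of the idea, assuming a hypothetical rule that flags a source when its event count at least doubles versus the previous period; the real product's analytics are certainly more sophisticated.

```python
# Illustrative period-over-period trend check: flag a source when its
# event count jumps well above the previous period, even if no single
# event stands out on its own. The 2x ratio is an arbitrary example.
def trend_alert(current: int, previous: int, ratio: float = 2.0) -> bool:
    """Return True when the current period's count warrants attention."""
    if previous == 0:
        # Any activity where there was none before is worth a look.
        return current > 0
    return current / previous >= ratio
```

A jump from 100 authority failures last week to 500 this week trips the alert, while normal week-to-week noise does not.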
Of course, sorting and filtering this based on the different criticalities gives us a deeper view, as well. If I zoom in, you can see here that all of a sudden we have antivirus activity, but the red entries are the incidents. That's what we're primarily focused on. As a security person, I have to deal with those incidents before I deal with general notifications, and this is going to help me prioritize my actions and my time.
We can, of course, drill down into a very data-centric screen, like this one, but only after a lot of filtering, correlation, and translation has already been processed, so much of that data has already been narrowed down. So, no matter where these events come from, they're going to be presented in a common way. It's going to be much simpler to understand. And, for any of these threats or highlighted events, I can now filter by, maybe, the reviewer. I can add notes and links to demonstrate that this particular threat was analyzed, reviewed, and eventually closed, or escalated to somebody else.
Leveraging Powertech Event Manager and Powertech SIEM Agent for IBM i
The summary here is the ability to normalize data from disparate sources, prioritize those events so that we focus on the most important ones first. We're doing all of this in real time. We're streamlining the process of incident response. And we're building that audit trail if you have a need under regulatory compliance. And the best part for me as an IBM i person is that it's fully integrated with the SIEM Agent for IBM i.
Now, I said don't tune out if you're already running an existing SIEM. Sure, can we replace that SIEM? There's a very good chance that we can. But if you're happy with the SIEM that you have, I totally get it.
I'm not looking to re-invent the wheel, but one of the challenges I've heard from a lot of organizations, especially larger organizations that maybe have a separate security team, is that by flicking these events over to the SIEM, the local IBM i administrators sometimes lose visibility of what's happening on their own system. And as we've already talked about, it's a unique, proprietary system in many ways, so if you don't have deep IBM i knowledge, you're probably not going to process those events well.
One of the great things we can do with Powertech Event Manager is actually augment what you're doing with your enterprise SIEM. We can deploy Event Manager as a platform-specific SIEM, perhaps just for IBM i, and the SIEM Agent can send its events to both places concurrently. So, your enterprise SIEM gets the events for correlation and archiving, but your platform-specific SIEM gets them in a form that is visible perhaps just to the IBM i team.
About HelpSystems’ Comprehensive Security Portfolio
This particular component, actually these two components, come under the Security and Integrity Monitoring aspect of our portfolio. The Powertech Event Manager does have a freemium version, so depending on what you want to monitor and how many of them there are, there may be a way to get into this at little to no cost. It's not a stripped-down version. It is full-blown. However, it doesn't necessarily have technical support associated with it, but it's a way to get started.
If you want support, if you want to deploy it as a more comprehensive solution across your network, of course, you can do that. But then there's a cost associated with it. But what a great way to get started.
From a security perspective on power systems, you guys probably know us well in this space already, but I want to just remind you, that not only do we have a ton of security software solutions, but we also have a number of security services. So not only do we know our own products, as you would expect, but we're also experts within the OS, as well. So, we can help ensure that that foundation is robust, and that what you're doing with auditing and your profile configuration, and your public and private permissions is appropriate for the type of requirements that you have.
Leverage HelpSystems in both of those areas.
If you're not really sure where to start, or the security discussion is somewhat new to you, I offer that the best way to start is with a free Security Scan. We can do this on your system with some automation. This is just one page, the summary page as you can see, and there are about 12 or 13 additional pages that come with it. It runs against your system typically within about 30 seconds, it runs extremely quickly, there's no cost for the use of the scan tool, and we'll even sit down with an expert and review the line items on the report, explain exactly what they mean, and help you understand them, and we provide that expertise at no cost, as well. So, what a great way to get a good, comprehensive initial understanding of where your system is today, and perhaps some areas where you could see improvement. You're welcome to take advantage of that, as well.
I think now is a good time. We have a few minutes here to answer any questions that you may have, and you're welcome to use the question panel to do that. I know you guys are shy, so feel free to send those questions after the fact, if you like, but please feel free to ask those questions. If you're thinking it, I'm sure somebody else is, as well. If you are typing a question in, feel free to continue to do that. I'm going to bounce out a quick poll here to see if you would like to see some of this in action.
I have set it up so that it's multiple choice, so you can choose if you'd like to see more about the SIEM Agent, if you already have a SIEM; if you'd like to see the Event Manager SIEM, as well, you can do that; and if you're interested in a Security Scan, you're welcome to check that box, too.
The Security Scan side of things here can always be run. So, if you say No now, and then you think a week later, oh, we should probably do that, just reach out to us. You can go out to HelpSystems.com and request it. You may have to fill out a form, so if you have any interest at all, check the box here, we’ll get you that info. You're not committed to it, we'll make sure you have the correct answers to your questions about what it is, how it works, what it does, and then, if you're still interested, we'll coordinate the use of that scan, as well.
All right, so I have a couple of questions coming in here. It looks like one is just looking for clarification between Event Manager and SIEM Agent. OK, so the Powertech SIEM Agent for IBM i is the middleman. It's going to allow the IBM i operating system to speak in that real-time mode. So, if you're choosing to use it to communicate to the SIEM as opposed to a message queue or something, then the middleman is, in essence, translating those IBM i events as they occur, from the audit journal, from the system message queue, and other sources, putting them into that standard language, and then firing them at your SIEM. And if you have an existing SIEM, that's really all you need. If you don't have a SIEM, or if you're interested in the idea of your IBM i people having their own kind of SIEM, then we would combine Powertech SIEM Agent for IBM i with Powertech Event Manager.
If you were running Windows and other platforms and had no IBM i, of course you don't need the SIEM Agent, you would just need Event Manager, but because we're talking predominantly here to an IBM i audience, presumably, you would be interested in making that system talk, then the SIEM Agent is going to be a common denominator across that. Hopefully, that answers your question.
The other one is, how long does the Security Scan take? Well, as I mentioned, it's about 30 seconds. It really is 30 to 60 seconds to actually collect the data and format and build out that report. It's a browser-based facility, so you don't need to give us access to anything. It's a Windows tool that you run off your PC. We provide about an hour's worth of professional service time, also for free, in order to accomplish that. So, the first 30 to 60 seconds, or you can run it ahead of time if you want to, and then we'll spend approximately an hour with your team, explaining to you exactly what we find. You have an opportunity, then, to maybe paint some color inside the lines as to how your application works, or what you have done with security, and it's a great expert Q&A session that uses the scan as its discussion foundation. You are welcome to take advantage of that anytime. If you have done one before but it's been at least a year, I would encourage you to run a new one.
Operating system upgrades typically don't make any difference. Box swaps typically don't make any difference, the reason being that most people migrate their entire system from old to new. So, in that instance, if you have a pending V7R4 upgrade coming and you think, I'll run the scan afterwards, it really doesn't make any difference. You can run it now, because your new system will most likely look exactly the same.
I’m going to close that poll. And it looks like you guys have all answered that, so I appreciate it.
As I did mention, I recorded the session. After the event, you should get a follow up e-mail. I believe that has a link to the recording in it. Feel free to re-listen or share that with anybody that you wish.
And if you do have any questions around this that you didn't think of before, please feel free to fire those to us afterwards.
I think that's everything. I appreciate everybody's time today. I hope you guys are keeping safe, and I look forward to talking to you on an upcoming webinar in the future. Take care, everyone. Bye-bye.