A Principal Control Engineer’s Perspective on Defending Energy Utilities from IoT/ICS Attacks


The grid runs everything, from manufacturing to finance, communications, transportation, water, and hospital networks. At the same time, energy utilities worldwide are under continuous attack from sophisticated adversaries including nation-states and organized crime. As the industry undergoes digital transformation and deploys unmanaged IoT/ICS devices, the attack surface is increasing — and so is the business risk.

In this educational webinar featuring Hank Sierk, Principal Controls Engineer with 30+ years of experience in the energy utility industry (now retired), we cover key topics including:

• Key areas of cyber risk including legacy equipment, network design issues, and credential management.
• Why continuous security monitoring is required to address both real-time security needs and operational requirements, such as identifying malfunctioning or misconfigured equipment.
• Streamlining NERC-CIP compliance with automated asset discovery.
• Building multi-layer defenses at both the network and endpoint layers.
• Eliminating IT/OT security silos and why NOC/SOC integration is required.
This webinar is designed to inform attendees about emerging threats and arm them with the knowledge to implement sound security practices that reduce cyber risk. Sierk will draw on what he has learned over his 38-year career, during which he witnessed firsthand the digital transformation of energy utilities and the rise of cyberattacks against industrial and critical infrastructure environments.
About Hank Sierk, Principal Control Engineer (retired)
Henry (Hank) Sierk recently retired after a 38-year career in the energy utility industry. During that time he was responsible for setting the technical direction for a group of engineers performing control system projects of various types; acting as a subject matter expert regarding industry standards; working to actively maintain an overall corporate strategy to maximize the financial benefits derived from control systems; and addressing the need for control system security. He was previously a Power Production Engineer at Pennsylvania Power & Light. Hank is a licensed Professional Engineer (PE) in the Commonwealth of Virginia and holds a BSEE from the New Jersey Institute of Technology (NJIT).


Phil Neray:      

Good afternoon or good morning, wherever you’re located, and welcome to another SANS webinar sponsored by CyberX. Today we have Hank Sierk, a Principal Control Engineer with a ton of experience working in an energy utility. He’s going to talk, based on his personal experiences, about why he chose to strengthen ICS security in his environment and what best practices he’s developed over the years to ensure tight security in that environment. With that, I’ll pass it on to Hank. Hank, it’s all yours.

Hank Sierk:       

Hello everybody. It’s a pleasure to be able to share with you. I’m going to show a slide that tells a little bit about my background. I’ve been a control system engineer for probably 40 years in the electric utility business. While I was doing that, we designed control systems and graphics to retrofit power plants. We performed startup and commissioning, and tuned controls.

I was involved in implementing some of the early control of medium and low voltage switchgear through DCS, and my company was doing a five-year look-ahead on what to do in terms of upgrades and improvements at our plants. In the last five or six years, I’ve gotten involved in the cybersecurity end of things, and during that period of time I was able to develop a strategy and implement that strategy at a number of plants.

The question would be, why do we care about ICS/IoT security? I think it’s a fairly loaded question. There is a fair amount of legislative interest in this particular issue. The NERC CIP requirements require us to specify consistent and sustainable security management controls that establish responsibility and accountability to protect the bulk electric system against compromise that could lead to misoperation and instability in the bulk electric system.

That’s a lot of words, but basically what it’s saying is the government is very concerned about cyber incidents affecting the reliability of the bulk electric system. In the picture you can see the East Coast blackout from 2003 in a satellite view. That blackout, which lasted roughly two days, cost the American economy between $7 and $10 billion. That’s fairly major. And then of course the Fukushima issue, which was primarily related to loss of power, affected a huge area in Japan.

The nuclear industry is certainly sensitive to loss of power. There is the opportunity, or negative opportunity, for potential damage to the environment, and obviously company financial risk. What I try to do is look at these things from different angles. Why is monitoring necessary? I look from a strategic standpoint and I say there are certain things that you may not want to integrate into your ICS. I’ve given some examples.

These are generally support systems, and for the support systems, if there’s information that’s wanted in the ICS, it’s probably a good idea to hardwire it and not network-connect these things. The prime obvious example was the Ukraine issue in 2015, where one of the things that the attackers did was gain control of the uninterruptible power system. After they shut down power by opening some breakers, they also killed power to the control center by turning off the UPS remotely.

These are things that we want to be careful that we don’t do. Another area, strategically, is static accounts. There are a lot of accounts on control systems that are generally set up and forgotten. Those accounts have to have some kind of a password. There’s a piece of software called Mimikatz that allows any user to query Active Directory for usernames and password hashes.

If a password stays consistently the same, and that password has elevated privileges, someone could either crack that hash or use the hash itself in a pass-the-hash attack. Time servers are actually a connection to a wireless network. If a time server is connected to your control systems, it should be allowed to pass time and nothing else. From a functional standpoint, why is monitoring necessary? Well, control systems typically support only certain OS versions.

Another major issue with control systems is they tend to be multi-homed. Each machine may sit on multiple networks. This creates an issue because the Windows operating system was never really intended to be a router, but it could become a router and do things that you don’t intend it to do. Domain group policies are critical. Sometimes switch configurations on ICSs are almost mindless: managed switches just pulled out of the box.

A major issue with control system communication is that the protocols are generally known and they’re not encrypted as they pass from peer to peer. That allows someone, if they were able to get on the network, to either modify that traffic or interrupt it in some fashion or another. I’ve got to love it. I’ve been told a number of times, when vendors install certain systems, “These passwords cannot be changed.”

If you get into a situation like that, you need to go head-to-head with your vendor, because if you can’t change your password you can’t remain secure. ICS software isn’t really tested for security purposes; it’s primarily tested for operation. From an equipment standpoint, why would we want to monitor? Well, ICS equipment tends to always be behind the curve.

The reason for this is the development teams for ICS vendors are typically small compared to commercial off-the-shelf software. The effort involved in developing against a new operating system and integrating it into their control platform is fairly heavy, and they typically tend to lag by one update cycle. It’s not uncommon to see a vendor supporting its latest control system on the previous major Microsoft offering.

We as control engineers asked for commodity hardware. There were years, a long time ago when I started in this industry, when most of the control equipment was custom built for a function. We wanted commodity hardware because it would make those devices priced more reasonably. Then with that come these problems: these machines are now generally Windows or Linux machines.

Another issue is that control systems typically are physically distributed. That’s an advantage from the standpoint of actually connecting wires to these things and getting them installed, but it’s a disadvantage in terms of trying to secure them. From a design standpoint, most engineering staffs are somewhat limited in size. We’re trying to support a lot of equipment that’s remotely located.

If we don’t want to spend all of our time on the road, we need some mechanism to remotely support these sites. We also typically have, on control systems, connections to third-party systems like scales or a variety of external non-control-system-centric systems that might be connected either by serial or Ethernet. We also have enterprise network connections, sometimes for work order management or cost tracking. Sometimes historians, environmental reporting.

I hope not, but some sites seem to have e-mail and internet on the control system. Probably a bad choice. I want to address leadership. Leadership is a potential issue related to the fact that we want to think that control systems are built as simply as they were in years past. The control systems are layered on top of a very rich Microsoft operating system for the most part. We expect technicians to be able to manage these things, but we don’t want to give them IT-based training.

They’re not really sure how to get security set up in a lot of cases. The easiest answer is to make people administrators. There is also an issue with bringing contractors on site. Their hardware is a big concern. Background checks: some vendors will not allow their people to be background checked and supervised. There’s a general thought process, I’ve bumped into it a lot of times in the industry, where people say, “Well, the support engineers are safe because if they break it, they’re going to have to fix it.”

That’s probably not the best conclusion. We need to have secure transient asset policies. Another area of concern that I’ve seen is the OEM. We love to call them spy boxes. If you have an engine, an advanced gas turbine engine or something like that, that engine is typically supervised or monitored by the OEM. They put in a system that keeps an eye on it and makes sure that you’re operating it as it was originally designed to operate.

That’s all well and good, but some of these systems are installed with intelligent KVM switches and other mechanisms that often don’t get on the drawings but could allow the vendor to be able to switch what he’s connected to and perform functions that you don’t intend for him to perform. I’ve seen this in practice. Passwords need to be complex. I talked about that. Another area is what I call ecosystem personnel.

People that come into the control system secure area to work on things like HVAC, uninterruptible power supplies, physical security, cleaning and shredding paper. Any one of a number of different things. These people often are allowed access, and they’re generally not managed when they’re in these areas, and that’s a security issue. From a maintenance standpoint, some people have tried to roll their own operating system patches.

The vendors test the patches generally for control systems and they don’t necessarily allow every patch to be installed. Some patches will break the control system. We should be aware of which patches are allowed and which ones are not. It’s a good idea to work with the OEM to install patches. Don’t try and do it alone. We would also like to stay up to date with operating systems, but the OEM software that’s layered on top of those operating systems makes it very costly a lot of times to upgrade.

If we can’t manage the upgrade, we need to make sure that we’re securing everything around those soft targets. There’s a maintenance burden associated with updating antivirus files. Antivirus is very time sensitive. There are updates that come out every day. I don’t know of a lot of people who are able to get an antivirus update onto their ICS every day. There’s a difficulty in making and testing backups.

A lot of control system hardware is tied to physical keys or to physical machine parameters for licensing purposes. It makes testing of backups or spinning up those backups difficult in an offline environment. We need to work with our vendors on how we can do this. In terms of boundary defenses, it’s really important that files coming into the environment are clean. This would involve operating system files, but it would also involve vendor files.

I’ve seen a number of vendors who offer their files on a website and they post the hashes for those files on the website as well. Well, if someone manages to attack that website and change the content on that website, I would think they’d be able to change the hash so it matches the changed content. That hash needs to come by an out-of-band method, and not too many vendors are doing that.
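The point about not trusting a hash hosted alongside the file itself can be sketched in a few lines. This is a generic illustration of the verification step, not any specific vendor’s tooling; the expected hash is assumed to have arrived out of band (over the phone, in a signed message, on paper):

```python
import hashlib
import hmac

def sha256_of_file(path):
    """Compute the SHA-256 digest of a file, reading in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_download(path, expected_hex):
    """Compare the file's hash against a value obtained out of band,
    NOT against a hash published on the same website as the file."""
    return hmac.compare_digest(sha256_of_file(path).lower(), expected_hex.lower())
```

If the website is compromised and both the file and its posted hash are swapped, this check still fails, because the comparison value never came from that website.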

Other issues: we need to have hardened gold-standard images, particularly for domain controllers, and if possible moving on to the engineering machines and operating machines. The security FAT (factory acceptance test). People are starting to do this nowadays; it was never part of the history that I’m familiar with. The operating system after that FAT should be up to date with all the patches allowed by the OEM.

You should know what patches were not installed and why. Only the software that’s absolutely required should be installed. It should be known and documented what ports are in use by the ICS. We need to work with vendors to engineer around multi-homed machines. They’re not there yet, but that’s a coming thing. Network switches need to be hardened. They need to be domain members.

We shouldn’t let these systems leave the factory without a span port configured that allows us, as owners, to monitor traffic. Any third-party interfaces should be firewalled, and security logs from the machines need to be forwarded to some kind of a SIEM. That should be part of our FAT. Unused switch ports: I used to think it was okay to leave these unlocked and just monitor.

I’ve come to the perspective that prevention is better than cure, particularly from the standpoint of how quickly someone who gets on our network can do damage. The ICS and IoT system architecture drawings need to be protected. That’s not always the case; sometimes they show up on corporate CAD systems. Firewall rules: it’s a really good idea to have somebody besides just the person who wrote them look them over.

Red teaming. That’s a direction we need to move in. Best-in-class people are doing that. I want to talk a little bit about the operational benefits of continuous network monitoring. When I started with this I didn’t really think it was going to be a very important function and I considered it optional. I don’t anymore. I think it’s probably essential, and it’s very beneficial.

One of the things it does is it helps the engineers to understand the ICS traffic. We all have an idea of how our ICSs work, but when we actually see it in reality, that clarifies that perspective for us and we know what machine talks to what and for what purposes. That’s a good thing to know. In every instance where we’ve monitored network traffic we’ve found undocumented devices on the network.

Things get installed. They don’t always get on drawings and you can’t defend what you don’t know you have. We often find misconfigured equipment. If you see a machine sitting out there requesting DHCP on a control system network, it’s probably not supposed to be doing that. Most ICS networks are fixed IPs. If something’s reaching out for DNS root hints, it’s going to be attempting to go to the internet.

Hopefully a firewall will be blocking that, but it shouldn’t be doing that. Obviously IPv6 is not generally in use on ICS networks. You can identify failed backups. SMB connections will show up, and if they fail to authenticate, that will also be identified. You might find protocols enabled that you’re really wondering why they’re enabled. For instance, NetBIOS, SNMP, IPX. Any of those that are there and not in use are an attack vector.
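Hank’s examples of traffic that shouldn’t appear on a fixed-IP ICS segment lend themselves to a simple rule check. A minimal sketch, assuming some capture tool has already parsed packets into dicts with `src` and `proto` fields (the field names and rule list here are illustrative, not from any particular product):

```python
# Protocols that generally should not appear on a fixed-IP ICS segment,
# with the reason each one is suspicious.
SUSPECT = {
    "DHCP": "fixed-IP network should not see DHCP requests",
    "DNS": "root-hint lookups suggest a host trying to reach the internet",
    "IPv6": "IPv6 is not generally in use on ICS networks",
    "NetBIOS": "legacy protocol; an attack vector if not actually in use",
}

def flag_packets(packets):
    """packets: iterable of dicts like {'src': '10.0.0.5', 'proto': 'DHCP'}.
    Returns (source, protocol, reason) tuples for suspect traffic."""
    alerts = []
    for p in packets:
        reason = SUSPECT.get(p["proto"])
        if reason:
            alerts.append((p["src"], p["proto"], reason))
    return alerts
```

A real deployment would feed this from a span port and forward the alerts to the SIEM rather than returning a list.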

You want to show failed connection attempts or bad register addresses. Some of the network monitoring tools now can dig down and identify the fact that you’re requesting a Modbus register that doesn’t exist or an OPC tag that’s incorrect or a DNP address that’s incorrect. That could be just a misconfigured device or it could be somebody poking around in your network to see what they can do. It can help clean up traffic that speeds up updates on HMIs.
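Bad register addresses are visible on the wire because Modbus devices answer them with exception responses: the echoed function code with its high bit set, followed by an exception code (0x02 is "illegal data address"). A sketch of spotting these in already-decoded Modbus PDUs, as a monitoring tool might:

```python
# Modbus exception codes relevant to probing (per the Modbus spec).
MODBUS_EXCEPTIONS = {
    0x01: "illegal function",
    0x02: "illegal data address",
    0x03: "illegal data value",
}

def check_modbus_response(pdu: bytes):
    """pdu: a Modbus response PDU (function code byte followed by data).
    Returns an alert string for exception responses, else None."""
    func = pdu[0]
    if func & 0x80:  # exception responses set the high bit of the function code
        code = pdu[1]
        name = MODBUS_EXCEPTIONS.get(code, f"exception code {code}")
        return f"exception on function {func & 0x7F}: {name}"
    return None
```

A burst of "illegal data address" exceptions could be a misconfigured device, or, as Hank says, somebody poking around to see what they can do.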

That always makes operators happy. You can identify switch misconfigurations. If you expect the redundancy to work a certain way and you try it and you don’t see the traffic switch over, your network monitoring tool will show you that. We often find plain-text passwords in various configurations, most often SNMP and FTP, but sometimes in other applications. Network monitoring can also identify whenever a controller download occurs.

Downloads to the controller should be supervised and managed through the management-of-change process. If they happen outside of something that’s expected, that’s a threat. If we learn what normal looks like on the network, we can do a lot toward finding out if something’s changed that’s problematic. I’m going to try to roll quickly through some ideas about a multilayered security strategy.
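Learning "what normal looks like" can be as simple as recording which conversations occur during a known-good period and flagging anything new afterward. A toy sketch over (source, destination, protocol) tuples; real tools build far richer baselines, but the idea is the same:

```python
def learn_baseline(flows):
    """Record every conversation seen during a known-good learning period.
    flows: iterable of (src, dst, proto) tuples."""
    return set(flows)

def detect_deviations(baseline, flows):
    """Return conversations that never appeared during the learning period."""
    return sorted(set(flows) - baseline)
```

A never-before-seen host talking Modbus to a PLC would surface immediately, which is exactly the "something's changed that's problematic" signal.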

I’m not going to read all these. You can see it on the slide. There are things that I think most people that are serious about cybersecurity know about. From the standpoint of knowing your network, these are the pieces of information you should have. They’re not very difficult to have particularly if you’re monitoring network traffic. But like I said before, you can’t secure what you don’t know you have.

It’s also important to have accurate logical and physical maps: logical showing your information flow, physical showing your actual connections. These are hard to keep up because systems keep getting changed, but without them, the ability to troubleshoot and to identify if something abnormal is happening in your network is very limited. Software inventory. There should be tools to get a software inventory.

Having an up-to-date inventory is essential. Hopefully coming out of a FAT you’d have that, but at least if you’re in an operating state you can find these things out. Domain controllers. I’m going to run through some of these: regular password changes and security requirements; the idea of a separate group policy and credentials for domain updates; managing network switch credentials as domain members.

Any changes to the admin group should be sent to your SIEM. If anyone’s being added or deleted from an admin group on a domain controller, that’s a serious concern unless you know about it. Severely limit access to the domain controllers. When I say severely, I mean no one should be in a domain controller unless they absolutely have to be. The domain account, same deal. The principle of least privilege needs to be followed both here and on operator machines.
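The alert on admin-group changes amounts to diffing membership snapshots. How the snapshots are pulled (e.g. from Active Directory) is environment-specific, so that part is left out of this sketch:

```python
def diff_admin_group(previous, current):
    """Compare two snapshots of an admin group's membership and build
    SIEM-style alert lines for every addition and removal."""
    added = set(current) - set(previous)
    removed = set(previous) - set(current)
    alerts = [f"ALERT: {u} added to admin group" for u in sorted(added)]
    alerts += [f"ALERT: {u} removed from admin group" for u in sorted(removed)]
    return alerts
```

Run on a schedule and forwarded to the SIEM, this makes an unexpected admin-group change visible within one polling interval.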

Endpoints is an IT term, but here it means any computers that people use directly, as opposed to your backend machines running the controllers as such. I would recommend whitelisting over antivirus. The update requirements for most antivirus are too heavy. People can’t keep up with it, and if you don’t keep up with it, it’s not really effective. Antivirus can kill valid ICS files. Whitelisting is more intrusive and more difficult to initially set up, but it’s more beneficial long-term and it doesn’t need regular updates.

Secure boot. That’s a new feature of a lot of hardware, and there’s a reason it’s coming on. Software and hardware inventory: it’s hard to do this, but it’s definitely useful to remove unused apps. Any app that’s on there and not monitored and updated is potentially a threat. Regular backups. Another area I wanted to discuss a little bit is group access accounts. It’s not uncommon to have group access accounts like operator or a technician maintenance person.

These are common accounts. There’s no individual accountability, and it becomes difficult to change the passwords because there are a lot of people involved. It’s not a good practice. And everybody’s network cabinet looks like this picture; I should be hearing some laughs in the back room. We want to have hardened switch configurations. At the very least: disable Telnet, remove universal passwords, log switch events, disable local terminal access, set up access control lists, set reasonable limits on password retries, and make sure the clocks are synchronized on the switches so that events have meaning. Those ought to be basics that happen on all the switches. And then there are obviously other vendor-specific requirements. The vendors don’t generally allow you to go to the latest and greatest firmware; find out what is the latest stable firmware they support, upgrade your switches to that, and stay with that for the ICS.
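Several of those basics can be audited mechanically against a switch’s saved configuration. A rough sketch; the IOS-style directive strings below are illustrative placeholders, and a real audit would follow the switch vendor’s own hardening guide:

```python
# Lines that must NOT appear, and lines that must appear, in the config.
# These example directives are illustrative, not a complete hardening list.
REQUIRED_ABSENT = ["transport input telnet"]        # Telnet must be disabled
REQUIRED_PRESENT = ["ntp server", "logging host"]   # clock sync and event logging

def audit_switch_config(config_text):
    """Very rough audit of a switch config against the basics above.
    Returns a list of findings; an empty list means nothing was flagged."""
    findings = []
    lower = config_text.lower()
    for bad in REQUIRED_ABSENT:
        if bad in lower:
            findings.append(f"found forbidden line: {bad}")
    for good in REQUIRED_PRESENT:
        if good not in lower:
            findings.append(f"missing expected line: {good}")
    return findings
```

Running a check like this across every switch turns the "basics on all the switches" point into something repeatable rather than a one-time setup task.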

I recommend that you monitor all networks on all switches. That sounds heavy-handed. But if someone gets into one of your networks at a low level, and maybe there is a controller and an HMI on that network, someone can get in there and gain access. If all you’re monitoring is the root switches, you’ll never see them until they actually make their move. Shut unused ports. Forward switch events to the SIEM.

Where vendors have been tying their machines to multiple networks, we need to start working with the OEMs to use single or teamed network interfaces and run those through firewalls rather than using the machine as a router. This has been common practice for a lot of years, but it’s a bad practice that needs to change. Any file transfers, new devices on the network, or remote procedure calls should be alerted.

And if possible, store pcaps on your root switches. If you can’t store pcaps, store at least the metadata for pcaps. There’s some information on how to do that out on the web. We allow remote access because we need to for business purposes, but that remote access should only be allowed to go to specific machines and it should be regulated down to involve changing the setup of individual machines and the firewall.
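Storing just the metadata for pcaps means reducing packets to flow records. A sketch of that aggregation, again assuming capture records have already been parsed into dicts (field names here are assumptions for illustration):

```python
def flows_to_metadata(packets):
    """Collapse packet records into flow metadata (5-tuple -> total bytes),
    which is far cheaper to retain than full packet captures while still
    supporting after-the-fact investigation of who talked to whom."""
    flows = {}
    for p in packets:
        key = (p["src"], p["sport"], p["dst"], p["dport"], p["proto"])
        flows[key] = flows.get(key, 0) + p["length"]
    return flows
```

Flow records like these answer "did host X ever talk to host Y, and how much?" long after the raw packets would have had to be discarded for space.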

From the standpoint of a SIEM, anytime anyone comes from the enterprise network into your ICS, it should show up in the SIEM. There should never be a time that someone comes in from outside without the people responsible for the control network knowing that they’re there. Use multifactor authentication. We use it with banking. I think anybody who’s got an online banking account sure doesn’t want somebody getting in there or buying stuff on eBay without their permission.

We need to do this for getting into our control systems. And believe it or not, there are still dial-up modems. You see them in cabinets occasionally. They’re sitting there, old as the hills, dust on them, still working and still connected to phone lines. They need to go away. This picture is frightening. We should have regular backups of our ICS computers stored locally and off site.

I already talked about the problems associated with testing backups. We need to work with vendors on how to do this. We should alert the SIEM if the backups fail or if the disc is full. Transient assets, one of my favorite worries. There should be a secure configuration. I would suggest keeping these machines all Microsoft. Use Microsoft AppLocker and BitLocker. Enforce a domain group policy on these things.

Anytime they connect into the secure network, that domain policy should be enforced and the machine should be updated. Minimize any third-party software to just what you need to have, update it regularly, and scan these machines with up-to-date antivirus before they get put back in service. A lot of vendors provide their install files as encrypted files. We need to work with the vendors to avoid this.

Encrypted files can’t be scanned. We need to get files that actually can be scanned. On these machines we want to physically remove the wireless card. They should never go on a wireless network. Replace them regularly. My comment is: laptops in production plants grow software like seagoing ships grow anchors. That’s the truth. Every time some new piece of instrumentation comes in, a new piece of software gets installed.

Even if you uninstall it, they leave pieces of the software behind. All of those DLLs are there and are potentially threats. It’s a good idea every once in a while to just replace the machine, start over, and see what absolutely needs to be installed on the machine. Outside vendor transient assets: I would say avoid these at all reasonable costs. You really don’t want vendors coming in with their laptops and connecting them to your control system if there’s any way you can avoid it. If not, I would suggest removing the hard drive and scanning it with an offline tool, or using a non-Windows bootable disk to scan. There are a lot of viruses running around now that can spoof an antivirus program and indicate to the user that the machine is clean when it’s indeed not. I would suggest, depending on the importance of the system you’re connecting it to, validating clean by multiple methods, as many as you think are appropriate.

Once it’s certified, don’t let it leave. Put it in a secure place; that’s why the picture of the safe is there. Don’t let it leave the site. If it leaves, consider it dirty. Again, you’ve got to start over. We all have interfaces to foreign devices, and we love all these intelligent protocols to connect things to our ICS. We need to make sure that when we connect this way, the data that flows is only the data that we engineered the system for and nothing else.

Like I said earlier, we want to make sure that critical support equipment is hardwired rather than interfaced. If it is interfaced, maybe we need to go back and undo that. Firewall any wireless communication. Consider anything wireless a risk and monitor all of this traffic. The firewall alerts should go to the SIEM. This is an interesting one: if there’s any period of time where communication is lost, that could be someone breaking into that stream and injecting something.

It could be a simple failure, but it could potentially be an attack. That should be alerted. Same with misconfigured points. In terms of firewalls, it’s important to implement two-layer next-gen firewalls between the ICS and the business enterprise network. Each firewall should have a DMZ, and nothing should cross both of those firewalls without going through some interim machine.

That’s generally good practice, and that ought to be the only practice we allow in these kinds of environments. Where we’re connecting to foreign devices, as I just talked about, make sure the firewalls allow only the communication traffic. Don’t allow programming and things like that through these firewalls. If you need to remotely program, provide an out-of-band method for doing that.

Firewall communication links between different ICSs. A lot of recent plants have an engine control system and then a balance of plant control system. A lot of times they are different systems and they communicate between the two of them. If one of those systems becomes compromised, you sure don’t want to have double trouble by allowing whatever the problem is to jump through an interface link to the adjacent system and create a problem there as well.

These links need to be firewalled. Make sure the time server is not a common compromise point: that it’s not connected to both systems, or multiple systems. Because then, if someone owns that device, they own all the systems. And again, get someone to look over your shoulder on firewall rules. If you allow ping while you’re setting up a security system, once the system is stable, disable the ping rules.

Ping can be used as a compromise path as well. Make sure logic and configurations that you have on the systems are encrypted and protected. Control access to this data. Control access to network drawings; don’t leave them lying around. Use only encrypted USB devices, and treat wireless devices as connected to an untrusted network. Make sure you don’t have people charging cell phones on your control system machines.

Printers are computers. They have memory, a CPU, often a hard drive, and an operating system. They can be used as an attack vector. When you look at NOC and SOC integration, these teams have their hands full. They don’t generally understand OT. It’s a good idea to craft the alerts that go to the NOC in such a way that they understand what they mean and know what to do with them.

That’s really what this slide is about. Realize that the more data you send to your NOC or SOC, the more that information is available on an enterprise network. If someone gains access to that information stream, and it’s not encrypted, they could learn things about your control system that you don’t want them to know. Consider storing pcaps. I talked about that.

Relay servers are the way to go through the enterprise edge firewalls. Moving toward the end here: educating OT personnel. This slide is a little tongue in cheek, but the administrator account is not your friend. Avoid it unless you absolutely have to use it. There is no free lunch. Easy is not usually best. When it comes to setting rights for users, even though it takes a lot of work to set them up properly, it’s worth doing.

Clean and dirty: understand what it means and implement rules to take care of that. Monitor outsiders, people who don’t normally live in your control environment. Make sure somebody is watching what they’re doing when they’re in there. Uncontrolled trash: we all think about this from the standpoint of physical accidents, but uncontrolled software that’s just left on a machine is a potential opening for an attack.

Logging into something and then walking away: idle applications are the devil’s workshop. Someone can come along behind you and have access. The old car wasn’t really better. I had a ’55 Bird many years ago, and that car was really neat looking, but it was a terrible automobile. The new cars are much better. The old Windows XP machine that you have, that’s been running real good for a long time, really needs to be replaced.

If it isn’t physically secure, it’s not secure at all. I think we know this, but sometimes cabinets get unlocked and left that way. It’s important that IT and OT work together. Their primary approaches are different, but their objectives are the same. IT is more invasive. They can put a lot of different applications on each endpoint. That can have a performance hit, and in IT that’s not really that important.

In OT, you really can’t do that and the vendor only allows certain applications. Performance is really crucial on OT machines. In IT, functionality is desirable but security is supreme. In the OT environment safety is supreme. Functionality equates to production and unfortunately in a lot of cases security is after that. We both have shared goals, safety, reliability, security, production, open data flow, minimal failures.

We need to look for opportunities to pull these groups together and learn from them; they’re more advanced than OT people are in security. Get CISO sponsorship. That’s where the money’s going to come from. It’s a really good idea to cross-train individuals in both directions. These are some takeaways. Take action: that’s obvious. We can’t put our heads in the sand. This problem’s not going to go away; it’s getting worse every day. Strong domain controllers: that’s really essential. Network monitoring: that’s where you’re going to find out what normal is. Strong passwords: you really need to be very honed in on this; the John123 password ain’t going to cut it today. Up-to-date software.

There's a cost, but there's also a cost to getting hacked, and the cost of getting hacked has been really huge in recent times. Tested backups. Secure transient assets: that's a big threat, and you really need to work with your vendors on it. We need to figure out how to buy these systems secure on delivery, not in the middle of an event. Okay, I'm done.

Phil Neray:

Great. Thank you very much, Hank. We're going to come back to Hank for questions and answers in a second. What I want to do is wrap up the remaining 15 minutes, before we get to the questions, with some additional content that will help you learn more about best practices for industrial security. I'm going to start with a report that we published earlier this year called the Global ICS and IoT Risk Report.

It's based on an analysis of over 850 production networks that we conducted with our passive, noninvasive monitoring, analyzing the data after anonymizing it. Probably some of these findings will not be surprising to you if you're in the ICS space. Certainly Hank talked about some of these things.

The fact that for a long time, vendors didn't allow installation of antivirus. We see here that more than half of the sites we analyzed were not running any antivirus. This idea that there's an air gap and therefore there's no need to implement additional security controls is really a myth. We found that 40% of sites had live internet connections. There are even more sites that have connections between IT and OT, especially as industrial internet of things and digitalization or Industry 4.0 initiatives take root.

More and more devices are going to be in those environments, and they're going to need not only internet connections, but connections to the corporate network to gather that real-time information. Then there's the fact that more than half of the sites have Windows boxes that are no longer supported. The 1955 Thunderbird that Hank mentioned a second ago. And then finally, more than two thirds are running plain-text passwords.
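The plain-text password finding can be illustrated with a toy detector. Legacy protocols like FTP and Telnet carry login commands in the clear, so a passive monitor scanning payloads can flag them. This is a simplification for illustration: the marker list and function name are assumptions, and a real platform would reassemble full TCP streams and parse each protocol properly.

```python
# Byte markers typical of cleartext logins (FTP "USER"/"PASS", Telnet-style
# "LOGIN"/"PASSWORD" prompts). Purely illustrative, not an exhaustive list.
CLEARTEXT_MARKERS = (b"USER ", b"PASS ", b"LOGIN ", b"PASSWORD")

def find_cleartext_credentials(payload: bytes) -> list:
    """Return the credential markers found in a captured payload, if any."""
    hits = []
    upper = payload.upper()  # case-insensitive match on raw bytes
    for marker in CLEARTEXT_MARKERS:
        if marker in upper:
            hits.append(marker.decode().strip())
    return hits
```

Anything this returns is a sign that credentials crossed the wire unencrypted, which is exactly what the report found at more than two thirds of sites.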

There’s a link there if you want to download the full report or you can just Google CyberX Risk Report and you should be able to find it. The second report I want to direct you to is an executive summary of a report that was published by NIST earlier this year. It was focused on manufacturing industrial control systems, but its conclusions would apply to any industrial control environment.

What they looked at specifically were the benefits of behavioral anomaly detection. When Hank said know what’s normal and what’s not normal in your environment, that’s the core principle behind behavioral anomaly detection. It’s monitoring the network in a passive manner. Deciding what is normal, what’s abnormal, and then looking for deviations from that. We have an executive summary of that report.
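The behavioral anomaly detection principle described here, learn what's normal and then flag deviations, can be sketched in a few lines. This is a toy baseline over (source, destination, protocol) conversations; real platforms model far richer features, and the class and field names below are illustrative only.

```python
class BaselineDetector:
    """Toy behavioral anomaly detector: passively learn a baseline of
    observed conversations, then flag anything outside that baseline."""

    def __init__(self):
        self.baseline = set()
        self.learning = True  # start in the passive learning phase

    def observe(self, src: str, dst: str, proto: str) -> bool:
        """Record or check one conversation. Returns True if anomalous."""
        key = (src, dst, proto)
        if self.learning:
            self.baseline.add(key)   # learning phase: everything is "normal"
            return False
        return key not in self.baseline  # detection phase: flag deviations
```

After the learning window closes (`learning = False`), a conversation never seen before, say a new laptop talking Modbus to a PLC, comes back as an anomaly.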

You can find it there at the link or you can just Google CyberX NIST Report. These are some examples of the threat scenarios that NIST used when testing CyberX in their lab environments. They set up two lab environments. One similar to a process control environment you would find in a chemical or a pharmaceutical or food processing environment. And then one that would be more akin to a discrete manufacturing environment such as an auto parts manufacturer.

And then they came up with these scenarios, and you see some of them that Hank already mentioned. You should really know if an unauthorized device is connected to the network. You might have a policy that says you're not allowed to connect an unauthorized device to the network, but if you're not monitoring for that, there's no way of enforcing the policy. We know that a major utility was recently fined for NERC CIP violations, and one of the violations was that employees were connecting non-approved laptops to the network.
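In its simplest form, the unauthorized-device scenario reduces to an allowlist check against discovered hardware addresses. A hedged sketch, with made-up MAC addresses; real enforcement would tie into continuous asset discovery rather than a static set.

```python
# Hypothetical allowlist of approved device MAC addresses (made up).
APPROVED_MACS = {"00:1d:9c:aa:01:02", "00:80:f4:10:20:30"}

def check_device(mac: str, approved: set = APPROVED_MACS) -> str:
    """Return 'ok' for an approved device, or an alert string otherwise."""
    mac = mac.lower()  # normalize, since MACs are often logged in mixed case
    return "ok" if mac in approved else f"ALERT: unapproved device {mac}"
```

The point of the sketch is the one Phil makes: the policy only has teeth if something is actually watching the network and running this check on every newly seen device.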

But you see a bunch of other ones there, including the unauthorized download of PLC logic. What this NIST report includes is some background on why behavioral anomaly detection is a better way to secure your environment than traditional signature-based approaches, and then these 15 scenarios with explanations about why each scenario is important, and then a screenshot from the CyberX platform showing how that type of event would be detected in real time.

Obviously this type of event would typically be forwarded to a SIEM as well. Talking about NIST, many organizations use the CSF, the Cybersecurity Framework, to measure their security maturity. These are some examples of how the CyberX platform, which is a continuous monitoring platform for ICS networks, helps you address the five functions of the NIST CSF, where Identify is about asset discovery and network topology mapping.

This can also be extremely useful for NERC CIP compliance because many of our customers in the energy industry were previously doing asset discovery manually which is a very painstaking task. And so what our platform allows you to do is do it in an automated way, as well as identify unauthorized remote access and the use of weak credentials which is part of the identify category as well.

But you see some of the other ones there in terms of preventing breaches, rapidly detecting breaches, and responding to them. The idea being that in today's world, a sophisticated adversary targeting critical infrastructure will eventually compromise the network. And so the goal becomes not preventing them from ever compromising the network, but rather detecting that compromise as quickly as possible in order to mitigate the risk before they can do any damage.

And then finally, Recover, which is about recovering from a breach. Quickly, about who we are. We are CyberX, an ICS security provider. We've been in business since 2013. Our job is to help you as clients accelerate your digitalization, your Industry 4.0 initiatives, which involve adding new devices and new kinds of activity to your networks, which increases the risk. Our approach is to provide the simplest and most robust solution for reducing the risk that will be introduced as you deploy more devices and more connectivity in your environments.

As I said, we were founded in 2013. We're the longest-standing pure-play ICS security provider in the industry, and that's helped us develop a very mature and comprehensive platform. We're the only firm with a patent for our M2M (machine-to-machine) aware threat analytics. That's using machine learning to rapidly detect deviations from normal behavior in an ICS or IoT environment.

We have partnerships with the leading security companies and MSSPs worldwide. The idea being that you already have a full security stack of QRadar and Splunk and Palo Alto firewalls, and so the question becomes: how do you integrate what we provide with what you already have? Our key differentiators are simple, mature, and interoperable. These are the types of challenges we address, and Hank touched on some of these.

What assets do I have? How are they connected? How are they communicating with each other? Again, very useful for NERC CIP compliance. What are our key risks? Of course you have lots of unpatched systems and lots of dual-homed devices adding additional risk. How do you decide which are the most important attack vectors to mitigate for your crown-jewel assets? The assets that you care about the most.

The ones that if they were compromised would result in a major incident or a major safety failure. How do you prioritize that? That’s a big part of what we provide. The continuous monitoring and incident response. We’ve talked about the operational efficiency. How do you rapidly identify and eliminate inefficiencies from misconfigured or malfunctioning equipment?

One of our customers recently discovered that they had CCTV cameras on their OT network. They were eating up 40% of the bandwidth. Before installing our solution, they really had no visibility into that network traffic, so they couldn't see that. And then finally, how do you unify with what you already have? In your SOC, you have teams of people who've been trained to handle breaches. How do you help them address breaches on the OT side as well as the IT side with our specialized platform that monitors the ICS network?
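The CCTV example above is essentially per-device bandwidth accounting. A small sketch, assuming flow records as (device, byte-count) pairs with made-up device names, which would make a camera consuming 40% of traffic stand out immediately:

```python
from collections import defaultdict

def bandwidth_share(flows: list) -> dict:
    """Given (device, bytes) flow records, return each device's
    percentage share of total observed traffic."""
    totals = defaultdict(int)
    for device, nbytes in flows:
        totals[device] += nbytes
    grand = sum(totals.values()) or 1  # avoid division by zero on empty input
    return {dev: round(100 * b / grand, 1) for dev, b in totals.items()}
```

Sorting the result by share is enough to surface a heavy talker like a misplaced camera, which is the kind of operational insight the monitoring platform is described as providing.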

How are we different? Easiest to deploy: it's an agentless solution, and you don't need to configure any rules or signatures or have any prior knowledge of the network. As I talked about before, we are the only firm with patented analytics, which translates into a faster learning period, faster detection, and more accuracy. And then finally, the most mature, which translates into a number of dimensions including scalability, comprehensive capabilities, interoperability, and the fact that the platform is not just a technology: the platform is backed by experts in the space.

In terms of deployment, it's simple. We deliver the solution as either a physical appliance or a virtual appliance that connects to the SPAN port on your network switch, or to a tap. It collects a copy of the traffic and never injects anything into the traffic. There's no performance impact and no risk of disrupting your OT network, and it then uses that information to deliver information on your assets, your risks, and your threats.

And as I said before, we recognize you already have an existing security stack. We were the first vendor to put a lot of focus on this aspect, which is integrating with your existing security stack. Some examples here, QRadar, Splunk, ServiceNow. We have native apps for all of those as well as for Palo Alto Networks and many others. We’re installed in over 1200 networks worldwide including two of the top five U.S. energy utilities.

Also, if you're in the energy utility space, you'll find we have customers in Europe, like Swissgrid, as well as in Asia and the Far East, and in many other industrial verticals including manufacturing, chemicals, pharmaceuticals, and oil and gas. One of the other differentiators for us is our threat intelligence capabilities. You can see here that these are all zero-days discovered by our threat intelligence team and reported to ICS-CERT.

You can see it crosses all of the vendors. We have a deep embedded understanding of all the OT vendors: Schneider, GE, Emerson, ABB, Siemens, Rockwell. It doesn't really matter. For more information, I'll direct you to our knowledge base, where you'll find some of the materials I talked about today as well as an interesting white paper, very popular, called Presenting OT Risks to the Board.

You can also download chapters from this book called ICS Hacking Exposed, which is considered the Bible of ICS security. And then you can also see us at these various conferences that are happening over the next few months all over the world. That’s it for me. Now, let me go back to the chat window, and one of the questions was about the organizational challenges of bringing IT and OT together. A question for Hank would be, in your experience, to what degree should the IT organization be involved or not be involved in establishing stronger controls for OT security?

Hank Sierk:      

Typically the IT organization has the responsibility; the corporate information security officer owns it. But the OT folks are very fearful, based on what's been done to their workstations and laptops by the IT organizations, that IT will try to push standard applications they're familiar with down to the control system, which in most cases would definitely break the control system.

Basically, the best answer, or the best apparent solution, would be to try to cross-pollinate: get people from the IT side in leadership positions to work with OT people and begin to understand the special requirements of the OT environment, and vice versa, really. Have OT folks who, myself included, sometimes have a simplistic view of security work with some of the security folks in the IT realm to understand what some of the threat vectors are and get a better handle on what we're trying to defend against.

I think that’s really the best plan. I know in most organizations it seems like it’s one way. If IT finds somebody in the OT side who they think is particularly knowledgeable, they generally try to steal them or hire them thinking that that’s going to solve the problems and it doesn’t. The problem is a problem of trust and that trust needs to be established at multiple levels.

Phil Neray:     

That’s great Hank. Thank you. And then from a budgeting point of view, what have you found works? Who should be paying for implementing stronger security in the OT environment in the organization?

Hank Sierk:     

Well, in my previous organization, the OT side took that responsibility. But I think you can probably speak better to that question because of your sales experience. It seems as though in most companies it’s the IT side that has the budget and does the control.

Phil Neray:      

Well, we've found different models that work, sometimes with the CISO's organization. Because as you said, in the end, the CISO is responsible for security, whether it's OT or IT. No one's going to go to the operations folks and say you should have prevented this breach. The CISO is responsible. One of the models we've seen work is for the CISO to sponsor the deployment, for example in the first year, and then for that budget to be transferred to the operations side.

Of course we’ve seen what you’ve just said as well, which is that it’s really owned by the business. I mean, somebody is paying for end-point security in the environment. Somebody’s paying for firewalls. Why would network monitoring of the OT environment be any different? Usually it’s the business that has to pay for it because it’s part of the cost of running the business.

We've seen different models work. It just really depends on the organization and whether the CISO's organization is more operational and therefore has a bigger budget, or whether it's more of a policymaking body. Well, we've come to the end of our time. I want to thank you all for being here for this really informative webinar with Hank. I want to thank Hank for sharing his many years of experience with us.

You will get access to the slides shortly. And then on the CyberX website, you will also be able to access the complete recording as well as a transcript of the dialog in a week or two. Thank you very much and have a great day.