
In the previous post of this three-part series, I discussed security practices associated with the Azure sign-up process, role-based access control (RBAC), remote management of your systems in Azure, and network security controls.
Now let’s build on that post to continue creating our “defense-in-depth” security posture. You need security controls at each layer to reduce both the pathways an attacker can use to reach your workloads and the number of system elements an attacker can leverage.
You start creating your defense-in-depth security posture by first reducing your attack surface. In the previous post, I shared the security controls and features at your disposal for building your network security and reducing your access channels and protocols. Allowing only required incoming traffic, controlling your access points, and running only required services and protocols following an 80/20 rule (i.e., if 80 percent of the users accessing the system do not need a service or process, do not let it run) all help you manage your risk. Simply put, a bigger attack surface means a bigger security task at hand!
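To make the 80/20 rule concrete, here is a minimal sketch (Python, using the third-party psutil package) that audits a server’s running processes against a per-role allowlist. The allowlist contents are hypothetical and would come from your own baseline for each server role:

```python
# Minimal 80/20 service audit sketch: flag anything running on this VM
# that is not on the role's allowlist. The names below are illustrative,
# not a recommendation.
import psutil

# Hypothetical allowlist for a Windows web-tier VM.
REQUIRED = {"w3wp.exe", "svchost.exe", "System", "lsass.exe"}

def audit_processes() -> None:
    """Print every running process that is not required for this role."""
    for proc in psutil.process_iter(attrs=["pid", "name"]):
        name = proc.info["name"]
        if name and name not in REQUIRED:
            print(f"Review: PID {proc.info['pid']} runs '{name}', "
                  "which is not required for this role")

if __name__ == "__main__":
    audit_processes()
```

Anything the audit flags is a candidate for removal, which directly shrinks the attack surface described above.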
The unfortunate truth about today’s computing environment is that applications and operating systems from vendors, and even your in-house application code, will always have known, unknown, or future vulnerabilities. Your job is to eliminate or minimize the pathways leading to them and to mitigate the damage caused if they are exploited.
In this post, I’ll discuss the next layer of controls, including a host-based intrusion prevention system (IPS), patch management, file integrity monitoring, and anti-malware.
Host-Based Intrusion Prevention System — In our three-tier application design from the previous post, we enabled inbound connections on ports 80 and 443 using the network security controls available to us in the Azure cloud. How do we further protect this allowed communication channel in our application? Can we simply trust everything that comes over it? The short answer is: you can’t! You must put the necessary controls in place to verify that the traffic coming over this channel is legitimate and to actively prevent any intrusion you detect.
This is where an IPS control comes into play. When you run your workloads in Azure, you are not going to deploy network-based intrusion prevention appliances; instead, you will go with a host-based intrusion prevention system. This host-based IPS monitors your allowed incoming traffic and actively tries to prevent any intrusion it detects. As traffic passes through, the IPS checks that it follows the rules. Is the packet well-formed (e.g., does it conform to RFC specifications)? Is the packet in sequence? During this analysis, the IPS decides whether the traffic should be allowed to continue through or be dropped immediately. The IPS looks for attacks such as SQL injection, cross-site scripting, and attacks targeting the server’s OS. If it finds any, the traffic is dropped immediately, before it hits your applications and workloads. If nothing is found, the request continues on as normal.
The IPS provides a level of protection that goes beyond reducing the attack surface: it actively checks for correct behavior within the permitted traffic.
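To illustrate how signature-based inspection works, here is a toy sketch; it is not a substitute for a commercial IPS, and the patterns are deliberately simplified versions of what a real product’s rule set would match:

```python
# Toy signature-based inspection: match a payload against simplified
# SQL injection and cross-site scripting patterns, then allow or drop.
import re

SIGNATURES = {
    "sql_injection": re.compile(r"('|%27)\s*(or|and)\s+\d+\s*=\s*\d+", re.I),
    "cross_site_scripting": re.compile(r"<\s*script\b", re.I),
}

def inspect(payload: str) -> str:
    """Return 'drop (<signature>)' on the first match, else 'allow'."""
    for name, pattern in SIGNATURES.items():
        if pattern.search(payload):
            return f"drop ({name})"
    return "allow"

print(inspect("GET /items?id=1' OR 1=1"))           # drop (sql_injection)
print(inspect("GET /search?q=<script>alert(1)"))    # drop (cross_site_scripting)
print(inspect("GET /items?id=42"))                  # allow
```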
Patch Management – Protecting your application and the guest operating system in Azure also means applying regular security updates and OS patches. For example, applying security patches to a system hosting an “always-up” web application (e.g., our three-tier application stack) may be extremely difficult and costly. Let’s take a look at the typical patch management approaches used today:
Reactive – If you use this approach, you will waste a significant amount of resources just to see if your workloads are still vulnerable to attacks. Patches are applied directly to production systems (a big no-no!) in the hope that there will be no problems.
Blue-Green – Blue-green is a software deployment approach that helps you reduce downtime and risk by running two identical environments, called blue and green, during a software or patch update. Think of running active and standby environments for the duration of the release/patch cycle. With this approach, at patch time you bring up a parallel environment, apply your patches to the standby environment (e.g., green), and then conduct your testing. Once you have deployed and fully tested the patch rollout in green, you switch the traffic so all new incoming requests go to green instead of blue. This approach may not work for all implementations; it becomes tricky when, for example, part of your deployment process is to deploy database changes, or when you don’t have continuous delivery practices in place.
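Here is a minimal sketch of the cutover step, assuming a hypothetical set_backend_pool() call that stands in for your load balancer’s real reconfiguration API (e.g., swapping a load-balanced endpoint):

```python
# Blue-green cutover sketch: patch and test the standby pool, then
# repoint the load balancer so new requests flow to it.

ENVIRONMENTS = {"blue": "10.0.1.0/24", "green": "10.0.2.0/24"}
live = "blue"  # currently serving production traffic

def set_backend_pool(pool_cidr: str) -> None:
    """Hypothetical placeholder for your load balancer's reconfiguration API."""
    print(f"Load balancer now routes new requests to {pool_cidr}")

def cutover(to_env: str) -> str:
    """Switch traffic to 'to_env' after it has been patched and tested."""
    # ...apply patches to the standby environment and run your test suite...
    set_backend_pool(ENVIRONMENTS[to_env])
    return to_env

live = cutover("green")  # blue stays intact for a quick rollback
```

Keeping the old environment running is the key design choice: if the patched green pool misbehaves, you roll back by pointing traffic at blue again.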
Proactive – Even after you have established a patch rollout frequency along with the other aspects of effective patch management (e.g., testing), you are likely to remain in “catch-up” mode. After all, the activities of an effective patch management process, such as detection, assessment, testing, and deployment, take time. The longer your patch rollout takes, the bigger the window of exposure during which your workloads are vulnerable. The constant stream of updates from operating system and application vendors (e.g., Adobe, Oracle) can make it very difficult for any organization to keep up with the pace of patch releases. As soon as you apply a patch today, new patches are already waiting in the queue.
Virtual Patching — Virtual patching complements a proactive approach and helps reduce your window of exposure. It uses technologies such as intrusion prevention systems to create a security layer and avoid direct modifications to the resources being protected. As soon as a vulnerability is announced, you can protect your systems immediately, without waiting for a patch to be issued, tested, and deployed. This buys you the time required to complete all phases of patch management and follow your normal change management process. Virtual patching is less disruptive (i.e., no system reboot is required) and is particularly useful for reducing the need for “out-of-band” patches or more frequent patching cycles. By combining a proactive patch management process with virtual patching, you get the maximum effectiveness from your patch management efforts. This can also help you meet your application availability goals without compromising security: a win-win situation!
Selecting a good host-based intrusion prevention system is critical: it should automate your virtual patching process and take the complexity out of your hands by automatically assigning the IPS rules that cover your systems’ vulnerabilities and un-assigning the rules that are no longer needed once your patch deployment cycle completes.
When selecting a security control that provides virtual patching capabilities, you should look for these basic features as a start (a workflow sketch follows the list):
- Ability to perform vulnerability scans that discover the vulnerabilities your system is exposed to
- Ability to auto-assign IPS rules to protect your system against reported vulnerabilities
- Ability to un-assign IPS rules that are no longer needed after patch deployment on your systems
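Putting those three features together, here is a minimal sketch of the scan/assign/un-assign loop. scan_for_vulnerabilities(), the CVE identifiers, and the printed rule actions are all hypothetical stand-ins for your host IPS product’s capabilities:

```python
# Virtual patching reconciliation sketch: shield open vulnerabilities
# with IPS rules, and retire rules once the real patch has landed.

def scan_for_vulnerabilities(host: str) -> set[str]:
    """Hypothetical vulnerability scan; returns CVE identifiers."""
    return {"CVE-2014-0160", "CVE-2014-6271"}

def reconcile_ips_rules(host: str, assigned: set[str]) -> set[str]:
    """Assign IPS rules for open vulnerabilities, un-assign stale ones."""
    open_vulns = scan_for_vulnerabilities(host)
    for cve in open_vulns - assigned:
        print(f"{host}: assigning IPS rule shielding {cve}")
    for cve in assigned - open_vulns:
        print(f"{host}: un-assigning IPS rule for patched {cve}")
    return open_vulns

# Run after every scan and after every patch deployment cycle.
assigned_rules = reconcile_ips_rules("web-01", assigned=set())
```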
Monitoring – So far, we have discussed security controls that provide protection capabilities through a firewall and IPS. A defense-in-depth security posture demands controls at each layer. The next step in our security strategy is to continuously verify the integrity of critical system files, application configuration files, and application logs. Let’s look at the options available to us for monitoring our systems in Azure:
Azure Monitoring Capabilities – Microsoft Azure provides diagnostic capabilities for Windows-based virtual machines that can be used to collect and track various metrics, analyze log files, define custom metrics, and gather the logging generated by specific applications or workloads running in virtual machines. Monitoring is performed by the VM agent, which is installed automatically (in the default configuration), and monitoring is enabled at the VM level.
Once monitoring is enabled on your virtual machine, it provides statistics you can use to detect abnormal network activity, outages, or indicators of attack. You can also trigger alarms when certain conditions are met.
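As a minimal sketch of that alarm idea, here is a threshold check over collected samples; fetch_metric() is a hypothetical helper standing in for the statistics the VM agent collects, and the metric name and values are illustrative:

```python
# Threshold-based alerting sketch: raise an alarm when the average of
# the most recent metric samples crosses a configured threshold.
from statistics import mean

def fetch_metric(vm_name: str, metric: str) -> list[float]:
    """Hypothetical: return the last few samples for a VM metric."""
    return [92.0, 95.5, 97.1]  # e.g., CPU percentage

def check_alarm(vm_name: str, metric: str, threshold: float) -> bool:
    """Trigger an alarm when the average sample exceeds the threshold."""
    samples = fetch_metric(vm_name, metric)
    if samples and mean(samples) > threshold:
        print(f"ALARM: {vm_name} {metric} averaged {mean(samples):.1f} "
              f"(threshold {threshold})")
        return True
    return False

check_alarm("web-01", "Percentage CPU", threshold=90.0)
```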
File Integrity Monitoring – Azure’s monitoring capabilities provide the foundation for your monitoring requirements, but that’s only a start. A good host-based file integrity monitoring (FIM) solution takes you one step further in your overall monitoring strategy. Host-based FIM has become a critical aspect of information security because it can provide an early indication of a compromised system; it is also required by various compliance standards such as PCI DSS. Where the firewall and IPS provide protection, file integrity monitoring provides detection. Simply put, a host-based FIM solution tells you when:
- Something exists now, and it didn’t exist before, i.e. “created”
- Something existed before, and it doesn’t exist now, i.e. “deleted”
- Something existed before, and it is in a different state now, i.e. “updated”
And that “something” could be critical operating system or application files, directories, registry keys and values, system services, etc.
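Here is a minimal sketch of how baseline-based integrity checking detects those three states: hash the monitored files, then diff a later snapshot against the baseline. The file paths are illustrative:

```python
# Baseline-based file integrity check sketch: snapshot file hashes,
# then report created, deleted, and updated items on the next run.
import hashlib
from pathlib import Path

def snapshot(paths: list[str]) -> dict[str, str]:
    """Map each existing file to the SHA-256 hash of its contents."""
    state = {}
    for p in paths:
        f = Path(p)
        if f.is_file():
            state[p] = hashlib.sha256(f.read_bytes()).hexdigest()
    return state

def diff(baseline: dict[str, str], current: dict[str, str]) -> None:
    for path in current.keys() - baseline.keys():
        print(f"created: {path}")
    for path in baseline.keys() - current.keys():
        print(f"deleted: {path}")
    for path in baseline.keys() & current.keys():
        if baseline[path] != current[path]:
            print(f"updated: {path}")

watched = ["/etc/passwd", "/etc/ssh/sshd_config"]  # illustrative paths
baseline = snapshot(watched)
# ...later, on a schedule...
diff(baseline, snapshot(watched))
```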
If you are already using a monitoring solution and are collecting logs to a central server, virtual machines running in the Azure cloud are just another resource that must be monitored.
When selecting a security control that provides file integrity monitoring capabilities, you should look for these basic features (a real-time monitoring sketch follows the list):
- Ability to provide “real-time” monitoring events
- Ability to auto-assign monitoring rules to help monitor critical operating system and application files, registry, system services, etc.
- Ability to provide an easy interface/framework for creating custom monitoring rules
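For the “real-time” requirement specifically, here is a minimal sketch using the third-party watchdog package; a commercial FIM agent would hook OS-level APIs instead, and the watched path is illustrative:

```python
# Real-time file event monitoring sketch: print created/deleted/updated
# events for a watched directory tree as they happen.
import time
from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer

class FimHandler(FileSystemEventHandler):
    def on_created(self, event):
        print(f"created: {event.src_path}")

    def on_deleted(self, event):
        print(f"deleted: {event.src_path}")

    def on_modified(self, event):
        print(f"updated: {event.src_path}")

observer = Observer()
observer.schedule(FimHandler(), path="/etc", recursive=True)
observer.start()
try:
    time.sleep(60)  # monitor for a minute in this sketch
finally:
    observer.stop()
    observer.join()
```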
Is Anti-Virus Dead? There is an ongoing debate over the value of AV, and the short answer to this question is “no.” The long answer is: “No, AV is not dead.” That said, traditional AV is not enough in today’s world. The threat landscape has changed over the years, but you still need an AV solution to weed out all but the most persistent attackers. To put this in simple terms, consider your car’s safety features: is your seatbelt irrelevant because you have anti-lock brakes and air bags? No. Just as with vehicle safety, you must employ a layered approach consisting of multiple, complementary security components to effectively secure your IT environment.
Click here to read the third installment of this series.