In the first part of this blog, I covered the need for container isolation and some of the traditional VM-specific approaches that are applied to protect containers from each other, as well as the underlying kernel from the containers. In this second part, I expand on emerging approaches to container isolation.


Part Two

Privileged container-based isolation

If you think about it, from a very high-level perspective, there are three ways of monitoring and making security decisions about any entity. One, from a layer underneath that entity, the layer that facilitates its existence and management. Two, from alongside that entity. And three, from inside the entity. There is, of course, a fourth option: being on top of the entity. But that option is covered by the second and third, because sitting on top of the monitored entity requires support either from something inside it or from the layer that facilitates its existence.

Privileged container-based security solutions fall under the second option: running alongside the containers that need to be monitored and protected. However, to do that, the solution requires special (root) privileges to look into what's going on inside the other containers. Hence the term ‘privileged.’

There are multiple issues with using a privileged container-based solution to protect containers:

  • The solution running in the privileged container is exposed to the outside world for policy and configuration management. Granting it root privileges therefore adds root privileges to the attack surface that's exposed to a potentially hostile environment.
  • With root privileges, the security solution would also have complete control over the underlying kernel that supports the containers, whether that kernel runs on bare metal or as a guest OS in a VM. That, in turn, means the security solution would own the underlying Container-as-a-Service (CaaS) layer. Hence, such a solution would not work in a public CaaS environment that supports containers natively on bare metal, such as IBM's Bluemix.
  • Granting the security solution root privileges for monitoring and enforcement also means that containers from different tenants could not be mixed on a node or within a VM. In other words, it's not an approach a public cloud service provider could adopt or support. Even in a private cloud setting, this limitation would lead to serious underutilization of hardware resources.
  • Even in a single-tenant private cloud setup, only containers from applications with exactly the same set of security or data privacy policies could be mixed within the same VM or on the same node.
  • The root privileges still don't give the solution direct cleartext access to data payloads leaving or entering the container over an encrypted channel.
  • Requiring special privileges, and using them to access features specific to the underlying infrastructure stack in order to get information about other containers, also means the solution is not easily portable from one environment (public cloud, private cloud, etc.) to another. That drawback, in and of itself, is huge, as it directly conflicts with one of the primary attributes of containers: workload portability.

Kernel module-based isolation

This approach, adding a special kernel module to get information about the containers running on top, falls under the first of the options mentioned in the section above. Though it performs better than the privileged container-based approach (running in kernel space reduces the context switches between checking security policies and allowing calls to go through), it has the same drawbacks: it requires root privileges to modify the underlying kernel, and hence to own the CaaS layer, and it conflicts with the workload portability of containers.

Probes in container runtime

Container runtimes, such as runC for Docker and LXC for LXD containers, sit directly on top of kernel facilities (via libraries such as libcontainer) and require elevated privileges to manage them. Hence, introducing probes in the container runtime to get information about the containers would be similar in effect to adding a kernel module, and this approach therefore has the same advantages and disadvantages as the one above.

Baking security into containers

The remaining option of the three mentioned above is to bake the security into each container by inserting probes in all the areas of interest within it. These probes can provide deep visibility into every behavioral aspect of the application container (network, I/O, application call stack), and that information can then be used to create detailed behavioral templates, which in turn can be used to detect anomalous behavior at runtime. Additionally, the inserted probes can be used to enforce security policies.

There are two primary ways of injecting these probes into a container image without any dependency on the kernel or any need for special privileges. One, during the development phase, developers use an SDK whose calls encapsulate the actual calls and add the probe logic for each of them. This option has the disadvantage of putting the onus on developers to learn, understand, and use the appropriate SDK calls, which both insert the probes into the application and can be used for enforcement. Two, insert the probes into the various layers of a container image in binary form; this method is also referred to as Dynamic Binary Injection (DBI).
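To make the SDK option concrete, here is a minimal sketch in Go of what such an encapsulating call might look like. Everything here (the probedDial wrapper, the allow-list policy, the log-based event) is a hypothetical illustration of the general idea, not the API of any actual SDK.

```go
// Sketch of an SDK-style probe: the application calls probedDial
// instead of net.Dial, and the wrapper records the event and enforces
// a simple allow-list policy before delegating to the real call.
// All names are illustrative, not from any real SDK.
package main

import (
	"fmt"
	"log"
	"net"
)

// allowedDestinations stands in for a policy derived from a
// behavioral template built during earlier profiling runs.
var allowedDestinations = map[string]bool{
	"example.com:443": true,
}

// probedDial wraps net.Dial: it emits a probe event and blocks
// destinations that are not part of the learned behavior.
func probedDial(network, address string) (net.Conn, error) {
	log.Printf("probe: outbound connection attempt network=%s address=%s", network, address)

	if !allowedDestinations[address] {
		return nil, fmt.Errorf("probe: destination %q rejected by policy", address)
	}
	return net.Dial(network, address)
}

func main() {
	// The application code looks the same as before, except the SDK
	// call replaces the standard library call.
	conn, err := probedDial("tcp", "example.com:443")
	if err != nil {
		log.Fatalf("connection blocked or failed: %v", err)
	}
	defer conn.Close()
	log.Printf("connected to %s", conn.RemoteAddr())
}
```

The same wrapping idea applies to file, process, and other calls of interest; with DBI, equivalent interception logic would be inserted into the image's binaries rather than written by the developer.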

A container security solution based on this approach has the following advantages:

  • Since security is baked into the containers, the solution doesn’t need any special privileges to perform its monitoring and enforcement duties.
  • The baked-in feature also allows the security solution to move seamlessly with the containers. As the containers move, so does the security with them.
  • The approach provides deep visibility into, and fine-grained control over, every aspect of an application's behavior.
  • The presence inside the container could also be used to detect any outside code that has been maliciously injected into the container, and to do so in real time.
  • This approach does not incur any additional context switches for each call of interest in the application, because the decision of whether that call should go through, or whether any additional operations are needed, is made in the context of the application itself. This differs from kernel module-based enforcement, where the call is intercepted in kernel space and the policies are checked in user space, leading to two additional context switches.
  • A module that manages all of a container's probes for visibility and enforcement could easily perform that task for multiple containers, not just those of one application but containers of different applications, while supporting different data protection policies for each application (see the sketch after this list).
  • Any indication of a compromise or anomalous behavior could trigger a remedial action: shutting the container down, letting it run in a quarantine mode, running it with reduced capabilities, or just letting it run in a mode that reveals how the attacker is progressing the attack through the various application containers, i.e., a honeypot mode.
  • The probes could also be used to debug performance-related bottlenecks in the application.
  • Additionally, the probes could be used to take snapshots periodically, or whenever the application performs certain operations; those snapshots could later be used for forensic analysis.
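As a rough sketch of the per-node management module mentioned in the list above, the following Go fragment shows one module holding a separate policy per application and evaluating codified probe events from containers of different applications. The Event and Policy types, the field names, and the deny-by-default behavior are assumptions made for illustration, not any particular product's design.

```go
// Minimal sketch of a per-node manager that receives probe events
// from several containers and applies each application's own policy.
package main

import "fmt"

// Event is a codified probe observation reported by some container.
type Event struct {
	ContainerID string
	App         string
	Syscall     string // e.g. "connect", "open"
	Target      string // e.g. destination address or file path
}

// Policy is a per-application allow-list keyed by call type.
type Policy map[string]map[string]bool

// Manager holds one policy per application, so containers from
// different applications can share a node without sharing policies.
type Manager struct {
	policies map[string]Policy
}

// Evaluate returns whether the observed behavior is allowed;
// unknown applications are denied by default in this sketch.
func (m *Manager) Evaluate(e Event) bool {
	p, ok := m.policies[e.App]
	if !ok {
		return false
	}
	return p[e.Syscall][e.Target]
}

func main() {
	mgr := &Manager{policies: map[string]Policy{
		"billing": {"connect": {"db.internal:5432": true}},
		"web":     {"open": {"/etc/ssl/certs": true}},
	}}

	events := []Event{
		{ContainerID: "c1", App: "billing", Syscall: "connect", Target: "db.internal:5432"},
		{ContainerID: "c2", App: "web", Syscall: "connect", Target: "203.0.113.9:4444"},
	}
	for _, e := range events {
		fmt.Printf("container=%s app=%s %s(%s) allowed=%v\n",
			e.ContainerID, e.App, e.Syscall, e.Target, mgr.Evaluate(e))
	}
}
```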

This approach could have the following disadvantages if the solution is not implemented correctly:

  • Performance overhead on the application due to the probes. However, this issue could be addressed by keeping the probes lightweight and by making it possible to turn a group of them, or all of them, on or off with a single command. Performance can be improved further by codifying the information the probes send out, so that processing and interpreting that information to determine the security state of the application container happens outside the application, yet still in real time (see the sketch after this list).
  • The temptation to run an agent per container to manage all the probes for visibility and enforcement can be high, as that's the easiest thing to do. However, a more flexible architecture would support an agent per node, or even per site.
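To illustrate the mitigation in the first item above, here is a sketch (assuming Go 1.19+ for atomic.Bool) of probes that share a single on/off switch and emit only compact, codified records for an external collector to interpret. The event codes, the channel-based transport, and all names are made up for illustration.

```go
// Sketch of lightweight probes: one atomic flag toggles the whole
// probe group, and each probe emits a small codified record so that
// interpretation happens outside the application's hot path.
package main

import (
	"fmt"
	"sync/atomic"
)

// Compact event codes instead of verbose strings keep the in-process
// cost of a probe close to a single branch plus a channel send.
const (
	evConnect uint8 = iota + 1
	evFileOpen
)

var (
	probesEnabled atomic.Bool                  // one switch for the whole probe group
	events        = make(chan [2]uint32, 1024) // codified (event code, argument id) records
)

// emit is the only probe code on the application's hot path; the
// external collector maps codes and argument ids back to meaning.
func emit(code uint8, argID uint32) {
	if !probesEnabled.Load() {
		return
	}
	select {
	case events <- [2]uint32{uint32(code), argID}:
	default: // never block the application if the collector falls behind
	}
}

func main() {
	probesEnabled.Store(true) // in practice driven by a single external command

	emit(evConnect, 42) // e.g. destination #42 from an interned table
	emit(evFileOpen, 7) // e.g. path #7

	close(events)
	for e := range events { // stand-in for the external collector
		fmt.Printf("code=%d arg=%d\n", e[0], e[1])
	}
}
```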

Conclusion

Containers offer a lot of benefits, but the weaker isolation boundaries between containers, and between containers and the underlying kernel, should be taken seriously. When it comes to security-related containment, containers don't really contain. Hence, a security solution that provides deep visibility into, and fine-grained control over, the various behavioral aspects of containers at runtime should be used. Such a solution should not require any special privileges, since doing so creates new security challenges around protecting the privileged attack surface exposed to a hostile environment. The security should ideally move with the containers, should not negatively impact workload portability, and should add only minimal performance overhead (evaluated relative to the type of application and the service it provides). In certain situations, it would make sense to use multiple approaches simultaneously, applying the Defense in Depth security paradigm.