What’s the deal with securing containers?
This is the first of a two-part blog on container security and the various approaches that can be taken to isolate containers from each other, as well as to protect the kernel that is hosting the containers. In Part One, I compare containers with virtual machines, and discuss a major concern of CIOs as they consider containers for at-scale deployment of applications. I also examine the various isolation approaches that can be used to secure containers. In Part Two, I will examine some of the newer approaches for isolating containers that offer more flexibility and agility.
Containers and immutable infrastructure
Containers have brought a wonderful disruption to the computation stack that we had gotten used to in the cloud era. Until containers burst onto the scene, application-centric infrastructure requirements were addressed by combining the software stack and the infrastructure stack in a single Virtual Machine (VM) image. With that approach, the application developer does have some flexibility in changing the software stack that best suits the growing needs of the application, but a new VM image would have to be created, which in some cases could result in changes in the infrastructure stack as well, with the help of the DevOps counterpart. The new image would then have to go through the entire testing and qualification cycle. The same process would have to be repeated whenever the DevOps team wanted to change, or standardize on, a different infrastructure stack. Arguably, the image creation and qualification process could be made seamless by a fully automated CI/CD workflow. Even so, VMs remain a resource- and management-intensive solution to the problem.
In contrast, containers offer an almost perfect solution to the problem, in that the developer can choose the software stack and the exact runtime (packages, libraries, etc.) that she would like for the application, while DevOps can standardize on an immutable infrastructure stack in either a private or a public cloud environment. Each group can change its stack without impacting, or even informing, the other, and the applications will continue to run and serve their users. This model also helps tremendously with workload portability: portions of, or even entire, containerized applications can be moved easily from one immutable infrastructure stack to another that supports the same container format. (Note: this will become a non-issue in the future, thanks to the efforts of the Open Container Initiative.) Given that containers are light on resources and management, they are perfectly suited for ephemeral microservices that demand quick startup and shutdown times.
So then, why is everyone not using containers for running their applications?
Isolation with containers
Perhaps a quick recap is needed here. We started with monolithic applications running on enterprise-class machines on a single OS. Then came N-tiered applications (I have skipped the option of running multiple applications on the same enterprise-class machine in separate partitions with a dedicated OS), where each tier could be represented by, or split into, services in a Service-Oriented Architecture (SOA), with potentially each tier running on its own dedicated enterprise-class machine. With these approaches, the risk of an attack spreading from a compromised application to another was purely network-based, so network-based isolation, or network segmentation, was enough to protect applications from each other. Then came VMs, and with them the option to run various applications and services in their own independent environments on the same underlying hardware. Portions of, or complete, applications and services could now run not just on enterprise-class but also on commodity machines. The attack surface for spreading attacks across applications then extended to the guest OS within the VM, the hypervisor supporting the VMs, and the host OS.
As virtualization or partitioning moves up the stack, the separation or isolation boundaries get thinner while the attack surface grows. Containers make the isolation boundaries even thinner, since they share the same underlying kernel. So the attack surface now extends to the container management layer and to the kernel itself: a compromised application could exploit vulnerabilities in the container runtime or the kernel and gain control of other applications running on the same infrastructure stack. One of the biggest concerns about container isolation is that, while container implementations mature, the entire kernel is available as an attack surface to a compromised application.
Hence, security is a very big deal with containers, and security concerns are among the primary reasons hindering the adoption of containers.
Approaches for container isolation
Various approaches are being applied to protect containers from each other and to protect the underlying kernel from the containers. The approaches listed below are put to use either independently or in combination, depending on how much control and ownership one has of the IaaS and CaaS layers, the nodes, etc.
Container in a VM
Given that most existing private or public cloud infrastructure is VM-based, the easiest way to address the concern of container isolation is to run each container in its own VM. This approach retains the good attributes of containers: developers still choose whichever software stack they would like for their applications, while DevOps still chooses the immutable infrastructure stack. However, the solution is resource-intensive and takes away the benefit of being able to launch and shut down containers quickly. It also requires managing the VM-based infrastructure layer, the Container-as-a-Service (CaaS) layer on top of it, and then yet another management layer that allocates containers to VMs, so that the VMs are optimally utilized when containers are not long-running.
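To make that extra allocation layer concrete, here is a minimal sketch (not from any real scheduler; all names and sizes are hypothetical) of first-fit placement of containers onto fixed-size VMs, the kind of bookkeeping this approach forces you to add:

```python
# Illustrative sketch of the container-to-VM allocation layer described above.
# A first-fit heuristic: put each container on the first VM with spare
# capacity, spinning up a new VM only when nothing fits.

def place_containers(containers, vm_capacity):
    """Assign (name, cpu_request) containers to VMs of size vm_capacity.
    Returns a list of VMs, each {"free": remaining_cpu, "containers": [...]}."""
    vms = []
    for name, cpu in containers:
        for vm in vms:
            if vm["free"] >= cpu:
                vm["free"] -= cpu
                vm["containers"].append(name)
                break
        else:  # no existing VM had room; provision a fresh one
            vms.append({"free": vm_capacity - cpu, "containers": [name]})
    return vms

# Hypothetical workload: five containers with CPU requests, VMs of size 4.
demo = [("web", 2), ("api", 3), ("worker", 2), ("cache", 1), ("batch", 4)]
placements = place_containers(demo, vm_capacity=4)
```

Even this toy version hints at the operational cost: the real layer must also handle container churn, VM reclamation, and rebalancing, on top of the VM and CaaS management planes it sits between.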
Sandboxing the container
Another solution for protecting the kernel from the containers running on top of it, and the containers from each other, is to use the kernel's sandboxing support: seccomp system-call filtering, typically combined with SELinux- or AppArmor-based mandatory access control. Such sandboxing of containers can limit the kernel attack surface exposed to the application running in the container to only the system calls that the application actually requires. However, this approach has a couple of major flaws.
- A sandbox generic enough to support any and every type of application is so permissive that it defeats the very purpose for which it is being used.
- An application-specific sandbox is not easy to create: it requires going through the entire list of system calls the application needs at any point during its run, and building a sandbox that allows only those calls. Creating and managing such a sandbox requires SecOps to have in-depth knowledge of how the application functions (especially at the system-call level) and to verify that the sandbox covers every scenario of the application's regular functioning, so that the sandbox allows only that functionality.
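The tedium of the second point can be sketched in a few lines. Assuming one profiles the application with a tool like `strace` (the trace lines below are made up for illustration), an allowlist profile in the shape of Docker's seccomp JSON might be derived like this:

```python
# Hypothetical sketch: derive an application-specific seccomp allowlist from
# strace output of a full application run. The profile shape follows Docker's
# seccomp JSON format; the trace content here is invented for illustration.
import re

SYSCALL_RE = re.compile(r"^(\w+)\(")  # strace lines start with "name(args..."

def allowlist_from_trace(trace_lines):
    """Collect every syscall name seen in the trace into a deny-by-default
    profile that allows only those calls."""
    calls = sorted({m.group(1) for line in trace_lines
                    if (m := SYSCALL_RE.match(line.strip()))})
    return {
        "defaultAction": "SCMP_ACT_ERRNO",  # deny anything not listed
        "syscalls": [{"names": calls, "action": "SCMP_ACT_ALLOW"}],
    }

trace = [
    'openat(AT_FDCWD, "/etc/hosts", O_RDONLY) = 3',
    "read(3, ..., 4096) = 212",
    "close(3) = 0",
    "read(3, ..., 4096) = 0",
]
profile = allowlist_from_trace(trace)
```

The catch is exactly the flaw described above: a trace only covers the code paths that happened to execute, so any untested path (error handling, a rarely used feature) will be denied in production, which is what makes these profiles brittle to maintain.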
Then there is the classic firewall approach of creating separate networks or network segments for different environments, or even for different applications. But with containers promoting a microservices-based architecture, a single application container could be communicating with a large number of microservices, so isolating applications through network segmentation alone becomes practically impossible. Any additional protection layer on top of such segmentation, such as an Intrusion Prevention System (IPS) or a Web Application Firewall (WAF), would be very difficult to configure: the intra-application and inter-application containers could be communicating in many different hierarchical configurations, making the configuration of that protection layer very complex. Even then, such a layer would not be able to prevent attacks, because some of the inter-container traffic is encrypted with SSL/TLS or IPsec. If such protection mechanisms were to act as a Man-in-the-Middle (MITM), or otherwise facilitate transport-layer secure communication, they would take on the additional overhead of managing the lifecycle of keys and certificates transparently to the applications, and even then they would lack the complete application context needed to decide whether an application container is compromised.
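A back-of-the-envelope calculation (my own illustration, not from any particular deployment) shows why per-pair rules stop scaling: the number of directed caller/callee pairs an IPS or WAF would have to reason about grows quadratically with the number of services.

```python
# Why per-pair network rules explode in a microservices deployment:
# with n services, there are n * (n - 1) ordered (caller, callee) pairs
# that a segmentation or IPS/WAF policy could have to account for.

def directed_pairs(n_services):
    """Number of ordered (caller, callee) pairs among n services."""
    return n_services * (n_services - 1)

# A handful of illustrative deployment sizes.
growth = {n: directed_pairs(n) for n in (5, 20, 100)}
```

Five services already mean 20 possible directed flows; at 100 services it is nearly ten thousand, before accounting for the hierarchical configurations mentioned above.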
[END OF PART ONE]