This post is the second part of my blog on the types of containers. In the first part, I covered Application Containers, Unprivileged Containers, OS Containers and Privileged Containers. In this concluding part, I cover the rest of the container types.

Hyper-V Containers and Hyper Containers

Microsoft calls its approach of running each container in its own dedicated virtual machine a Hyper-V Container. When the underlying hardware supports virtualization, such containers are also referred to as hardware-assisted isolated containers. The Hyper Container concept takes a similar approach, but runs the containers over a Type-1 hypervisor, with each container in its own dedicated virtual machine. Each virtual machine is launched with a configured Linux distribution as the guest OS, over which the Docker Engine is run.

Container in a VM (Hyper-V Container)
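
To make this concrete, Docker on Windows lets you choose the isolation mode per container with the --isolation flag on docker run. Below is a minimal sketch (Python standard library only) assuming a Hyper-V capable Windows host with Docker installed; the Windows base image tag used here is an illustrative assumption, not something the platform prescribes.

```python
import subprocess

# Illustrative Windows base image; substitute a tag that matches your
# host's Windows version (this exact tag is an assumption).
IMAGE = "mcr.microsoft.com/windows/nanoserver:ltsc2022"

def run_once(isolation: str) -> None:
    """Start a throwaway container using the requested Docker isolation mode.

    "process" shares the host kernel like a regular container, while
    "hyperv" wraps the same container in its own lightweight utility VM.
    """
    subprocess.run(
        [
            "docker", "run", "--rm",
            f"--isolation={isolation}",
            IMAGE,
            "cmd", "/c", "echo", f"hello from {isolation} isolation",
        ],
        check=True,
    )

if __name__ == "__main__":
    run_once("process")  # classic Windows Server container
    run_once("hyperv")   # Hyper-V Container: same image, dedicated VM
```

The point of the sketch is simply that the image and the run command stay the same; the isolation boundary is a deployment-time decision.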

The claim that goes with these types of containers is that they take the best of virtualization (isolation, from a security perspective) and the best of containers (software distribution, agility and portability) and put them together. While the concern with running containers natively is a valid one, addressing it by cradling each container in a VM is not the right approach.

VMs have much longer startup and shutdown times, typically over a minute on commodity hardware, compared to milliseconds for containers. Even if the unused drivers are stripped from a VM image, such an image still takes 15 to 20 seconds to boot up. Additionally, the ephemerality of microservices, coming up and going down in microseconds, makes the overhead of VMs even more undesirable.
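
If you want to see the container side of that comparison for yourself, the sketch below times a full create/run/destroy cycle of a trivial container. It is a rough measurement, assuming a host with Docker installed and the public alpine image already pulled (so image download time is not counted); the number it prints also includes Docker CLI and daemon overhead, not just the container startup itself.

```python
import subprocess
import time

def time_container_roundtrip(image: str = "alpine", runs: int = 5) -> None:
    """Measure wall-clock time for a full container create/run/teardown cycle."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        # --rm removes the container as soon as the command exits, so a
        # single sample covers startup, execution and teardown.
        subprocess.run(
            ["docker", "run", "--rm", image, "true"],
            check=True,
            capture_output=True,
        )
        samples.append(time.perf_counter() - start)
    print(f"{image}: fastest {min(samples):.3f}s, "
          f"average {sum(samples) / runs:.3f}s over {runs} runs")

if __name__ == "__main__":
    time_container_roundtrip()
```

Booting even a stripped-down VM for comparison takes orders of magnitude longer, which is exactly the overhead that the container-in-a-VM designs bring back.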

Hyper Container

Since IaaS providers such as Amazon and Google have invested a lot of time and resources in building VM-based infrastructure, they continue to offer support for native VMs and not (yet) for native containers. Hence, any enterprise customer of theirs who intends to utilize the full potential of containers has to build their own container management layer on top of the VM management infrastructure offered by the IaaS provider. For a good example, check out Titus, the container management framework that Netflix built on top of Amazon’s IaaS infrastructure.

Don’t get me wrong; the security and isolation concerns are valid ones, but there are better ways to address them without compromising the true potential that containers have to offer or requiring containers to be given root privileges. And that’s the kind of solution that we at Layered Insight are building (plug :^)).

Microkernel Container

Microkernel Containers, to a degree, imitate the approach of Hyper Containers. But in order to address the heavyweight nature of the VMs, they strip the Ring 1 and Ring 2 bundles of an OS distribution (device drivers, network protocol stacks, file systems and so on) and move them to user space. In my opinion, such an approach doesn’t really address the heavyweight nature of VMs; instead, it makes the VM layer thinner by making the application container bulkier.

More specifically, if two different applications running in two different Microkernel Containers on the same bare-metal host need file system functionality, then both of those applications would have to include the file system OS bundle as part of their dependencies. So the microkernel will come up faster than a monolithic OS, but the application will take longer to start up and shut down because of the added dependencies. In fact, the application will also run slower, since some of the OS features that it critically needs (such as the file system and network protocol stacks) have been moved to user space, causing additional context switches.

Unikernel Container

Unikernel containers strip the microkernel further, especially the kernel’s ability to support running multiple applications, in order to speed up startup and shutdown times. This type of container is built by identifying and extracting from the kernel only those features and dependencies that the application needs to run. Hence, the kernel ends up being very specific to the application rather than a general-purpose kernel. This also means that if any new feature set is added to the application, or if any patches have to be applied to the application’s dependencies, a new container image has to be carefully rebuilt.

From a security perspective, the kernel’s attack surface is significantly reduced, since all the kernel features that are not required by the application are removed. However, running any monitoring services for such containers is a challenge, because monitoring agents cannot be included in these purpose-built containers after the fact. The only way to provide comprehensive security or deep monitoring for such containers is to bake the security into the containers themselves, that is, to include such functionality in the layers of the container while the image is being built.

Conclusion

The tremendous benefits offered by the container ecosystem can’t be ignored. Enterprises are adopting containers in an evolutionary manner: rather than throwing away their entire existing VM-based infrastructure and building a native container-based one from scratch, they are gradually adopting containers for existing applications. Some of the container types discussed above are a result of that evolution.

As new security solutions are built with a container-first approach and address the separation and isolation concerns, we will see an acceleration in the migration to native container-based infrastructure. One of the top public cloud service providers, Rackspace, has already taken a step in that direction by launching Carina, a CaaS offering that runs containers natively on bare metal. I am not very familiar with the security or isolation features offered in the service at this time; all I know is that the containers are protected by AppArmor profiles. Perhaps I will do a blog post on Carina’s security features after I learn the details.