Are Containers the next big thing?
Google has been using Containers in its data centers for years, supporting millions of users of search, email, and Google Docs. Anything you do as a Google customer runs in its own container. So, what does Google know that makes it willing to bet its business on Containers?
Microsoft has announced support for Containers and a new Cloud/Container-optimized Windows Server called Nano Server.
Amazon AWS offers services that provide Docker containers with numerous management features.
IT leaders in organizations of all sizes are looking to unlock value quickly with new ways to build and deploy applications fast enough to meet the demands of digital strategies: social and mobile applications and cloud-based services with short-term roadmaps, frequent updates, and always-on availability.
Do Containers represent a new disruptive technology, or is it just hype?
What is a container?
Containers are natively created and supported in Linux, and soon will be in Windows Server. A container is a completely isolated computing environment for your application to run in. Essentially, a Linux Container (LXC) is an operating-system-level virtualization method for running multiple isolated Linux systems (containers) on a single host (LXC host). This is very different from a virtual machine (VM), which has to simulate hardware. Instead, a Container provides a virtual operating system environment that has its own CPU, memory, block I/O, network interface, file system, and IP address.
Containers are far lighter-weight than VMs, so you can run more of them per machine. Some tests show up to 15 times the density of VMs, with better performance and lower latency.
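The key architectural difference above can be seen directly with Docker: a container shares the host's kernel rather than booting a guest OS. This is a hedged sketch; `alpine` is just a common minimal base image, and because running it requires a Docker daemon, the command is assembled as a string here rather than executed.

```shell
# Running `uname -r` inside a container prints the *host* kernel version,
# because there is no guest OS and no boot -- only an isolated user space.
# (Requires a Docker daemon to actually run; shown as a string here.)
demo_cmd="docker run --rm alpine uname -r"
echo "$demo_cmd"
```

On a VM, the same command would report the guest's own kernel, underscoring that a Container virtualizes the operating system environment, not the hardware.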
The Container run-time for both Linux and Windows introduces an operating system virtualization layer between "user mode" and "kernel mode," creating isolation boundaries in which many applications can run side by side without impacting each other. Because this is done without emulating hardware (the way Virtual Machines do), you gain tremendous efficiencies.
A struggle for most System Administrators is finding the right density without oversubscribing a host server with more Virtual Machines than can peak at once, overtaxing the host's resources and making some guest VMs unresponsive. Containers achieve isolation differently, by namespacing kernel resources and using unique process IDs. Each Container gets a relative share of CPU usage in a given time window, which prevents a single application from oversubscribing the host and starving other Containers of CPU.
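The relative CPU weighting described above maps onto Docker's `--cpu-shares` flag. The flag is real, but the container names and the `my-batch-job` image below are illustrative, and the commands are assembled as strings rather than executed since a Docker daemon may not be present.

```shell
# Two containers with a 2:1 CPU weighting. Under contention, "web" gets
# roughly twice the CPU time of "batch"; when the host is idle, either can
# use spare cycles -- shares do not cap usage, they only arbitrate it.
web_cmd="docker run -d --name web --cpu-shares 1024 nginx"
batch_cmd="docker run -d --name batch --cpu-shares 512 my-batch-job"
echo "$web_cmd"
echo "$batch_cmd"
```

This is the difference from VM oversubscription: the kernel arbitrates shares dynamically instead of carving out fixed allocations that sit idle or get overcommitted.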
Containers are well suited for DevOps
When a Container is created, all it sees is the base OS. As you add software such as frameworks or application code, an image layer is created. These image layers can be stored in repositories, which is what makes Containers so well suited for DevOps. Once a System Administrator creates an image layer, Developers can reference that configuration to quickly create a new Container by pointing to it in the repository. Being able to reference an image layer and create a new Container quickly and easily from well-established "golden" configurations empowers a DevOps workflow, streamlining provisioning tasks, shortening the time to provision environments, and greatly improving productivity.
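The layering workflow above can be sketched with a Dockerfile, where each instruction adds an image layer on top of the one before it. This is a minimal sketch: the `app.war` file and the `myorg/app` image name are hypothetical, and the build and push steps need a Docker daemon, so they are shown as comments.

```shell
# Write a Dockerfile whose instructions build up the image layers:
# base OS -> framework -> application code.
cat > Dockerfile <<'EOF'
# Base OS layer
FROM ubuntu:14.04
# Framework layer (Java runtime)
RUN apt-get update && apt-get install -y openjdk-7-jre
# Application code layer
COPY app.war /opt/app/
CMD ["java", "-jar", "/opt/app/app.war"]
EOF

# With a Docker daemon available, the System Administrator would then
# publish the image for Developers to reference:
#   docker build -t myorg/app:1.0 .
#   docker push myorg/app:1.0
```

A Developer can then start a fresh Container from that "golden" configuration with a single `docker run`, without rebuilding anything.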
Containers smooth out the SDLC by accelerating iterations. Because Containers bring their own isolation and runtime, iterating between development and testing is simplified: the environment the code runs in stays consistent, eliminating the friction caused by runtime errors and environment issues when progressing through development, unit testing, integration testing, system testing, and regression testing. This "Container consistency" carries all the way through the SDLC to deployment, all without large capital investments in infrastructure.
Docker is an open source product that provides management tools for both Linux and Windows Containers. Docker brings the ability to create and manage Containers to the mainstream with an easy-to-use interface and a robust range of features.
I have seen a demo in which Docker created a new container running the Windows Server 2016 OS from scratch, including boot time, in less than 10 seconds. Containers start in seconds.
Creating newly configured Containers on the fly that can run in any Cloud or in your data center is a new paradigm compared with how we have traditionally approached development and deployment. For example, Developers can share their Container images to create new Containers to quickly run unit tests, or present a database to a middle-tier developer for system integration testing, all without making a request to a System Administrator or needing additional infrastructure.
Docker can version your Containers. Imagine there is an issue with your production application that is impacting your business. Using Docker, you can quickly roll the Container back to the previous version, something the System Administrator can do without having to track down the Developer.
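A hedged sketch of that rollback scenario: the `myshop/web` image and its tags are invented for illustration, and the commands are assembled as a string rather than executed because running them needs a Docker daemon.

```shell
# Production runs myshop/web:v1.5 and is misbehaving. Because each release
# was tagged, rolling back is a sysadmin-only operation -- no rebuild, no
# need to involve the Developer.
image="myshop/web"
previous="v1.4"
rollback_cmds="docker stop web && docker rm web && docker run -d --name web ${image}:${previous}"
echo "$rollback_cmds"
```

The previous image layers are still in the repository, so the rollback Container starts in seconds from a known-good state.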
Docker is highly integrated with Linux and Eclipse, and very soon with Windows Server and Visual Studio (available in beta now).
Containers are native to Cloud Computing
The battle cry for Containers is: configure once, run anywhere. This future-proofs your application by allowing you to move it between Cloud service providers without refactoring it to account for API differences between Cloud hosting services.
Containers are ideal for a wide range of workloads: distributed computing, web sites, and databases, as well as scaling out and SDLC tasks such as testing.
Container instances start in seconds and can be created and started with scripts, making them ideal for scaling out your application to handle spikes in workload. Both AWS and Azure support creating and running Linux containers out of the box with your choice of Container management tools, such as Docker, PowerShell, and others. Windows Server containers are in beta now.
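Because Containers start in seconds, scale-out can be a simple script. This is a sketch: `myorg/web` is a hypothetical image, and the `docker run` commands are echoed rather than executed since a Docker daemon may not be available.

```shell
# Spin up N identical web containers to absorb a traffic spike; each gets
# a unique name and host port (a load balancer would sit in front).
scale=3
started=0
for i in $(seq 1 "$scale"); do
  echo "docker run -d --name web-$i -p $((8080 + i)):80 myorg/web:latest"
  started=$((started + 1))
done
```

The same loop run with `docker stop`/`docker rm` scales back down when the spike passes, which is what makes Containers a good fit for elastic Cloud workloads.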
Most development frameworks and languages work with Containers: PHP, Ruby, Go, Node, Perl, .NET, Java, JavaScript, and C++, just to name a few.
The coming wave of microservices architecture is another ideal use case for Containers. Microsoft is creating "Service Fabric" to manage and run microservices, purpose-built to use Containers.
Containers are a new and exciting computing resource. Vendors are rapidly launching new tools to manage the entire life cycle.
A breakthrough approach to Container security is now available from Polyverse. It directly addresses the risk of large-scale data breaches that is all too common.
Polyverse dynamically partitions a system into a vast number of individually secured containers, each of which is fully intact and isolated, runs existing database and middle-tier code, and has its own cyberdefenses. A successful cyberattack compromises only a single container rather than the entire system. Polyverse systems are also intrinsically and automatically self-healing: containers are continuously created from the last known good state and put into use servicing requests.
Polyverse works with existing hardware and software and can be installed into existing environments. All existing software and infrastructure remain intact.
Containers are real, momentum is strong, and they are being used in mission-critical applications. If you are not using Containers yet and are considering starting a pilot, here are some points to consider:
Containers enable non-traditional compute consumption models that support newer, more agile IT operations and more frequent feature updates to mobile, social, and Cloud applications and business services.
The abstraction from specific Cloud hosting services enables a configure-once, run-anywhere approach that minimizes the risk of Cloud provider lock-in after migrating your application to the Cloud, which in turn should spur growth in Cloud adoption.
The ability to layer on security add-ons with solutions like Polyverse to prevent data breaches represents a breakthrough in application and data protection that cannot be achieved without Containers.
For more, please follow me at @cameroncosgrove on Twitter.