What is a Linux Container
LXC (Linux Containers) is an operating-system-level, lightweight virtualization method for running multiple isolated Linux systems (containers) on a single control host. The Linux kernel provides cgroups for resource isolation (CPU, memory, block I/O, network, …) without requiring virtual machines that emulate physical hardware, so an application in LXC performs essentially the same as it would directly on the host. The kernel also provides namespace isolation to completely isolate an application's view of the operating environment.
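These two kernel building blocks are visible on any ordinary Linux host. The following commands (a sketch for a typical Linux machine; exact entries vary between kernels and between cgroup v1 and v2) show the namespaces and cgroups the current process lives in:

```shell
# Every process has a set of namespaces; each entry below is one kind of
# isolation the kernel can apply per container (pid, net, mnt, uts, ipc, ...).
ls -l /proc/self/ns

# Show which cgroups this process belongs to; a container is placed in its
# own cgroup subtree so the kernel can cap its CPU, memory and block I/O.
cat /proc/self/cgroup
```

A container is simply a process whose entries here differ from the host's.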
You can run a single application within a container (an application container) whose namespaces are isolated from the other processes on the system, in a manner similar to a chroot jail. That is, starting a container is like starting a normal process on the machine, except that the process carries a flag indicating it belongs to a container. This is very similar to the UID check performed on a process, so it adds very little extra code path compared to a normal process.
The main use of Linux Containers is to let you run a complete copy of a Linux operating system in a container (a system container) without the overhead of running a type-2 hypervisor such as VirtualBox. One thing hypervisors can do that containers cannot, however, is run different operating systems or kernels. For example, on Microsoft Azure you can run instances of Windows Server 2012 and SUSE Linux Enterprise Server at the same time, whereas all containers on a host must use the same operating system and kernel.
A container can bind-mount a directory, with zero overhead, to the host or to another container running on the same host. The directory can contain named pipes (FIFOs), UNIX sockets, and memory-mapped files. A container can also share the host's network stack (by reusing its network namespace) with native performance.
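With Docker, this kind of sharing is a single flag on the CLI. A minimal sketch (it assumes a running Docker daemon; the image name and paths are only placeholders):

```shell
# Bind-mount a host directory into the container with no copy overhead;
# FIFOs, UNIX sockets and memory-mapped files in /srv/shared work across
# the container boundary because it is the same directory on disk.
docker run --rm -v /srv/shared:/shared alpine ls /shared

# Reuse the host's network namespace for native network performance;
# the container sees the host's interfaces directly.
docker run --rm --network host alpine ip addr
```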
“Ops” functions (backups, logging, and so on) can be performed in separate containers, and application containers can run unchanged in various environments.
What is Docker
Docker was originally built on top of LXC to automate the deployment of applications inside containers, and it provides the capability to package an application with its runtime dependencies into a container. Docker is not so much about containers as about standardizing the software unit and building infrastructure around that unit. As a result, we have a standard interface for deploying any “dockerized” application, whether it is written in Java, C, or Node.js.
It provides the docker CLI command-line tool for the lifecycle management of image-based containers. Docker works with the following fundamental components.
- Container – an application sandbox. Each container is based on an image that holds necessary configuration data. When you launch a container from an image, a writable layer is added on top of this image. Every time you commit a container (using the docker commit command), a new image layer is added to store your changes.
- Image – a static snapshot of a container's configuration. An image is a read-only layer that is never modified; all changes are made in the top-most writable layer and can be saved only by creating a new image. Each image depends on one or more parent images.
- Platform Image – an image that has no parent. Platform images define the runtime environment, packages, and utilities necessary for a containerized application to run. The platform image is read-only, so any changes are reflected in the copied images stacked on top of it.
- Registry – a repository of images. Registries are public or private repositories that contain images available for download. Some registries allow users to upload images to make them available to others.
- Dockerfile – a configuration file with build instructions for Docker images. Dockerfiles provide a way to automate, reuse, and share build procedures.
- Docker Daemon – Docker running in daemon mode.
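How these components fit together can be sketched with a minimal build-and-run cycle (this assumes a running Docker daemon; the image names, tags, registry host, and app.py are placeholders):

```shell
# A Dockerfile automates the build: each instruction adds a read-only
# layer on top of the parent image named by FROM.
cat > Dockerfile <<'EOF'
FROM alpine:3.19
RUN apk add --no-cache python3
COPY app.py /app/app.py
CMD ["python3", "/app/app.py"]
EOF

docker build -t myapp:1.0 .        # Image: a stack of read-only layers
docker run --name web myapp:1.0    # Container: image + a writable top layer
docker commit web myapp:1.1        # Freeze the writable layer as a new image
docker push registry.example.com/myapp:1.1   # Registry: share the image
```

Note how the commit step matches the layering described above: the container's writable layer becomes a new read-only image layer.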
Why use Docker
Docker brings an API for container management, an image format, and the ability to use a remote registry for sharing containers. This scheme benefits both developers and system administrators, with advantages such as:
- Rapid application deployment – containers include the minimal runtime requirements of the application, reducing their size and allowing them to be deployed quickly.
- Portability across machines – an application and all its dependencies can be bundled into a single container that is independent of the host's Linux kernel version, platform distribution, or deployment model. This container can be transferred to another machine that runs Docker and executed there without compatibility issues.
- Version control and component reuse – you can track successive versions of a container, inspect differences, or roll-back to previous versions. Containers reuse components from the preceding layers, which makes them noticeably lightweight.
- Sharing – you can use a remote repository to share your container with others. Red Hat provides a registry for this purpose, and it is also possible to configure your own private repository.
- Lightweight footprint and minimal overhead – Docker images are typically very small, which facilitates rapid delivery and reduces the time to deploy new application containers.
- Simplified maintenance – Docker reduces effort and risk of problems with application dependencies.
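The version-control and component-reuse points above map onto a few CLI commands (again assuming a Docker daemon and a locally built image, here called myapp as a placeholder):

```shell
# List the stacked layers of an image and the instruction that created each.
docker history myapp:1.0

# Show files added/changed/deleted in a running container's writable layer,
# i.e. the diff against its image.
docker diff web

# Tags are cheap pointers to existing layers; no data is copied.
docker tag myapp:1.0 myapp:stable
```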
Docker also makes it possible to set up local development environments that exactly match a live server, so a developer can pack, ship, and run an application as a lightweight, portable, self-sufficient container that can run virtually anywhere.
This matters especially when deploying microservices, where we need an extremely fast and convenient way to deploy many services, often many times a day. Using virtual machines and configuration management tools such as Puppet or Chef takes a long time just to redeploy the application and perhaps restart the VMs. Also, since VMs are heavy, we end up putting multiple services in the same VM, which makes scaling less efficient: we have to bring up all of those services together even when only one of them is needed. Putting them in application containers instead (each service in its own container) makes it easy to scale a single service out, without the added overhead of starting a virtual machine.
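Scaling one service out then amounts to starting more containers from the same image rather than booting more VMs. A sketch (assumes a Docker daemon; the service name, image, and ports are placeholders):

```shell
# Start three independent instances of one service, each in its own
# container, mapped to different host ports; each starts in roughly a
# second instead of the minutes a VM boot would take.
for port in 8081 8082 8083; do
  docker run -d --name "orders-$port" -p "$port:8080" orders-service:latest
done
```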