Docker image vs container
You are probably already familiar with the typical
virtual machine setup: you select your server configuration, such
as memory and CPU, and then an operating system to run on it.
Somewhere beneath the virtual machine in the stack is physical
hardware whose resources are shared between the virtual machines. The host
hardware performs a balancing act, sharing resources between all the virtual
machines, giving more computing power where it is required and shifting it around
accordingly.
This is the de facto offering from most hosting providers -
you "own" the virtual machine and are entirely responsible for
running it.
In the VM scenario, every virtual machine runs its own
operating system. The inherent cost is that the OS consumes resources,
leaving whatever remains for the job of running your
application. If you have a virtual machine with 2 gigabytes of memory, the
operating system might be consuming 1 gigabyte before you have even served your
first user request.
Docker takes a different approach.
It does away with the notion of a guest OS and
instead acts more as an application broker for the host OS.
Does this mean the operating system is abstracted away
through "emulation"? Docker for Windows actually runs a minimal
Linux on Windows using Hyper-V (although this is a big oversimplification, and in
a recent beta it has become possible to run native Windows containers!).
What are containers?
The term "container" probably conjures up an image
of a shipping container which is the perfect analogy. Your apps run inside a
container and everything that it needs is then within the container. For
example, if your application were to make use of an native image processing
library to resize images users might upload, then this library would be added
to your container.
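For instance, a hypothetical Dockerfile fragment (assuming a Debian-based Node image, with ImageMagick standing in for the native library) might install it like this, so the library ships inside the container rather than on the host:
FROM node:8
# install the native image-processing library inside the container image
RUN apt-get update && \
    apt-get install -y imagemagick && \
    rm -rf /var/lib/apt/lists/*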
What are images?
Images are essentially a snapshot of a container that are
then used to base containers upon. For
example, if you were to build a Node app, you would typically use an existing
Node container image. These are described in an aptly named Dockerfile.
This is a Dockerfile taken from a Node.js-based Divio
project.
FROM node:8
COPY package.json .
RUN npm install
COPY . /app
# noop for legacy migration
RUN echo "#!/bin/bash" > /app/migrate.sh
&& \
chmod +x
/app/migrate.sh
EXPOSE 80
WORKDIR /app
CMD npm start
In this example, the FROM directive tells Docker that we
want to use a Node image as the basis for our Node.js application. Specifically, it
refers to the official Node repository on Docker Hub. In this case, since we specify
node:8, it resolves to the Node 8.12.0-jessie image.
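To build an image from this Dockerfile and then run it as a container, you would use the standard Docker CLI; the image name my-node-app below is just an illustrative placeholder:
# build an image from the Dockerfile in the current directory
docker build -t my-node-app .
# run a container from that image, mapping container port 80 to local port 8080
docker run --rm -p 8080:80 my-node-app
The same image can be rebuilt and run identically on any machine that has Docker installed.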
You can find images for almost everything at Docker Hub,
which is a large community repository for Docker images.
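Fetching one of these images is a single command; for example, the Node image used above:
# search Docker Hub for Node-related images
docker search node
# download the node:8 image referenced in the Dockerfile above
docker pull node:8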
You can probably already begin to see the benefits just with
this simple example configuration.
Why Docker?
Use your resources more effectively
The most obvious benefit is, of course, that the computing resources
are entirely dedicated to your containers. If you pay for a certain
specification, then that is what is actually made available to you, without
having to consider the resources lost to a guest OS. It also becomes
easier to understand scaling and resource consumption without having to factor
in the guest OS.
Continuous deployment and testing
Docker has quickly become a hot topic in DevOps thanks to the
time it saves in setup and configuration. In the example above, one line gave us
a working Node environment, ready to run our application.
This is amplified during development, especially in a team
environment. Rather than needing to set up your development environment repeatedly across
the team, perhaps mixing Linux, OS X and Windows, you can simply use a
container and be assured of the same environment everywhere. Your development machine is
kept clean, with everything neatly inside the container.
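For example, a common pattern (purely illustrative here) is to run your tools inside a throwaway container, mounting your local source code into it, so nothing needs to be installed on the host beyond Docker itself:
# run npm install inside a Node 8 container, with the current directory as the working directory
docker run --rm -it -v "$PWD":/app -w /app node:8 npm install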
Your local working environment then perfectly matches your
testing, pre-production and production environments with no risk of different
binaries or libraries. One test can cover everything without needing to worry
about differences in environments.
Version control and recovery
By having everything in your container, patches and changes
can be easily versioned through Git. In contrast, if you were to install a
patch directly on your VM, replicate it across your other environments and then find
that it leads to another issue, rolling back can be messy and cause breakage along
the way. Perhaps someone in the team applies a patch or change whilst others stay
on another version, causing time lost to debugging environments.
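Because the environment is defined in the Dockerfile, an environment change is just a normal commit, and rolling it back is a normal revert (commands shown purely for illustration):
# record an environment change alongside the code
git add Dockerfile
git commit -m "Pin Node base image to node:8"
# roll the environment change back if it causes problems
git revert HEAD
# rebuild the image from the reverted Dockerfile
docker build -t my-node-app .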
No vendor tie-in
Anywhere Docker can run, you can run your container. This
means that without changing a line of code, you can run your container on AWS, Azure
and others. Perhaps a customer wants to move an application to their own data
centre long after a project finishes - the container can be readily migrated
without needing to revisit deployment scripts or vendor-specific steps.
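Moving an image between hosts is itself just a couple of Docker commands; for example, exporting it to an archive and importing it elsewhere (image name illustrative):
# export the image to a compressed archive on the old host
docker save my-node-app | gzip > my-node-app.tar.gz
# import it on the new host and run it exactly as before
gunzip -c my-node-app.tar.gz | docker load
docker run --rm -p 8080:80 my-node-app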
How does Divio work?
Divio doesn't use traditional virtual machines.
All applications running on Divio are container-based. When you
first install the Divio Desktop application, it will automatically install
and configure Docker if it is not already installed. Further,
the Divio CLI (command-line interface) simplifies working with Docker by wrapping
some of the more complex commands.
For example, running divio doctor checks that your local development environment is set up correctly.
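Assuming the Divio CLI is installed, the check is a single command:
# verify that the local development environment is set up correctly
divio doctor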
When you run your application, a container is built and run
on your machine locally. When you deploy
to either testing or production, an identical container is then also deployed
for you.
If you want to get started quickly with Docker, head to the
Divio Control Panel and create a new project, then use Divio Desktop to sync it
with your local environment, and you will have your first Docker container up and running
in a minute or so.
Source: https://www.divio.com/blog/docker-image-vs-container/