Why Docker containers will take over the world
Migrating apps to the cloud
Moving existing workloads to the cloud used to be a choice
between IaaS and PaaS. The PaaS option means matching the requirements of your
app to the product catalogue of your chosen cloud, and adopting a new
architecture with components which are all managed services:
[Image: migrating apps to the cloud]
This is good for operational costs and efficiency, but it
takes a project to make it happen – you’ll need to change code and run full
regression test suites. And when you go live, you’re only running on one cloud,
so if you want to go multi-cloud or hybrid, it’s going to take another project.
The alternative is IaaS, which means renting VMs in the
cloud. It takes less initial effort as you just need to spin up a suite of VMs
and use your existing deployment artifacts and tools to deploy your apps:
[Image: renting VMs in the cloud]
But copying your VM landscape from the datacentre to the
cloud just means copying over all your operational and infrastructure
inefficiencies. You still have to manage all your VMs, and they’re still
massively under-utilised, but now you have a monthly bill showing you how
inefficient it all is.
The new way is to move your apps to containers first and
then run them in the cloud. You can use your existing deployment artifacts to
build Docker container images, so you don’t need to change code. You can
containerize pretty much anything if you can script the deployment into a
Dockerfile – it could be a 15-year-old .NET 2.0 app or last year’s Node.js app:
[Image: script the deployment into a Dockerfile]
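As a hedged example, a Dockerfile for an existing Node.js app can be just a few lines – the base image tag and the server.js entrypoint are assumptions, not from the webinar:

    # package an existing Node.js app without changing any code
    FROM node:8
    WORKDIR /app

    # install dependencies from the existing package manifest
    COPY package.json .
    RUN npm install --production

    # copy the application source as-is
    COPY . .
    CMD ["node", "server.js"]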
Dockerized apps run in the same way everywhere, so
developers can run the whole stack locally using Docker Desktop. You can run
them in the datacentre or the cloud using Docker Enterprise or choose your
cloud provider’s container service. These apps are now portable, run far more
efficiently than they did on VMs and use the latest operating systems, so it’s
a great way to move off Windows Server 2003 and 2008, which are soon to be out
of support.
Delivering cloud native apps
Everywhere from start-ups to large enterprises, people are
seeing the benefits of a new type of application architecture. The Cloud
Native Computing Foundation (CNCF) defines these types of apps as having a
microservices design, running in containers and dynamically managed by a
container platform.
Cloud native apps run efficiently and scale easily. They’re
self-healing, so application and infrastructure issues don’t cause downtime.
And they’re designed to support fast, incremental updates. Microservices
running in containers can be updated independently, so a change to the product
catalogue service can be rolled out without having to test the payment service,
because the payment service isn’t changing:
[Image: microservices running in containers]
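In Docker Swarm terms that independent rollout is a single command – the service and image names here are hypothetical:

    # update only the product catalogue service; payment keeps running untouched
    docker service update --image shop/product-catalogue:2.1 catalogue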
This architecture is from the microservices-demo sample on
GitHub, which is all packaged to run in containers, so you can spin up the
whole stack on your laptop. It uses a range of programming languages and
databases chosen as the best fit for each component.
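To try it, clone the repo and start the stack with Docker Compose – assuming the compose file is still at this path in the repository:

    git clone https://github.com/microservices-demo/microservices-demo.git
    cd microservices-demo
    docker-compose -f deploy/docker-compose/docker-compose.yml up -d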
Modernizing traditional apps
You can run your existing applications and your new cloud
native applications in Docker containers on the same cluster. It’s also a great
platform for evolving legacy applications, so they look and feel more like
cloud native apps, and you can do it without a 2-year rearchitecture project.
You start by migrating your application to Docker. This example is for a
monolithic ASP.NET web app and a SQL Server database:
[Image: monolithic ASP.NET web app and SQL Server database]
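A minimal sketch of that v1 state as a Docker Compose file – the image names and SA password are placeholders, and the SQL Server image depends on whether you run Windows or Linux containers:

    version: '3.3'
    services:
      signup-web:
        # the monolith, packaged as-is from existing deployment artifacts
        image: myorg/signup-web:v1
        ports:
          - "80:80"
        depends_on:
          - signup-db
      signup-db:
        # SQL Server Express running in a container
        image: microsoft/mssql-server-windows-express
        environment:
          - ACCEPT_EULA=Y
          - sa_password=Str0ngPassw0rd!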
Now you can start breaking features out of the monolith and
running them in separate containers. Version 2 could use a reverse proxy to
direct traffic between the existing monolith and a new application homepage
running in a separate container:
[Image: reverse proxy directing traffic between the existing monolith and a new application homepage running in a separate container]
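As a hedged sketch, here’s what that v2 routing could look like with Traefik 1.x as the reverse proxy – the service names, images and routing rules are illustrative only, and the rule syntax changes between Traefik versions:

    version: '3.3'
    services:
      proxy:
        # Traefik 1.x watches the Docker API and routes by container labels
        image: traefik:1.7
        command: --docker
        ports:
          - "80:80"
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock
      homepage:
        # new homepage component takes over the root path
        image: myorg/signup-homepage:v2
        labels:
          - "traefik.frontend.rule=Path:/"
          - "traefik.frontend.priority=100"
      signup-web:
        # original monolith still serves everything else
        image: myorg/signup-web:v1
        labels:
          - "traefik.frontend.rule=PathPrefix:/"
          - "traefik.frontend.priority=1"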
This is a simple pattern for breaking down web UIs without
having to change code in the original monolith. For the next release you could break
out an internal feature of the application and expose it as a REST API running
in another container:
[Image: REST API running in another container]
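Continuing the same hypothetical compose file, version 3 just adds another service:

    services:
      reports-api:
        # extracted feature running as its own REST API; the monolith
        # now calls it at http://reports-api instead of in-process
        image: myorg/signup-reports-api:v3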
These new components are completely independent of the
original monolith. You can use whatever tech stack you like. Each feature can
have its own release cadence, and you can run each component at the scale it
needs.
Technical innovation: Serverless
By now you’ve got legacy apps, cloud native apps and evolved
monoliths all running in Docker containers on the same cluster. You build,
package, distribute, run and manage all the components of all the apps in the
same way. Your entire application landscape is running on a secure, modern and
open platform.
It doesn’t end there. The same platform can be used to
explore technical innovations. Serverless is a promising new deployment model
and it’s powered by containers. AWS Lambda and Azure Functions are proprietary
implementations, but there are plenty of open-source serverless frameworks
which you can deploy with Docker in the datacentre or in the cloud:
[Image: Docker in the datacentre or the cloud]
The CNCF serverless working group has defined the common
architecture and pipeline processes of the current options. If you’re
interested in the serverless model, but you’re running on-premises or across
multiple clouds, then an open framework is a good option to explore. Nuclio is
simple to get started with and it runs in Docker containers on the same
platform as your other apps.
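As a hedged starting point, the Nuclio dashboard runs as a single container that uses the host’s Docker engine to build and run functions – check the Nuclio docs for the current image name and tag:

    docker run -d -p 8070:8070 \
      -v /var/run/docker.sock:/var/run/docker.sock \
      --name nuclio-dashboard nuclio/dashboard:stable-amd64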
Process innovation: DevOps
The next big innovation is DevOps, which is about breaking
down the barriers between teams who build software and teams who run software
with the goal of getting better quality software to market faster. DevOps is
more about culture and process than it is about software, but it’s difficult to
make impactful changes if you’re still using the same technologies and tools.
CALMS is a good framework for understanding the areas to
focus on in a DevOps transformation: culture, automation, lean,
measurement and sharing are the key pieces. It’s much easier to make progress and to
quantify success in those areas if you underpin them with technical change.
Adopting containers underpins that framework:
[Image: Docker underpins CALMS]
It’s much easier to integrate teams when they’re
working with the same tools and speaking the same language – Dockerfiles and
Docker Compose files live with the application source code and are jointly
owned by Dev and Ops. They provide a common ground to work together.
Automation is central to Docker. It’s much harder to
manually craft a container than it is to automate one with a Dockerfile.
Breaking apps into small units supports lean, and you can bake metrics into all
those components to give you a consistent way of monitoring different types of
apps. Sharing is easy with Docker Hub where there are hundreds of thousands of
apps packaged as Docker images.
Webinar Q&A
We had plenty of questions at the end of the session, and
not enough time to answer them all. Here are the questions that got missed.
Q. You said you can run your vote app on your laptop, but
it's a mix of Linux and Windows containers. That won't work, will it?
A. No, you can’t run a mixture of Linux and Windows
containers on a single machine. You need a cluster running Docker Swarm
with a mixture of Linux and Windows servers to do that. The example voting app
has different versions, so it can run in all-Linux, all-Windows or hybrid
environments.
Q. Compile [your apps from source using Docker containers]
with what? MSBuild in this case?
A. Yes, you write a multi-stage Dockerfile where the first
stage compiles your app. That stage uses a Docker image which has your toolset
already deployed. Microsoft have .NET Framework SDK images and .NET Core
images, and there are official Docker images for other platforms like Go, and
Maven for Java. You can build your own SDK image and package whatever tools you
need.
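A hedged sketch of such a multi-stage Dockerfile for a .NET Core app – the image tags and project name are assumptions:

    # stage 1: compile the app using the SDK image
    FROM microsoft/dotnet:2.1-sdk AS builder
    WORKDIR /src
    COPY . .
    RUN dotnet publish WebApp.csproj -c Release -o /out

    # stage 2: package the output into the runtime-only image
    FROM microsoft/dotnet:2.1-aspnetcore-runtime
    WORKDIR /app
    COPY --from=builder /out .
    ENTRYPOINT ["dotnet", "WebApp.dll"]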
Q. How do we maintain sticky sessions with Docker Swarm or
Kubernetes if a legacy application is installed in the cluster?
A. You’ll have a load-balancer across your cluster nodes, so
traffic could come into any server, and then you could be running multiple
containers on that server. Neither Docker Swarm nor Kubernetes provides session
affinity to containers out of the box, but you can add it by running a reverse
proxy like Traefik, or a session-aware ingress controller for Kubernetes like
Nginx.
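For example, in a hypothetical compose file Traefik 1.x can turn on cookie-based affinity with a single label (the label syntax is different in later Traefik versions):

    services:
      legacy-web:
        image: myorg/legacy-web:v1
        labels:
          # Traefik 1.x: route each user back to the same container
          - "traefik.backend.loadbalancer.stickiness=true"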
Q. How do different OS requirements work when testing on a
desktop? (e.g. Some containers need Linux, some need Windows, and a Mac is used
for development)
A. Containers are so efficient because they use the
underlying OS of the host where they’re running. That means Linux containers
need to run on a Linux host and Windows containers on a Windows host. Docker
Desktop makes that easy – it provisions and manages a Linux VM for you. Docker
Desktop for Mac only lets you run Linux containers, but Docker Desktop for
Windows supports both Windows and Linux containers.
Q. How do IDEs fit into Docker (e.g. making sure all dev
team members are using compatible IDE configurations)?
A. The beauty of compiling and packaging your apps from
source using Docker is that it doesn’t matter what IDEs people are using. When
developers test the app locally, they will build and run it using Docker
containers with the same build scripts that the CI uses. So the build is
consistent, and the team doesn’t need to use the same IDE – people could use
Visual Studio, VS Code or Rider on the same project.
Q. What is the best way to orchestrate Windows containers?
A. Right now only Docker Swarm supports Windows nodes in
production. You can join several Windows servers together with Docker Swarm or
provision a mixed Linux-Windows cluster with Docker Enterprise. Kubernetes
support for Windows nodes is expected to GA by the end of 2018.
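Setting up a swarm is a couple of commands – the IP address is a placeholder, and the real join token is printed by the init command:

    # on the first (manager) node
    docker swarm init --advertise-addr 10.0.0.1

    # on each Windows server you want to add as a worker
    docker swarm join --token <worker-token> 10.0.0.1:2377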
Q. Do I need a hypervisor to manage the underlying hardware
my Docker environment runs on? Better
yet, does using Docker obviate the need for VMware?
A. Docker can run on bare metal or on a VM. A production
Docker server just has a minimal OS installed (say Ubuntu Server or Windows
Server Core) and Docker running.
Q. Can SQL Server
running in a container use Windows authentication?
A. Yes. Containers are not domain-joined by default, but you
can run them with a credential spec, which means they can access AD using the
credentials of a group-managed service account.
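A hedged example of that run command – the credential spec file name is hypothetical, and the JSON file needs to exist in Docker’s credential specs directory on the host:

    docker run --security-opt "credentialspec=file://webapp-gmsa.json" myorg/web-app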
Q. Any advice for Java build/compile inside containers... for an old Eclipse
IDE-dependent project?
A. You need to get to the point where you can build your app
through scripts without any IDE. If you can migrate your build to use Maven
(for example), then you can build and package with your Maven setup in the
Dockerfile.
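A hedged sketch of that Maven build in a multi-stage Dockerfile – the image tags and artifact name are assumptions:

    # stage 1: build with Maven, caching dependencies first
    FROM maven:3.5-jdk-8 AS builder
    WORKDIR /src
    COPY pom.xml .
    RUN mvn dependency:go-offline
    COPY src ./src
    RUN mvn package -DskipTests

    # stage 2: runtime-only image with just the built artifact
    FROM openjdk:8-jre
    WORKDIR /app
    COPY --from=builder /src/target/app.jar .
    CMD ["java", "-jar", "app.jar"]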
Q. So, the server has to have all of the applications that
the containers will need? What happens if the server doesn't have some
application that the container needs?
A. No, exactly the opposite! The Docker image is the package
that has everything the container needs. So, an ASP.NET app in a Docker image
will have the .NET Framework, IIS and ASP.NET installed and you don’t need any
of those components installed on the server that’s running the container.
Q. If you need multiple technologies to run your application
how do you create a Docker image that supports them in a single package? What
about if you need a specific tech stack that isn't readily available?
A. Your application image needs all the pre-requisites for
the app installed. You can use an existing image if that gives you everything
you need or build your own. As long as you can script it, you can put it in a
Dockerfile – so a Windows Dockerfile could use Chocolatey to install
dependencies.
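A hedged Windows Dockerfile fragment along those lines – the base image tag and packages are illustrative:

    FROM microsoft/windowsservercore:ltsc2016
    SHELL ["powershell", "-Command"]

    # install Chocolatey, then script the app's dependencies with it
    RUN iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))
    RUN choco install -y git nodejs-lts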
Q. How does Docker decide which libraries and runtimes will be part of the
container? How does it demarcate between the OS and other runtimes?
A. Docker doesn’t decide that. It’s down to whoever builds
the application image. The goal is to make your runtime image as small as
possible with only the dependencies your app actually needs. That gives you a
smaller attack surface and reduces build and deployment times.

Source: https://www.pluralsight.com/blog/it-ops/docker-containers-take-over-world