Microservices, Containers and Kubernetes in 10 minutes
What is a Microservice?
What is a microservice? Should you be using microservices?
How are microservices related to containers and Kubernetes? If these things
keep coming up in your day-to-day and you need an overview in 10 minutes, this
blog post is for you.
Fundamentally, a microservice is just a computer program
which runs on a server or a virtual computing instance and responds to network
requests.
How is this different from a typical Rails/Django/Node.js
application? It is not different at all. In fact, you may discover that you
already have a dozen microservices deployed at your organization. There is
no new magical technology that qualifies your application to be called a
microservice. A microservice is not defined by how it is built but by how it
fits into the broader system or solution.
So what makes a service a microservice? Generally,
microservices have a narrower scope and focus on doing smaller tasks well.
Let’s explore further by looking at an example.
Example: Amazon Product Listing
Let’s examine the system which serves you this product page
on Amazon. It contains several blocks of information, probably retrieved from
different databases:
The product description, which includes the price, title,
photo, etc.
Recommended items, i.e. similar books other people have
bought.
Sponsored listings that are related to this item.
Information about the author of the book.
Customer reviews.
Your own browsing history of other items on the Amazon
store.
If you were to quickly write the code which serves this
listing, the simple approach would look something like this:
monolithic design
When a user’s request comes from a browser, it will be
served by a web application (a Linux or Windows process). Usually, the
application code fragment which gets invoked is called a request handler. The
logic inside the handler will sequentially make several calls to databases,
fetch the required information, stitch it together
and render a web page to be returned to the user. Simple, right? In fact, many
Ruby on Rails books feature tutorials and examples that look like this. So, why
complicate things, you may ask?
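A minimal sketch of such a monolithic handler, with the database calls stubbed out as local functions (all names and data here are illustrative, not Amazon's actual code):

```python
# One process, one handler: every block of the page is fetched
# sequentially and stitched together in the same place.

def fetch_product(product_id):
    # In a real app this would query the products database.
    return {"title": "Some Book", "price": 29.99}

def fetch_recommendations(product_id):
    # ...query the recommendations store...
    return ["Another Book", "A Third Book"]

def fetch_reviews(product_id):
    # ...query the reviews database...
    return [{"stars": 5, "text": "Loved it"}]

def product_page_handler(product_id):
    """The request handler makes all the calls and renders the whole page."""
    return {
        "product": fetch_product(product_id),
        "recommended": fetch_recommendations(product_id),
        "reviews": fetch_reviews(product_id),
    }
```

Every team that owns one of these data sources ships inside the same process, which is exactly what becomes painful as the application grows.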
Imagine what happens as the application grows and more and
more engineers become involved. The recommendation engine alone in the example
above is maintained by a small army of programmers and data scientists. There
are dozens of different teams who are responsible for some component of
rendering that page. Each of those teams usually wants the freedom to:
Change their database schema.
Release their code to production quickly and often.
Use development tools like programming languages or data
stores of their choice.
Make their own trade-offs between computing resources and
developer productivity.
Have a preference for maintenance/monitoring of their
functionality.
As you can imagine, having the teams agree on everything to
ship newer versions of the web store application will become more difficult
over time.
The solution is to split up the components into smaller,
separate services (aka, microservices).
microservices design
The application process becomes smaller and dumber. It’s
basically a proxy which simply breaks down the incoming page request into
several specialized requests and forwards them to the corresponding
microservices, which are now their own processes running elsewhere. The
“application microservice” is basically an aggregator of the data returned by
specialized services. You may even get rid of it entirely and offload that job
to the user’s device, having this code run in the browser as a single-page
JavaScript app.
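The aggregator pattern above can be sketched like this, with the network calls to each microservice stubbed as local functions (all names are hypothetical; in production each call would be an HTTP or gRPC request to a separately deployed process):

```python
# Each stub stands in for a network call to an independently
# deployed, independently scaled service.

def call_product_service(product_id):
    return {"title": "Some Book", "price": 29.99}

def call_recommendation_service(product_id):
    return ["Another Book"]

def call_review_service(product_id):
    return [{"stars": 5}]

SERVICES = {
    "product": call_product_service,
    "recommended": call_recommendation_service,
    "reviews": call_review_service,
}

def aggregate_page(product_id):
    """The application layer shrinks to a fan-out-and-merge proxy."""
    return {name: call(product_id) for name, call in SERVICES.items()}
```

Notice the aggregator knows nothing about databases or schemas; each team can change its service internals freely as long as the response shape holds.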
The other microservices are now separated out and each
development team working on their microservice can:
Deploy their service as frequently as they wish without
disrupting other teams.
Scale their service the way they see fit. For example, use
AWS instance types of their choice or perhaps run on specialized hardware.
Have their own monitoring, backups and disaster recovery
that are specific to their service.
What is the difference between microservices and containers?
A container is just a method of packaging, deploying and
running a Linux program/process. You could have one giant monolithic
application packaged as a container, and you could have a swarm of
microservices that do not use containers at all.
A container is a useful resource allocation and sharing
technology. It’s something devops people get excited about. A microservice is a
software design pattern. It’s something developers get excited about.
Containers and microservices are both useful but not
dependent on each other.
When to use Microservices?
The idea behind microservices is not new. For decades,
software architects have been at work trying to decouple monolithic
applications into reusable components. The benefits of microservices are
numerous and include:
easier automated testing;
rapid and flexible deployment models; and
higher overall resiliency.
Another win of adopting microservices is the ability to pick
the best tool for the job. Some parts of your application can benefit from the
speed of C++ while others can benefit from increased productivity of higher
level languages such as Python or JavaScript.
The drawbacks of microservices include:
the need for more careful planning;
higher R&D investment up front; and
the temptation of over-engineering.
If an application and its development team are small enough and
the workload isn’t challenging, there is usually no need to adopt
microservices and throw additional engineering resources at problems you do
not have yet. However, if you are starting to see the benefits of
microservices outweigh the disadvantages, here are some specific design
considerations:
Separation of computing and storage. As your needs for CPU
power and storage grow, these resources have very different scaling costs and
characteristics. Not having to rely on local storage from the beginning will
allow you to adapt to future workloads with relative ease. This applies to both
simple storage forms like file systems and more complex solutions such as
databases.
Asynchronous processing. The traditional approach of
gradually building applications by adding more and more subroutines or objects
that call each other stops working as workloads grow and the application itself
must be stretched across multiple machines or even data centers.
Re-architecting the application around an event-driven model will be required.
This means sending an event instead of calling a
function and synchronously waiting for a result.
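A toy sketch of the difference, using an in-memory queue as a stand-in for a real message bus (all names are illustrative):

```python
import queue

# Stand-in for a real message bus (Redis, Kafka, etc.).
events = queue.Queue()

def place_order(order_id):
    # Instead of calling the fulfillment code and waiting for it,
    # the handler publishes an event and responds immediately.
    events.put({"type": "order_placed", "order_id": order_id})
    return "accepted"

def fulfillment_worker():
    # In production this loop runs as its own process, possibly on
    # another machine; here it just drains the queue.
    handled = []
    while not events.empty():
        handled.append(events.get())
    return handled
```

The caller no longer blocks on slow work, and the worker can be scaled, moved or restarted independently of the code that emits the events.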
Embrace the message bus. This is a direct consequence of
adopting an asynchronous processing model. As your monolithic
application gets broken into event handlers and event emitters, a
robust, performant and flexible message bus becomes a requirement. There are numerous
options and the choice depends on application scale and complexity. For a
simple use case, something like Redis will do. If you need your application to
be truly cloud-native and scale itself up and down, you may need the ability to
process events from multiple event sources: from streaming pipelines like Kafka
to infrastructure and even monitoring events.
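To make the idea concrete, here is a minimal in-memory publish/subscribe bus; a real deployment would swap this for Redis, Kafka or similar, but the shape of the interaction is the same (names are illustrative):

```python
from collections import defaultdict

class Bus:
    """Toy pub/sub bus: topics fan events out to every subscriber."""

    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, event):
        for handler in self.subscribers[topic]:
            handler(event)

bus = Bus()
seen = []
bus.subscribe("item.viewed", seen.append)     # e.g. browsing-history service
bus.subscribe("item.viewed", lambda e: None)  # e.g. recommendations service
bus.publish("item.viewed", {"item": "book-123"})
```

The emitter does not know or care how many services consume the event, which is what lets new microservices attach to existing event streams without code changes elsewhere.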
API versioning. As your microservices will use each
other’s APIs to communicate over the bus, designing a scheme for
maintaining backward compatibility will be critical. A developer deploying the
latest version of one microservice should not force everyone else to upgrade
their code; that would be a step backward toward the monolith, albeit one
separated across application domains. Development
teams must agree on a reasonable compromise between supporting old APIs
forever and keeping a high velocity of development. This also means that
API design becomes an important skill. Frequent breaking API changes are one of
the reasons teams fail to be productive in developing complex microservices.
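One common way to preserve backward compatibility is to version the messages themselves and have consumers accept both shapes during a transition; a hypothetical sketch:

```python
def handle_review_posted(event):
    """Consumer that tolerates two generations of the same event."""
    version = event.get("version", 1)
    if version == 1:
        # v1 carried a bare star count at the top level.
        stars = event["stars"]
    else:
        # v2 nested the rating; v1 events remain valid, so teams
        # still emitting the old shape are not forced to upgrade.
        stars = event["rating"]["stars"]
    return stars
```

Once every producer has moved to v2, the v1 branch can be retired on the consumers' own schedule rather than in a lock-step release.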
Rethink your security. Many developers do not realize this
but migrating to microservices creates an opportunity for a much better
security model. As every microservice is a specialized process, it is a good
idea to only allow it to access resources it needs. This way a vulnerability in
just one microservice will not expose the rest of your system to an attacker.
This is in contrast with a large monolith which tends to run with elevated
privileges (a superset of what everyone needs) and there is limited opportunity
to restrict the impact of a breach.
What does Kubernetes have to do with microservices?
Kubernetes is too complex to describe in detail here, but it
deserves an overview since many people bring it up in conversations about
microservices.
Strictly speaking, the primary benefit of Kubernetes (aka,
K8s) is to increase infrastructure utilization through the efficient sharing of
computing resources across multiple processes. Kubernetes excels at
dynamically allocating computing resources to meet demand, which allows
organizations to avoid paying for computing resources they are not using.
However, there are side benefits of K8s that make the transition to
microservices much easier.
As you break down your monolithic application into separate,
loosely-coupled microservices, your teams will gain more autonomy and freedom.
However, they still have to closely cooperate when interacting with the
infrastructure the microservices must run on.
You will have to solve problems like:
predicting how much computing capacity each service will
need;
how these requirements change under load;
how to carve out infrastructure partitions and divide them
between microservices; and
how to enforce resource restrictions.
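In Kubernetes, these concerns map directly onto resource requests and limits declared per container; a sketch of a Pod spec (the service name and image are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: recommendations            # illustrative service name
spec:
  containers:
  - name: recommendations
    image: example.com/recommendations:1.0   # hypothetical image
    resources:
      requests:        # what the scheduler reserves for this container
        cpu: "250m"
        memory: "256Mi"
      limits:          # hard ceiling enforced at runtime
        cpu: "500m"
        memory: "512Mi"
```

Each team declares its own requests and limits, and the scheduler carves up the shared infrastructure accordingly, so the partitioning no longer has to be negotiated by hand.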
Kubernetes solves these problems quite elegantly and
provides a common framework to describe, inspect and reason about
infrastructure resource sharing and utilization. That’s why adopting Kubernetes
as part of your microservice re-architecture is a good idea.
Kubernetes, however, is a complex technology to learn and
it’s even harder to manage. You should take advantage of a hosted Kubernetes
service provided by your cloud provider if you can. However, this is not always
viable for companies who need to run their own Kubernetes clusters across
multiple cloud providers and enterprise data centers.
For such use cases, we recommend trying out Gravity, the
open source Kubernetes packaging solution, which removes the need for
Kubernetes administration. Gravity works by packaging a Kubernetes cluster
into a single image file, or “Kubernetes appliance,” which can be downloaded,
moved, created and destroyed by the hundreds, making it possible to treat
Kubernetes clusters like cattle, not pets.
Conclusion
To summarize:
Microservices are not new. They are an old software design
pattern that has been growing in popularity due to the growing scale of
Internet companies.
Small projects should not shy away from the monolithic design. It
offers higher productivity for smaller teams.
Kubernetes is a great platform for complex applications
comprised of multiple microservices.
Kubernetes is also a complex system and hard to run.
Consider using hosted Kubernetes if you can.
If you must run your own K8s clusters or if you need to
publish your K8s applications as downloadable appliances, consider the open
source solution, Gravity.
Source: https://gravitational.com/blog/microservices-containers-kubernetes/