What are Microservices and Containers?
Microservices is an architectural design for building a distributed application. A microservices architecture breaks an application into independent, loosely coupled, individually deployable services. Microservices get their name because each important function of the application operates as an independent service. This design allows each service to scale or be updated without disrupting other services in the application, enabling the rapid, frequent and reliable delivery of large, complex applications that can be continuously delivered to end users.
A microservices framework, combining microservices and containers, creates a massively scalable and distributed system that avoids the bottlenecks of a central database. It also enables continuous integration / continuous delivery (CI/CD) pipelines for applications and helps modernize the technology stack.
Companies like Amazon and Netflix have re-architected their monolithic applications into microservices applications, setting a new standard for container technology.
What are Containers?
Containers are a lightweight, efficient and standard way for applications to move between environments and run independently. Everything needed (except for the shared operating system on the server) to run the application is packaged inside the container object: code, run time, system tools, libraries and dependencies.
The biggest benefit of microservices is simplicity. Applications are easier to build, optimize and maintain when they’re split into a set of smaller parts. Managing the code also becomes more efficient because each microservice can use the programming language, database and software ecosystem best suited to it. More microservice benefits include:
- Independence — Small teams of developers can work more nimbly than large teams.
- Resilience — An application will still function if part of it goes down because microservices allow for spinning up a replacement.
- Scalability — Meeting demand is easier when only the necessary components have to scale, which requires fewer resources.
- Lifecycle automation — The individual components of microservices fit more easily into continuous delivery pipelines, where monoliths add complexity.
Types of Containers
Stateless
Stateless microservices don’t save or store data; they handle requests and return responses, and any data required for the request is discarded when the request is complete. Stateless containers may use limited storage, but anything stored is lost when the container restarts.
Stateful
Stateful microservices require storage to run; they read from and write to data saved in a database, and that storage persists when the container restarts. However, stateful microservices don’t usually share databases with other microservices.
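The difference between stateless and stateful services can be sketched in a few lines of Python. Both handlers below are hypothetical illustrations, not a real framework API:

```python
# Hypothetical sketch contrasting stateless and stateful request handling.

def stateless_greeting(name: str) -> str:
    """Stateless: everything needed arrives with the request; nothing is kept."""
    return f"Hello, {name}!"

class StatefulCounter:
    """Stateful: reads and writes data that must persist across requests.
    In a real deployment this dict would be a database or mounted volume,
    since anything held only in container memory is lost on restart."""
    def __init__(self):
        self._counts = {}

    def handle(self, user: str) -> int:
        self._counts[user] = self._counts.get(user, 0) + 1
        return self._counts[user]
```

Calling `stateless_greeting` twice with the same input always gives the same answer, while `StatefulCounter.handle` returns a different value each call because it depends on stored state.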
Monolithic Architecture versus Microservices Architecture
Applications were traditionally built as monolithic pieces of software. Monolithic applications have long life cycles, are updated infrequently and changes usually affect the entire application. Adding new features requires reconfiguring and updating the entire stack — from communications to security. This costly and cumbersome process delays time-to-market and updates in application development.
Microservices architecture was designed to remedy this problem. All services are created individually and deployed separately, which allows autoscaling based on specific business needs. Because container workloads are highly transient and must scale rapidly without affecting other parts of the application, containers and microservices require more flexible and elastic load balancing.
Monolithic architecture:
- Application is a single, integrated software instance
- Application instance resides on a single server or VM
- Updates to an application feature require reconfiguration of the entire app
- Network services can be hardware-based and configured specifically for the server

Microservices architecture:
- Application is broken into modular components
- Application can be distributed across clouds and datacenters
- Adding new features only requires the individual microservice to be updated
- Network services must be software-defined and run as a fabric that each microservice connects to
Why Microservices Architecture Needs Container Ingress
Applications require a set of services from their infrastructure: load balancing, traffic management, routing, health monitoring, security policies, service and user authentication, and protection against intrusion and DDoS attacks. These services have often been implemented as discrete appliances, and providing an application with these services meant logging into each appliance to provision and configure the service.
This process was manageable with dozens of monolithic applications, but as these monoliths are modernized into microservices-based applications, it isn’t practical to provision hundreds or thousands of containers in the same way. Observability, scalability, and high availability can no longer be provided by discrete appliances.
The advent of cloud-native applications and containers created a need for a service mesh to deliver vital application services, such as load balancing. The service mesh handles east-west traffic within the datacenter, while container ingress handles north-south traffic into and out of the datacenter. By contrast, placing and configuring a physical hardware load-balancing appliance at every location and server is overly challenging and expensive, especially as businesses deploy microservices to keep up with application demands and multi-cloud environments.
A solution to this problem is Kubernetes ingress, a way to deliver service-to-service communication through APIs that appliances cannot provide.
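As an illustrative sketch of what this looks like in practice, a Kubernetes Ingress resource can route external (north-south) traffic to different services by host and path. The hostname and service names below are hypothetical placeholders:

```yaml
# Hypothetical Ingress routing north-south traffic to two microservices.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress          # name is illustrative
spec:
  rules:
  - host: shop.example.com       # hypothetical hostname
    http:
      paths:
      - path: /cart
        pathType: Prefix
        backend:
          service:
            name: cart-svc       # hypothetical microservice
            port:
              number: 80
      - path: /catalog
        pathType: Prefix
        backend:
          service:
            name: catalog-svc    # hypothetical microservice
            port:
              number: 80
```

An ingress controller watches resources like this one and configures the underlying load balancing automatically, replacing per-appliance manual provisioning.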
Microservices Architecture Definition
“Microservices architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API,” according to authors Martin Fowler and James Lewis in their article Microservices. “These services are built around business capabilities and independently deployable by fully automated deployment machinery. There is a bare minimum of centralized management of these services, which may be written in different programming languages and use different data storage technologies.”
Microservices Best Practices
Designing microservice architectures is challenging because of the vast number of possible, custom application delivery solutions they can create, which can lead to complex issues if services are not designed, deployed and managed correctly. Companies transforming from a monolithic architecture to a more modern microservices architecture need to be cautious not to rush the design of the microservices deployment, which can end up doing more harm than good. Costs can quickly pile up if you are dealing with unexpected bugs, slow system updates, insufficient development teams or reliance on hybrid hardware and software components.
Here are some high-level microservices best practices:
- Fully Commit to Microservices: Trying to turn a nicely designed monolithic architecture with tightly coupled modules into microservices will likely cost more money, especially if you have to break down an application to retrofit the design. Start with a purpose-built deployment architecture from the ground up.
- Assemble an Integrated Team: Designing effectively requires architects, developers, domain experts and business leaders to collaborate on defining the Bounded Context, Core Domain, Ubiquitous Language, Subdomains and Context Maps. Developers and architects then break down the Core Domain into autonomous services: Entity, Value Object, Aggregate, Aggregate Root.
- Dedicated Databases: Although shared databases provide some pragmatic advantages, for more sustainable, scalable and long-term software development, every microservice should have a dedicated database (private tables).
- Deployment Automation: When deciding how to deploy it is important to implement a “build and release” automation structure to reduce lead time and make releases quicker.
- Phase the Migration to Microservices: Monolithic architectures often involve a complex weave of repositories, deployment, monitoring and other complex tasks, so break the migration into phases to avoid errors and gaps.
- Bake in the Splitting System: One of the key best practices is to inspect the current monolithic structure to understand the components causing problems and transform this part into a microservice. Define the interactions and processes between different pieces as you split them into microservices until you have enough pieces in place to make the final switchover.
- Observability: Utilize solutions that simplify the observability issues inherent in continuously monitoring so many individual services and that pull logs and application performance metrics into a centralized hub.
- Service Mesh and Container Ingress: Using a service mesh for microservices deployments enables efficient handling of service discovery, traffic management, security authentication, and authorization for container-based applications, no matter the size or geographic distribution of servers. Combining it with a container ingress that provides north-south traffic management, including local and global server load balancing (GSLB), a web application firewall (WAF) and performance monitoring across multi-cluster, multi-region, and multi-cloud environments, unlocks advanced container and microservices orchestration.
Deployment of Microservices
In a modern microservices architecture, the deployment of microservices plays an important role in the effectiveness and reliability of an application infrastructure. The following guidelines should be considered in the deployment strategy:
- Ability to spin up/down independently of other microservices.
- Scalability at each microservices level.
- Failure in one microservice must not affect any of the other services.
Docker is a standard way to deploy microservices using the following steps:
- Package the microservice as a container image.
- Deploy each service instance as a container.
- Scale by changing the number of container instances.
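The packaging step above might look like the following for a hypothetical Python service; the file names and image tag are illustrative, not part of any standard:

```dockerfile
# Step 1: package the microservice as a container image (hypothetical Python service).
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "service.py"]
```

Steps 2 and 3 then reduce to shell commands such as `docker build -t inventory-svc .` and `docker run -d inventory-svc`, with scaling done by launching more container instances (for example, `docker compose up --scale inventory-svc=3` when the service is defined in a Compose file).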
Kubernetes provides the software to build and deploy reliable and scalable distributed systems. Large-scale deployments rely on Kubernetes to manage a cluster of containers as a single system. It also lets enterprises run containers across multiple hosts while providing service discovery and replication control. Red Hat OpenShift is a commercial offering based on Kubernetes for enterprises.
Running a handful of microservices presents different challenges than building hundreds of microservices and running thousands of instances. Instances should be able to increase when users increase and decrease when users decline. Open source software helps with building microservices that run intelligently, gives a clear view of the service instances that are running or going down, and manages complexity.
This article from opensource.com looks at some of the key terminologies in the microservices ecosystem and some of the open source software to build out a microservices architecture.
Container Ingress Traffic Management
The key functionality of container ingress is traffic management: routing traffic from external sources into the cluster through an ingress gateway, or out of the cluster through an egress gateway. This is called north-south traffic management.
Container ingress traffic management capabilities include:
- Ingress gateway with integrated IPAM/DNS, deny list/accept list and rate limiting
- L4-7 load balancing with SSL/TLS offload
- Automated service discovery and application map
The next logical step in the application lifecycle is to secure the application, especially in the case of thousands of microservices. Connectivity is dynamic, and each service-to-service communication needs to be encrypted, authenticated and authorized.
Microservices security capabilities include:
- Zero trust security model and encryption
- Distributed WAF for application security
- SSO for enterprise-grade authentication and authorization
Observability in microservices is important because most enterprises replace monolithic applications incrementally. As microservices are introduced, there can be many different applications that need to communicate with each other and interact with the monolithic applications that remain. Microservices observability is key for understanding the complicated architecture and root-causing problems when failures happen. It allows for health checks with a broad view of application interactions.
Microservices observability capabilities include:
- Real-time application and container performance monitoring with tracing
- Big data driven connection log analytics
- Machine learning-based insights and app health analytics
Microservices integration becomes a much simpler process when a platform provides integration solutions or transformation and routing capabilities. Because microservices support continuous integration and continuous delivery, it is easier to put new ideas to the test. And if something doesn’t work out, there is no need to panic, because you can easily roll back without trouble. Failed experiments come at a low cost, which enables speedy time-to-market for new features and an uncomplicated process for updating code.
Challenges with Microservices
While microservices come with many benefits, there are also some pitfalls. Here is a breakdown of the challenges with microservices you may encounter.
When designing microservices, there are some struggles in determining:
- The size of each microservice
- Optimal boundaries and connection points between each microservice
- Proper framework for better integration of services
It’s important for each microservice to have a specific responsibility/function and to be created within a bounded context. To model a domain with the most logic, developers use a data-centric view.
Because microservices are usually deployed across multi-cloud environments, there is a larger risk of loss of control and visibility, which results in various vulnerable points. Another concern within a microservices-based framework is data security: maintaining the confidentiality, integrity, and privacy of user data becomes tricky because the framework is distributed, which substantially increases the attack surface, and setting up access controls can be a technical challenge.
In any software development lifecycle (SDLC), testing can be a complicated process, since each individual service needs to be tested independently. To make it more complex, development teams also have to factor integrated services and their interdependencies into test plans.
In order for microservices to communicate with each other and act as miniature standalone applications, they have to be properly configured. The configuration involves infrastructure layers that enable resource sharing across services.
If the configuration is handled poorly, it may lead to:
- Increased latency
- Reduced speed of calls across different services
As a result, the non-optimized application has a slower response time.
Each microservice’s team must decide which technology to use and how to manage it. If the team is not prepared, maintaining operations becomes difficult, as each service needs to be deployed and operated independently.
Some challenges are as follows:
- A microservices-based application may not respond to traditional forms of monitoring.
- Another operational challenge with microservices architecture is its scalability.
- Coordination is more complex when optimizing and scaling.
- Every service needs fault tolerance.
Event Driven Architecture: Microservices
A common feature of modern applications built with microservices is event-driven architecture, which uses events to trigger and communicate between decoupled services. An event represents a change in state and can either carry the state itself or act as an identifier.
There are three main components of an event-driven architecture: event producers, event routers, and event consumers. A producer publishes an event to the router, which filters and pushes it to consumers. Decoupling producer services from consumer services allows them to be scaled, updated, and deployed independently.
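The producer/router/consumer flow can be sketched in a few lines of Python. The in-memory `EventRouter` class and the event name are hypothetical stand-ins for a real event broker or message queue:

```python
# Minimal sketch of the three event-driven components; names are illustrative.

class EventRouter:
    """Filters events from producers and pushes them to subscribed consumers."""
    def __init__(self):
        self._subscribers = {}   # event type -> list of consumer callbacks

    def subscribe(self, event_type, consumer):
        self._subscribers.setdefault(event_type, []).append(consumer)

    def publish(self, event_type, payload):
        # Only consumers registered for this event type receive the payload,
        # so the producer never needs to know who is listening.
        for consumer in self._subscribers.get(event_type, []):
            consumer(payload)

received = []
router = EventRouter()
router.subscribe("order.created", lambda event: received.append(event))  # consumer
router.publish("order.created", {"order_id": 42})                        # producer
```

Because producer and consumers interact only through the router, either side can be scaled or redeployed without changing the other.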
Microservices Architecture Patterns
Along with Monolithic Architecture being an alternative to the microservices architecture, there are additional microservices architecture patterns you may encounter when applying the microservices architecture. The patterns are broken down into 1) decomposition patterns, 2) integration patterns, 3) database patterns, 4) observability patterns, and 5) cross-cutting concern patterns.
Decomposition Patterns
Decomposition is split into decomposing by business capability and by subdomain. When decomposing by business capability, you are also applying the single responsibility principle: services are defined to correspond to business capabilities, a concept from business architecture modeling. This is the approach a business should take in order to generate value.
At some point, you will come across certain “God Classes” which can’t be easily decomposed. In this case, Domain-Driven Design (DDD) subdomains must be defined, which refers to the application’s problem space as the domain. Subdomains are also narrowed down to three categories: 1) Core, 2) Supporting, and 3) Generic.
Integration Patterns
Applications exchange data or invoke behavior by connecting through a common messaging system, which uses messages of a pre-defined format. Messaging styles maximize decoupling not only from the interface perspective but also from a time-based perspective. To handle high-volume spikes, the messaging system queues messages so a consumer can read each one once it is ready.
Database Patterns
Using this pattern means choosing the most appropriate data store for your business and application requirements (e.g., a relational or non-relational database). Changes made to one microservice’s individual database do not impact other microservices, and persistent data is accessed through APIs only. The data layer is not shared between microservices, and one service’s data store cannot be accessed directly by other microservices.
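A minimal Python sketch of this rule, with hypothetical services: each service owns a private store, and other services reach that data only through its public API, never the store itself:

```python
# Sketch of the database-per-service pattern; service names are illustrative.

class InventoryService:
    def __init__(self):
        self._db = {"widget": 5}     # private table; stands in for a real database

    def get_stock(self, sku: str) -> int:
        """Public API: the only sanctioned path to this service's data."""
        return self._db.get(sku, 0)

class OrderService:
    """Never touches the inventory database directly; calls the API instead."""
    def __init__(self, inventory: InventoryService):
        self._inventory = inventory

    def can_fulfil(self, sku: str, qty: int) -> bool:
        return self._inventory.get_stock(sku) >= qty

inventory = InventoryService()
orders = OrderService(inventory)
```

Because `OrderService` depends only on the API, `InventoryService` could swap its dictionary for a relational or non-relational database without any change to its consumers.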
Observability Patterns
This pattern focuses on making sure development teams have access to the data they need to detect failures and identify problems. It is also about having the right data exposure when communications fail or do not occur as expected. There has to be consistent monitoring, managing and controlling of how the services interact with each other at runtime.
Cross-Cutting Concern Patterns
When using this pattern, the main concerns revolve around health checks, service registration and discovery, externalized configuration, circuit breakers, and logging. Depending on which technologies the microservices are using, there are particular cross-cutting concerns specific to them.