Microservices is an architectural style that structures an application as a collection of loosely coupled services, which implement business capabilities. Applications built as a set of modular components are easier to understand, easier to test, and most importantly easier to maintain over the life of the application.
It enables organizations to achieve much higher agility and to vastly shorten the time it takes to get working improvements into production. The microservice architecture enables the continuous delivery/deployment of large, complex applications. It also enables an organization to evolve its technology stack.
Each component is continuously developed and separately maintained, and the application is then simply the sum of its constituent components. This is in contrast to a traditional, “monolithic” application which is developed all in one piece.
In order to actually run an application based on microservices, you need to be able to monitor, manage, and scale the different constituent parts. There are a number of different tools that might allow you to accomplish this. For containers, open-source tools like Kubernetes will probably be a part of your solution.
With this context, now let us look at some of the key microservices design patterns.
Quick Snapshot
Define services corresponding to business capabilities. A business capability is a concept from business architecture modeling. It is something that a business does in order to generate value. A business capability often corresponds to a business object; for example, Order Management is responsible for orders and Customer Management is responsible for customers.
Business capabilities are often organized into a multi-level hierarchy. For example, an enterprise application might have top-level categories such as Product/Service development, Product/Service delivery, Demand generation, etc.
Define services corresponding to Domain-Driven Design (DDD) subdomains. DDD refers to the application’s problem space – the business – as the domain. A domain consists of multiple subdomains, and each subdomain corresponds to a different part of the business. For example, the subdomains of an online store include the product catalog, inventory management, order management, and delivery.
Each service is deployed as a set of service instances for throughput and availability. This pattern is about running multiple instances of different services on a host (physical or virtual machine).
There are various ways of deploying a service instance on a shared host, including running each service instance as a separate process (e.g. one JVM process per instance) or running multiple service instances in the same process (e.g. as OSGI bundles within a single JVM).
This pattern deploys each service in its own environment. Typically, this environment will be a virtual machine or container, although there are times when the host may be defined at a less abstract level.
This kind of deployment provides a high degree of flexibility, with little potential for conflict over system resources. Services are either entirely isolated from those used by other clients (as is the case with single-service-per-VM deployment) or can be effectively isolated while sharing some lower-level system resources (i.e., containers with appropriate security features).
Deployment overhead may be greater than in the single host/multiple services model, but in practice, this may not represent significant cost in time or resources.
This pattern is about using deployment infrastructure that hides any concept of servers, i.e. reserved or preallocated resources such as physical or virtual hosts or containers. The infrastructure takes your service’s code and runs it. You are charged for each request based on the resources consumed.
Some of the prominent serverless deployment environments include AWS Lambda, Google Cloud Functions, and Azure Functions.
The deployment infrastructure is a utility operated by a public cloud provider. It typically uses either containers or virtual machines to isolate the services. However, these details are hidden from you. Neither you nor anyone else in your organization is responsible for managing any low-level infrastructure such as operating systems, virtual machines, etc.
This pattern uses automated infrastructure for application deployment. It provides a service abstraction: a named set of highly available (e.g. load-balanced) service instances. Examples include container orchestration frameworks such as Kubernetes and Docker Swarm, and PaaS offerings such as Cloud Foundry.
When you start the development of an application you often spend a significant amount of time putting in place the mechanisms to handle cross-cutting concerns. Examples of cross-cutting concerns include externalized configuration, logging, health checks, metrics, and distributed tracing.
This pattern advises you to build your microservices using a microservice chassis framework, which handles all of the cross-cutting concerns.
Examples of microservice chassis frameworks include Spring Boot combined with Spring Cloud, Dropwizard, and Micronaut.
Take a free course on Building Scalable Java Microservices with Spring Boot and Spring Cloud
An application typically uses one or more infrastructure and 3rd party services. Examples of infrastructure services include a Service registry, a message broker, and a database server. Examples of 3rd party services include payment processing, email and messaging, etc.
This pattern advises you to externalize all application configuration including the database credentials and network location. On startup, a service reads the configuration from an external source, e.g. OS environment variables, etc.
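As a minimal, framework-free sketch of this pattern, a service can read its configuration from environment variables at startup and fail fast when a value is missing. The variable names used here (ORDER_DB_URL, ORDER_DB_USER, etc.) are illustrative assumptions, not anything mandated by the pattern:

```java
// Minimal sketch: read configuration from OS environment variables at startup.
// The variable names (ORDER_DB_URL, ORDER_DB_USER, ...) are illustrative assumptions.
public final class ServiceConfig {

    private final String databaseUrl;
    private final String databaseUser;
    private final String databasePassword;
    private final String messageBrokerUrl;

    private ServiceConfig(String databaseUrl, String databaseUser,
                          String databasePassword, String messageBrokerUrl) {
        this.databaseUrl = databaseUrl;
        this.databaseUser = databaseUser;
        this.databasePassword = databasePassword;
        this.messageBrokerUrl = messageBrokerUrl;
    }

    /** Loads configuration from the environment, failing fast if a value is missing. */
    public static ServiceConfig fromEnvironment() {
        return new ServiceConfig(
                require("ORDER_DB_URL"),
                require("ORDER_DB_USER"),
                require("ORDER_DB_PASSWORD"),
                require("MESSAGE_BROKER_URL"));
    }

    private static String require(String name) {
        String value = System.getenv(name);
        if (value == null || value.isBlank()) {
            throw new IllegalStateException("Missing required environment variable: " + name);
        }
        return value;
    }

    public String databaseUrl() { return databaseUrl; }
    public String databaseUser() { return databaseUser; }
    public String databasePassword() { return databasePassword; }
    public String messageBrokerUrl() { return messageBrokerUrl; }
}
```

Frameworks such as Spring Boot provide the same effect declaratively, but the principle is identical: no credentials or network locations baked into the build artifact.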
Suppose you are building an online store that uses the Microservice architecture pattern and you are implementing the product details page. You need to develop multiple versions of the product details user interface, one for each kind of client: HTML5/JavaScript-based UIs for desktop and mobile browsers, and native Android and iPhone applications.
In addition, the online store must expose product details via a REST API for use by 3rd party applications.
Implement an API gateway that is the single entry point for all clients. The API gateway handles requests in one of two ways. Some requests are simply proxied/routed to the appropriate service. It handles other requests by fanning out to multiple services. Rather than provide a one-size-fits-all style API, the API gateway can expose a different API for each client. For example, the Netflix API gateway runs client-specific adapter code that provides each client with an API that’s best suited to its requirements.
The API gateway might also implement security, e.g. verifying that the client is authorized to perform the request.
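The sketch below illustrates only the routing half of the pattern, using the JDK’s built-in HTTP server and client. The route table and backend service URLs (product-service, order-service) are assumptions for illustration; a real gateway would also propagate headers, handle errors and timeouts, authenticate requests, and fan out to multiple services:

```java
// Minimal API gateway sketch using the JDK's built-in HTTP server and client.
// Routes and backend URLs are illustrative assumptions, not a production gateway.
import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Map;

public class ApiGateway {

    // Path prefix -> backend service base URL (assumed locations).
    private static final Map<String, String> ROUTES = Map.of(
            "/products", "http://product-service:8081",
            "/orders",   "http://order-service:8082");

    private static final HttpClient CLIENT = HttpClient.newHttpClient();

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/", exchange -> {
            String path = exchange.getRequestURI().getPath();
            String backend = ROUTES.entrySet().stream()
                    .filter(e -> path.startsWith(e.getKey()))
                    .map(Map.Entry::getValue)
                    .findFirst().orElse(null);
            if (backend == null) {
                exchange.sendResponseHeaders(404, -1); // no route for this path
                exchange.close();
                return;
            }
            try {
                // Proxy the GET request to the matching backend service.
                HttpRequest request = HttpRequest.newBuilder(URI.create(backend + path)).GET().build();
                HttpResponse<byte[]> response =
                        CLIENT.send(request, HttpResponse.BodyHandlers.ofByteArray());
                byte[] body = response.body();
                exchange.sendResponseHeaders(response.statusCode(),
                        body.length == 0 ? -1 : body.length);
                if (body.length > 0) {
                    exchange.getResponseBody().write(body);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                exchange.sendResponseHeaders(502, -1); // backend call interrupted
            } finally {
                exchange.close();
            }
        });
        server.start();
    }
}
```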
Services typically need to call one another. A modern microservice-based application runs in a virtualized or containerized environment in which the number of instances of a service and their locations change dynamically.
When making a request to a service, the client obtains the location of a service instance by querying a Service Registry, which knows the locations of all service instances.
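Here is a minimal sketch of client-side discovery. The registry is reduced to a simple lookup interface (the in-memory stub in main stands in for a real registry query, an assumption made for illustration), and the client load-balances across the returned instances round-robin:

```java
// Client-side discovery sketch: the caller looks up instances in a registry
// and load-balances across them. The in-memory registry is an illustrative stub.
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class ClientSideDiscovery {

    /** Minimal view of a registry: all live instances of a named service. */
    interface ServiceRegistry {
        List<String> lookup(String serviceName); // e.g. ["http://10.0.0.5:8081", ...]
    }

    /** Picks an instance round-robin from whatever the registry currently returns. */
    static final class RoundRobinClient {
        private final ServiceRegistry registry;
        private final AtomicInteger counter = new AtomicInteger();

        RoundRobinClient(ServiceRegistry registry) {
            this.registry = registry;
        }

        String chooseInstance(String serviceName) {
            List<String> instances = registry.lookup(serviceName);
            if (instances.isEmpty()) {
                throw new IllegalStateException("No live instances of " + serviceName);
            }
            int index = Math.floorMod(counter.getAndIncrement(), instances.size());
            return instances.get(index);
        }
    }

    public static void main(String[] args) {
        // Hard-coded registry contents stand in for a real registry query.
        ServiceRegistry registry = serviceName ->
                List.of("http://10.0.0.5:8081", "http://10.0.0.6:8081");
        RoundRobinClient client = new RoundRobinClient(registry);
        System.out.println("Sending request to " + client.chooseInstance("product-service"));
    }
}
```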
Again, services need to call one another in an environment where the number of instances of a service and their locations change dynamically.
When making a request to a service, the client makes a request via a router (a.k.a load balancer) that runs at a well-known location. The router queries a service registry, which might be built into the router, and forwards the request to an available service instance.
Clients of a service use either Client-side discovery or Server-side discovery to determine the location of a service instance to which to send requests.
Implement a service registry, which is a database of services, their instances, and their locations. Service instances are registered with the service registry on startup and deregistered on shutdown. Clients of the service and/or routers query the service registry to find the available instances of a service. A service registry might invoke a service instance’s health check API to verify that it is able to handle requests.
Service instances must be registered with the service registry on startup, so that they can be discovered, and unregistered on shutdown.
A service instance is responsible for registering itself with the service registry. On startup, the service instance registers itself (host and IP address) with the service registry and makes itself available for discovery. The instance must typically renew its registration periodically so that the registry knows it is still alive. On shutdown, the service instance unregisters itself from the service registry.
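A sketch of self-registration is shown below. It assumes a generic registry interface (register/renew/deregister) rather than any specific product’s client library: the instance registers at startup, sends periodic heartbeats, and deregisters from a shutdown hook:

```java
// Self-registration sketch: register on startup, renew on a schedule,
// deregister on shutdown. The registry API is an assumed interface.
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class SelfRegisteringInstance {

    interface ServiceRegistry {
        void register(String serviceName, String instanceUrl);
        void renew(String serviceName, String instanceUrl);
        void deregister(String serviceName, String instanceUrl);
    }

    public static void main(String[] args) {
        String serviceName = "order-service";
        String instanceUrl = "http://10.0.0.7:8082"; // this instance's host and port

        // Stub registry that just logs; a real one would make a remote call.
        ServiceRegistry registry = new ServiceRegistry() {
            public void register(String s, String u)   { System.out.println("register " + s + " " + u); }
            public void renew(String s, String u)      { System.out.println("renew " + s + " " + u); }
            public void deregister(String s, String u) { System.out.println("deregister " + s + " " + u); }
        };

        registry.register(serviceName, instanceUrl);

        // Heartbeat so the registry knows this instance is still alive.
        ScheduledExecutorService heartbeat = Executors.newSingleThreadScheduledExecutor();
        heartbeat.scheduleAtFixedRate(
                () -> registry.renew(serviceName, instanceUrl), 30, 30, TimeUnit.SECONDS);

        // Deregister cleanly when the process shuts down.
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            heartbeat.shutdownNow();
            registry.deregister(serviceName, instanceUrl);
        }));
    }
}
```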
Services sometimes collaborate when handling requests. When one service synchronously invokes another there is always the possibility that the other service is unavailable or is exhibiting such high latency it is essentially unusable.
Precious resources such as threads might be consumed in the caller while waiting for the other service to respond. This might lead to resource exhaustion, which would make the calling service unable to handle other requests. The failure of one service can potentially cascade to other services throughout the application.
A service client should invoke a remote service via a proxy that functions in a similar fashion to an electrical circuit breaker. When the number of consecutive failures crosses a threshold, the circuit breaker trips, and for the duration of a timeout period all attempts to invoke the remote service will fail immediately.
After the timeout expires the circuit breaker allows a limited number of test requests to pass through. If those requests succeed the circuit breaker resumes normal operation. Otherwise, if there is a failure the timeout period begins again.
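A hand-rolled sketch of that behaviour is shown below. The thresholds are illustrative; in practice a library such as Resilience4j, or the proxy built into a service mesh, would provide the production implementation:

```java
// Hand-rolled circuit breaker sketch: trip after N consecutive failures,
// fail fast while open, allow a trial call after the timeout expires.
import java.util.function.Supplier;

public class CircuitBreaker {

    private enum State { CLOSED, OPEN, HALF_OPEN }

    private final int failureThreshold;
    private final long openTimeoutMillis;

    private State state = State.CLOSED;
    private int consecutiveFailures = 0;
    private long openedAt = 0;

    public CircuitBreaker(int failureThreshold, long openTimeoutMillis) {
        this.failureThreshold = failureThreshold;
        this.openTimeoutMillis = openTimeoutMillis;
    }

    public synchronized <T> T call(Supplier<T> remoteCall) {
        if (state == State.OPEN) {
            if (System.currentTimeMillis() - openedAt >= openTimeoutMillis) {
                state = State.HALF_OPEN;          // timeout expired: allow a trial request
            } else {
                throw new IllegalStateException("Circuit open: failing fast");
            }
        }
        try {
            T result = remoteCall.get();
            consecutiveFailures = 0;              // success closes the circuit again
            state = State.CLOSED;
            return result;
        } catch (RuntimeException e) {
            consecutiveFailures++;
            if (state == State.HALF_OPEN || consecutiveFailures >= failureThreshold) {
                state = State.OPEN;               // trip (or re-trip) the breaker
                openedAt = System.currentTimeMillis();
            }
            throw e;
        }
    }
}
```

A caller would wrap each remote invocation, e.g. `new CircuitBreaker(5, 30_000).call(() -> productClient.getProduct(productId))`, where `productClient` and its method are hypothetical names used only for illustration.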
Most services need to persist data in some kind of database. Keep each microservice’s persistent data private to that service and accessible only via its API.
The service’s database is effectively part of the implementation of that service. It cannot be accessed directly by other services.
There are a few different ways to keep a service’s persistent data private; you do not need to provision a database server for each service. For example, if you are using a relational database, the options are a private set of tables per service, a schema per service, or a database server per service.
If you have applied the Database per service pattern, it is no longer straightforward to implement queries that join data from multiple services. Split the application into two parts: the command side and the query side. The command side handles create, update, and delete requests and emits events when data changes. The query side handles queries by executing them against one or more materialized views that are kept up to date by subscribing to the stream of events emitted when data changes.
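The sketch below shows the shape of this split. The in-memory event dispatch stands in for a real message broker, and the per-customer order totals map stands in for a real materialized view; both are assumptions made purely for illustration:

```java
// CQRS sketch: the command side emits an event whenever an order is created,
// and the query side keeps a denormalised view up to date from those events.
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Consumer;

public class CqrsSketch {

    record OrderCreated(String orderId, String customerId, double total) {}

    /** Command side: handles writes and publishes events describing the change. */
    static final class OrderCommandService {
        private final List<Consumer<OrderCreated>> subscribers = new ArrayList<>();

        void subscribe(Consumer<OrderCreated> subscriber) { subscribers.add(subscriber); }

        void createOrder(String orderId, String customerId, double total) {
            // ... persist the order in the command-side database (omitted) ...
            OrderCreated event = new OrderCreated(orderId, customerId, total);
            subscribers.forEach(s -> s.accept(event)); // stand-in for publishing to a broker
        }
    }

    /** Query side: maintains a materialized view optimised for reads. */
    static final class CustomerOrderTotalsView {
        private final Map<String, Double> totalsByCustomer = new ConcurrentHashMap<>();

        void apply(OrderCreated event) {
            totalsByCustomer.merge(event.customerId(), event.total(), Double::sum);
        }

        double totalFor(String customerId) {
            return totalsByCustomer.getOrDefault(customerId, 0.0);
        }
    }

    public static void main(String[] args) {
        OrderCommandService commands = new OrderCommandService();
        CustomerOrderTotalsView view = new CustomerOrderTotalsView();
        commands.subscribe(view::apply);

        commands.createOrder("o-1", "c-42", 30.0);
        commands.createOrder("o-2", "c-42", 12.5);
        System.out.println(view.totalFor("c-42")); // 42.5
    }
}
```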
The application consists of numerous services. The API gateway is the single entry point for client requests. It authenticates requests and forwards them to other services, which might, in turn, invoke other services. Services often need to verify that a user is authorized to perform an operation. The API gateway authenticates the request and passes an access token (e.g. a JSON Web Token) that securely identifies the requestor in each request to the services. A service can include the access token in requests it makes to other services.
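A minimal sketch of passing the token along: the incoming Authorization header (e.g. a JWT issued at the gateway) is copied onto the outbound request. The downstream customer-service URL is an assumption used only for illustration:

```java
// Sketch of propagating the caller's access token to a downstream service.
// The customer-service URL is an illustrative assumption.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class TokenPropagation {

    private static final HttpClient CLIENT = HttpClient.newHttpClient();

    /** Calls the (assumed) customer service on behalf of the current request. */
    static String fetchCustomer(String customerId, String incomingAuthorizationHeader)
            throws Exception {
        HttpRequest request = HttpRequest.newBuilder(
                        URI.create("http://customer-service:8083/customers/" + customerId))
                .header("Authorization", incomingAuthorizationHeader) // pass the token along
                .GET()
                .build();
        return CLIENT.send(request, HttpResponse.BodyHandlers.ofString()).body();
    }

    public static void main(String[] args) throws Exception {
        // In a real service this header comes from the inbound HTTP request;
        // running this requires the assumed customer-service to exist.
        String authorization = "Bearer <access-token>";
        System.out.println(fetchCustomer("c-42", authorization));
    }
}
```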
The application consists of multiple services and service instances that are running on multiple machines. Requests often span multiple service instances.
Each service instance writes information about what it is doing to a log file in a standardized format. The log file contains errors, warnings, information, and debug messages. Use a centralized logging service that aggregates logs from each service instance. Users can then search and analyze the logs and configure alerts that are triggered when certain messages appear in them.
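One common way to make logs easy to aggregate is to emit one JSON object per line with consistent fields, which a log shipper can forward to the central service. The field names in this sketch are assumptions, not a required schema:

```java
// Sketch of the "standardized format" half of this pattern: single-line JSON
// records that a log shipper can forward to a central aggregation service.
import java.time.Instant;

public class StructuredLogger {

    private final String serviceName;
    private final String instanceId;

    public StructuredLogger(String serviceName, String instanceId) {
        this.serviceName = serviceName;
        this.instanceId = instanceId;
    }

    public void log(String level, String message) {
        // One JSON object per line keeps the output easy for aggregators to parse.
        System.out.printf(
                "{\"time\":\"%s\",\"service\":\"%s\",\"instance\":\"%s\",\"level\":\"%s\",\"message\":\"%s\"}%n",
                Instant.now(), serviceName, instanceId, level, message.replace("\"", "\\\""));
    }

    public static void main(String[] args) {
        StructuredLogger log = new StructuredLogger("order-service", "order-service-1");
        log.log("INFO", "Order o-1 created");
        log.log("ERROR", "Payment service returned 503");
    }
}
```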
Sometimes a service instance can be incapable of handling requests yet still be running. For example, it might have run out of database connections. When this occurs, the monitoring system should generate an alert. Also, the load balancer or service registry should not route requests to the failed service instance.
A service has a health check API endpoint (e.g. HTTP /health) that returns the health of the service. The endpoint handler performs various checks, such as the status of the connections to the infrastructure services used by the service instance (e.g. its database), the status of the host (e.g. available disk space), and application-specific logic.
A health check client – a monitoring service, service registry, or load balancer – periodically invokes the endpoint to check the health of the service instance.
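A minimal /health endpoint sketch using the JDK HTTP server is shown below. The database check is reduced to a BooleanSupplier stub; a real implementation might, for example, run a trivial query against the service’s own database:

```java
// Minimal /health endpoint sketch. Returns 200 when healthy, 503 otherwise,
// so registries and load balancers can stop routing to an unhealthy instance.
import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import java.util.function.BooleanSupplier;

public class HealthCheckEndpoint {

    public static void main(String[] args) throws Exception {
        BooleanSupplier databaseIsReachable = () -> true; // stand-in for a real connection check

        HttpServer server = HttpServer.create(new InetSocketAddress(8082), 0);
        server.createContext("/health", exchange -> {
            boolean healthy = databaseIsReachable.getAsBoolean();
            byte[] body = (healthy ? "{\"status\":\"UP\"}" : "{\"status\":\"DOWN\"}")
                    .getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(healthy ? 200 : 503, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        server.start();
    }
}
```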
Requests often span multiple services. Each service handles a request by performing one or more operations, e.g. querying the database, publishing messages, etc.
Instrument services with code that assigns each external request a unique ID, passes that ID to all services involved in handling the request, includes the ID in log messages, and records information (such as start time and end time) about the requests and operations performed.
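Here is a small sketch of the ID-propagation part of that instrumentation. The X-Trace-Id header name is an assumption, and real deployments typically rely on tracing tooling such as Jaeger or Zipkin rather than hand-rolled code:

```java
// Sketch of trace-ID propagation: reuse the ID from the incoming request (or
// create one at the edge), include it in logs, and pass it to downstream calls.
import java.util.UUID;

public class TracingSketch {

    static String ensureTraceId(String incomingTraceId) {
        // Assign a unique ID at the edge; reuse it for every hop after that.
        return (incomingTraceId == null || incomingTraceId.isBlank())
                ? UUID.randomUUID().toString()
                : incomingTraceId;
    }

    static void log(String traceId, String message) {
        // Every log line carries the trace ID so an aggregator can correlate hops.
        System.out.printf("[trace=%s] %s%n", traceId, message);
    }

    public static void main(String[] args) {
        String traceId = ensureTraceId(null); // no incoming header: this is the edge service
        log(traceId, "Handling GET /orders/o-1");
        log(traceId, "Calling product-service with X-Trace-Id=" + traceId);
    }
}
```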
Kubernetes Tutorial: Distributed tracing with Jaeger
The application consists of multiple services and service instances that are running on multiple machines. Errors sometimes occur when handling requests. When an error occurs, a service instance throws an exception, which contains an error message and a stack trace.
Report all exceptions to a centralized exception tracking service that aggregates and tracks exceptions and notifies developers.