A microservices architecture can be very beneficial when developing software. Its main strengths are scalability, application resilience, ease of deployment and more efficient system maintenance.

In this article, we will better understand how a microservice and its layers are organized by defining its ecosystem and explaining how each part relates to the others.

The ecosystem of the microservices architecture

Building, standardizing and maintaining infrastructure in a stable, scalable, fault-tolerant and reliable way is essential to operating microservices successfully.

There are many distinct models that suggest how a microservice ecosystem should be organized. An interesting and well-structured model is described by Susan J. Fowler in her book “Production-Ready Microservices”.

According to Fowler, the microservice ecosystem can be divided into four “layers”: hardware, communication, application platform and microservices.

Even though all layers are interconnected, establishing this separation helps us better understand the different responsibilities and modules that compose a microservice architecture ecosystem.

Hardware

Hardware comprises the computers and machinery where microservices are stored and executed. These servers may be owned by the organization or belong to cloud infrastructure providers, such as Amazon Web Services, Google Cloud Platform or Microsoft Azure.

There are many solutions that aim to reduce the challenge of managing this infrastructure, such as containerization and clustering technologies used to package and run microservices.

It is important that servers are provided with monitoring and logging mechanisms to identify issues such as disk, network or processing failures. In a highly diverse and dynamic ecosystem, it is critical to have tooling that exposes machines’ health data and keeps a history of this information. This way, any failure can be quickly detected and traced. Additionally, a well-structured monitoring mechanism makes it possible to gauge the impact that evolving applications have on the infrastructure.
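
To make this concrete, here is a minimal sketch in Go of a health endpoint a server could expose so that monitoring tooling can poll and record its state. The /healthz path and the fields reported are illustrative assumptions, not tied to any specific monitoring product.

```go
// Minimal sketch: a server exposes its health data over HTTP so that
// external monitoring can poll it and keep a history of this information.
package main

import (
	"encoding/json"
	"net/http"
	"runtime"
	"time"
)

type healthReport struct {
	Status     string    `json:"status"`
	Goroutines int       `json:"goroutines"`
	CheckedAt  time.Time `json:"checked_at"`
}

func main() {
	http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		report := healthReport{
			Status:     "ok",
			Goroutines: runtime.NumGoroutine(),
			CheckedAt:  time.Now().UTC(),
		}
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(report)
	})
	http.ListenAndServe(":8080", nil)
}
```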

Communication

Communication covers everything that allows microservices to interact with each other and includes the following key elements: API Endpoints, Service Discovery, Service Registry and Load Balancers.

API Endpoints

Given a microservice’s API, its endpoints are the pipes through which all communication happens. It is on the endpoints that the microservice sends and receives data from other microservices and applications.
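
As a simple sketch in Go, the service below exposes a single endpoint that receives and returns JSON. The /orders path and the payload shape are hypothetical examples, not a prescribed API.

```go
// Minimal sketch: a microservice exposing one API endpoint through which
// it receives data from callers and sends a response back.
package main

import (
	"encoding/json"
	"net/http"
)

type order struct {
	ID     string  `json:"id"`
	Amount float64 `json:"amount"`
}

func main() {
	http.HandleFunc("/orders", func(w http.ResponseWriter, r *http.Request) {
		var o order
		if err := json.NewDecoder(r.Body).Decode(&o); err != nil {
			http.Error(w, "invalid payload", http.StatusBadRequest)
			return
		}
		// Business logic would go here; this sketch simply echoes the order back.
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(o)
	})
	http.ListenAndServe(":8080", nil)
}
```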

Service Discovery and Service Registry

In a microservice ecosystem running in the cloud, the network configuration changes dynamically to accommodate scaling, updates and fault recovery. In this scenario, we need to keep track of each service’s most up-to-date location. Microservices register themselves in the Service Registry when a new instance is created, and are removed when their execution ends or in case of failure.
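
The sketch below shows, in Go, how an instance might announce itself to a registry on startup and remove itself on shutdown. The registry URL and its PUT/DELETE API are hypothetical; real ecosystems typically rely on dedicated tools such as Consul, etcd or ZooKeeper.

```go
// Sketch: a service instance registers its location when it starts and
// deregisters when it stops, keeping the Service Registry up to date.
package main

import (
	"net/http"
	"os"
	"os/signal"
	"strings"
)

// Hypothetical registry endpoint for this instance.
const registryURL = "http://registry.internal/services/payments/instance-1"

func register() {
	body := strings.NewReader(`{"address":"10.0.0.12","port":8080}`)
	req, err := http.NewRequest(http.MethodPut, registryURL, body)
	if err != nil {
		return
	}
	if resp, err := http.DefaultClient.Do(req); err == nil {
		resp.Body.Close()
	}
}

func deregister() {
	req, err := http.NewRequest(http.MethodDelete, registryURL, nil)
	if err != nil {
		return
	}
	if resp, err := http.DefaultClient.Do(req); err == nil {
		resp.Body.Close()
	}
}

func main() {
	register()
	defer deregister()

	// Wait for an interrupt, then deregister before exiting.
	stop := make(chan os.Signal, 1)
	signal.Notify(stop, os.Interrupt)
	<-stop
}
```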

When a service needs to communicate with another, it uses the Service Discovery mechanism to learn that service’s most recent location. Keeping this information accurate is critical and deserves careful attention when configuring a microservice ecosystem.
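
On the consuming side, discovery can be as simple as asking the registry for the current instances of a service before calling it, as in the Go sketch below. Again, the registry endpoint and response shape are assumptions made for illustration.

```go
// Sketch: before calling another service, a client queries the registry
// (Service Discovery) to learn that service's most recent locations.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

type instance struct {
	Address string `json:"address"`
	Port    int    `json:"port"`
}

func discover(service string) ([]instance, error) {
	// Hypothetical registry lookup endpoint.
	resp, err := http.Get("http://registry.internal/services/" + service)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	var instances []instance
	if err := json.NewDecoder(resp.Body).Decode(&instances); err != nil {
		return nil, err
	}
	return instances, nil
}

func main() {
	instances, err := discover("payments")
	if err != nil {
		fmt.Println("discovery failed:", err)
		return
	}
	for _, inst := range instances {
		fmt.Printf("payments available at %s:%d\n", inst.Address, inst.Port)
	}
}
```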

Load Balancers

Load Balancers are responsible for routing incoming requests to the appropriate running microservice instances, ensuring no server is overloaded and capacity is maximized. If an instance of a microservice fails, the Load Balancer stops routing requests to it. Conversely, it will route requests to new instances added to the cluster.
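
The Go sketch below illustrates the core idea with a round-robin balancer that spreads requests across instances and drops an instance once it is reported as failed. The instance addresses are illustrative; in practice this role is usually played by a dedicated load balancer or service mesh.

```go
// Sketch: round-robin load balancing over a set of instances, with failed
// instances removed so no further requests are routed to them.
package main

import (
	"fmt"
	"sync"
)

type loadBalancer struct {
	mu        sync.Mutex
	instances []string
	next      int
}

// pick returns the next instance in round-robin order.
func (lb *loadBalancer) pick() (string, bool) {
	lb.mu.Lock()
	defer lb.mu.Unlock()
	if len(lb.instances) == 0 {
		return "", false
	}
	inst := lb.instances[lb.next%len(lb.instances)]
	lb.next++
	return inst, true
}

// remove drops a failed instance from the rotation.
func (lb *loadBalancer) remove(addr string) {
	lb.mu.Lock()
	defer lb.mu.Unlock()
	for i, inst := range lb.instances {
		if inst == addr {
			lb.instances = append(lb.instances[:i], lb.instances[i+1:]...)
			return
		}
	}
}

func main() {
	lb := &loadBalancer{instances: []string{"10.0.0.1:8080", "10.0.0.2:8080"}}
	for i := 0; i < 4; i++ {
		if addr, ok := lb.pick(); ok {
			fmt.Println("routing request to", addr)
		}
	}
	lb.remove("10.0.0.2:8080") // e.g. a health check reported failure
	addr, _ := lb.pick()
	fmt.Println("after removal, routing to", addr)
}
```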

Application Platform

The Application Platform is the third layer and refers to all the tooling that is independent of any individual microservice. This tooling should be built and arranged in such a way that the development team does not need to worry about anything beyond the application code.

Standards in the development process, such as using Git repositories and mirroring the production environment, are efficient ways to keep the codebase organized and to simulate issues the application may face when going live.

Centralized and automated builds with continuous integration and continuous deployment are essential. Depending on the size of the application and the number of microservices, dozens of deploys may happen daily. Thus, it is important that tools are correctly configured and capable of running tests automatically, adding new dependencies as needed and preparing releases as new versions are built.

Lastly, a centralized logging and monitoring mechanism at the microservice level is key to understanding problems that may occur in a running service. Given that microservices are constantly changing, logs give visibility into what is happening inside the system at any given moment, while monitoring is used to check service status and health in real time.
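
As a small Go sketch, the middleware below emits one structured JSON log line per request, which a centralized logging system can then collect and index. The service name and logged fields are assumptions for illustration; it uses the standard log/slog package available since Go 1.21.

```go
// Sketch: per-request structured logging inside a microservice, producing
// JSON lines that can be shipped to a centralized logging system.
package main

import (
	"log/slog"
	"net/http"
	"os"
	"time"
)

func withLogging(logger *slog.Logger, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		start := time.Now()
		next.ServeHTTP(w, r)
		logger.Info("request handled",
			"method", r.Method,
			"path", r.URL.Path,
			"duration_ms", time.Since(start).Milliseconds(),
		)
	})
}

func main() {
	logger := slog.New(slog.NewJSONHandler(os.Stdout, nil)).With("service", "payments")
	mux := http.NewServeMux()
	mux.HandleFunc("/ping", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("pong"))
	})
	http.ListenAndServe(":8080", withLogging(logger, mux))
}
```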

Microservices

Microservices house all the business logic for the task at hand. A service should be completely abstracted from all the layers described above, with no specific knowledge of hardware, service registry and discovery, load balancing or deployment. The microservice itself contains only the code and configuration needed to deliver the features it was designed to perform.
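
The Go sketch below illustrates this separation: the service contains only business logic, and everything environment-specific reaches it through configuration rather than being hard-coded. The environment variable and endpoint names are illustrative.

```go
// Sketch: a microservice that knows only its own code and configuration.
// Infrastructure concerns (where it runs, how it is discovered or balanced)
// live in the lower layers, not here.
package main

import (
	"net/http"
	"os"
)

func main() {
	port := os.Getenv("PORT") // injected by the platform, e.g. "8080"
	if port == "" {
		port = "8080"
	}

	http.HandleFunc("/greeting", func(w http.ResponseWriter, r *http.Request) {
		// Pure business logic: no registry, load balancer or deployment details.
		w.Write([]byte("hello from the greeting service"))
	})
	http.ListenAndServe(":"+port, nil)
}
```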

We have covered a high-level description of a microservice architecture ecosystem model. In order to be fully applied in real-life projects, the topics above need to be further understood, always considering the ever-changing nature of this type of architecture.

Related Articles:

Microservices: Concepts and Characteristics
Microservices: Properties and Architecture