Hello, how are you? It is awesome to see you again! Welcome to another article on the microservices architecture. In our previous article, we discussed the deployment patterns of microservices, and in this article, we are going to discuss logging in microservices. So first of all, let us understand why we need to take care of logging.

Nowadays, a lot of companies are migrating their applications from monolithic to microservices (we will discuss how to do this in an upcoming article). When a large application is broken down into microservices, we get loosely coupled modules that can be tested easily, which reduces the risk of changes. All these modules can be deployed independently, which further enables horizontal scaling. But all this is not as simple as it seems; there are some issues, and logging is one of them! Logging is an important aspect, and regardless of the architecture, monolithic or microservices, it has to be done. When we start breaking an application into microservices, a lot of time goes into deciding the business boundaries: what is the best way to partition the application logic? How many options are available? While we think about all this, it is just as important to think about logs at that moment. Some of you must be thinking, “we have been doing logs since time immemorial, so why do we need to worry about them now?”

The reason to worry is that tracking a transaction already has some inherent difficulties within a monolithic application, and at that point, only the logs can help you understand what is going on. The difficulty of monitoring and logging grows in proportion to the number of services your business logic is spread across. So if there is no planning regarding logging, it might become impossible to understand what the application is doing!

Now let us discuss the Logging Patterns

While dealing with microservices, one needs to understand that logs are coming from numerous different services. So there could be different ways to approach these logs. 

Approach Number 1: Logging from Individual Services

Services are meant to achieve specific objectives or perform specific functions. In microservices, each service can be considered a system in its own right, which means one could add a logging framework to the service itself! It is just like adding a logging framework to a regular application. For example, suppose we have an NGINX web server handling requests from the public Internet, a MySQL server storing customer and order data, and multiple PHP modules that process orders, validate payments with a third-party processor, and generate the HTML that gets returned to the customer.

For the PHP modules, we can choose from numerous logging frameworks such as log4php, Monolog, or the standard error_log function. The components can be broken up into services, and a logging strategy can be defined for each one. Each service, as well as its logging strategy, should be independent of every other service and of the strategies those services use.

Within each service, identifying information can be appended to every log event. For example, we can append a field to our logs that records the name of the service that generated the event; this lets us easily group events from the same service, as sketched below. The disadvantage of this approach is that each service needs to implement its own logging methods, which adds to the complexity and increases the difficulty of changing logging behavior across multiple services.
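
To make this concrete, here is a minimal sketch of what a per-service logger could look like using Monolog, one of the frameworks mentioned above. The service name, log destination, and field names are placeholder choices, and the processor callback assumes Monolog 2.x-style array records.

    <?php
    // Minimal sketch of per-service logging with Monolog (assumed
    // installed via Composer); names and destinations are placeholders.
    require 'vendor/autoload.php';

    use Monolog\Logger;
    use Monolog\Handler\StreamHandler;

    $logger = new Logger('order-service');
    $logger->pushHandler(new StreamHandler('php://stdout', Logger::INFO));

    // Append the service name to every log record so events can later
    // be grouped by the service that generated them.
    $logger->pushProcessor(function (array $record) {
        $record['extra']['service'] = 'order-service';
        return $record;
    });

    $logger->info('Order received', ['order_id' => 42]);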

Approach Number 2: Logging from a Central Service

In this approach, services send their logs to a logging service. Each service still needs a way to generate logs, but everything else, such as processing, storing, and indexing them, is managed by the centralized logging service. An example of a logging service is Loggly. There is a Loggly Docker image that creates a container running an rsyslog daemon; it listens for incoming syslog messages and forwards the events to Loggly. Multiple logging containers can also run simultaneously for load balancing and redundancy.

There is also an alternative to implementing a logging solution inside each service: the logs can be gathered from the infrastructure itself. For example, active Docker containers print log events to their output, where Docker's logging driver captures them and forwards them to whatever driver is configured. With the syslog driver configured, all the log events are forwarded directly to Loggly.
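
As a rough sketch, capturing a container's output through Docker's syslog logging driver might look like the following; the endpoint address and image name are placeholders for your own rsyslog/Loggly listener and service.

    docker run \
      --log-driver=syslog \
      --log-opt syslog-address=udp://logs.example.internal:514 \
      --log-opt tag="{{.Name}}" \
      my-php-service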

There is also a tool known as logspout that collects and forwards distributed log events. It is a Docker-based solution that runs as its own container and attaches to the other containers on the same host. The advantage of logspout is that it is easy to use: one can simply deploy it and move on to other tasks.
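
A hedged sketch of deploying logspout, following the pattern from its documentation; the syslog endpoint is a placeholder. Mounting the Docker socket is what lets logspout attach to the other containers on the host.

    docker run --name=logspout \
      --volume=/var/run/docker.sock:/var/run/docker.sock \
      gliderlabs/logspout \
      syslog+udp://logs.example.internal:514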

Now Let Us Discuss Some Logging Tips for Microservices

While you must be happy that there are different approaches to logging, remember that out of the box your logs are completely unaware of your microservice architecture. The data for errors will be available, but with no information about the service that generated the log, and this makes tracing an error extremely difficult. So it is time to discuss some logging tips!

Tip Number 1: Have an application instance identifier

Running multiple instances of the same component at the same time has proved to be quite helpful, and an instance identifier on the log entry shows the origin of each entry. How the ID is generated doesn't matter; the only things that matter are that it is unique and that it allows you to trace the entry back to the exact server or container. The service registry can help by assigning a unique identifier to each microservice instance.
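
For illustration, one way to attach an instance identifier in PHP might be the following sketch. It reuses the Monolog $logger from the earlier example and assumes the HOSTNAME environment variable (commonly set inside Docker containers) as the identifier, falling back to a random ID.

    <?php
    // Sketch: attach an instance identifier to every log entry.
    // Inside Docker, HOSTNAME is usually the container ID; otherwise
    // fall back to a random ID generated once per process.
    $instanceId = getenv('HOSTNAME') ?: bin2hex(random_bytes(4));

    $logger->pushProcessor(function (array $record) use ($instanceId) {
        $record['extra']['instance_id'] = $instanceId;
        return $record;
    });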

Tip Number 2: Don’t forget to use UTC time

This tip is universal for any distributed application, or for an application with components scattered globally. You only truly understand the importance of UTC time once some of the components start using local time in their log entries; it becomes quite annoying. In a microservice architecture, all problems related to locally-timed log entries are exponentially worse. If you need the local time, you can always keep the time zone as a field on the log entry so the information is easy to retrieve, but it's also important to have a field with the UTC time, which will be used to order messages in the aggregation tool.

For example, suppose one service is running in New Zealand and another in Brazil, and both use local dates. A message generated by the Brazilian service can appear before a message from the New Zealand service when ordered by date, even though it was actually generated later, simply because Brazil's local clock is hours behind New Zealand's. If the messages carry UTC timestamps instead, they are ordered correctly, and if someone needs to know the local time when a message was generated, they only have to convert it from UTC to the specified time zone.
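
A small sketch of this in PHP: write a UTC timestamp for ordering, and keep the local time zone as a separate field. The field names here are placeholder choices, and $logger is assumed from the earlier Monolog example.

    <?php
    // Sketch: log a UTC timestamp (used for ordering in the
    // aggregation tool) plus the local time zone as its own field.
    $utc = new DateTimeImmutable('now', new DateTimeZone('UTC'));

    $logger->info('Payment confirmed', [
        'timestamp_utc' => $utc->format(DATE_ATOM),      // e.g. 2024-05-01T12:00:00+00:00
        'local_tz'      => date_default_timezone_get(),  // e.g. America/Sao_Paulo
    ]);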

Tip Number 3: Please generate request identifiers 

When you break down the business logic into different components, you end up with logical transactions that are scattered across numerous components, and tracing these transactions can be tough without identifiers. A unique identifier is a must for every transaction. For example, an e-commerce website has numerous operations, so how are you going to group them? It depends on how you define a transaction and what function it serves. Also, make sure that at the beginning of a transaction you create one identifier that will be passed down and used in the log entries.

Also, understand that the identifier should carry enough information to differentiate every transaction from all the others. The transaction identifier can be made up of fields derived from the log entries.
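
As a sketch, generating one identifier at the start of a transaction and passing it into every related log entry could look like this; the ID format and event names are placeholders, and $logger is assumed from the earlier example.

    <?php
    // Sketch: one identifier per transaction, created at the start
    // and reused in every log entry that belongs to the transaction.
    $requestId = bin2hex(random_bytes(16)); // 32-char hex string

    $logger->info('Checkout started',   ['request_id' => $requestId]);
    $logger->info('Inventory reserved', ['request_id' => $requestId]);
    $logger->info('Payment captured',   ['request_id' => $requestId]);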

Tip Number 4: Group logs with the help of aggregation tools

One of the most important tips, which you should definitely implement, is to aggregate log entries from all microservices in a tool that allows you to group and query those entries easily. One such tool is the ELK stack, which has shown promising results. It is a combination of three applications, Elasticsearch, Logstash, and Kibana, that together provide a full solution for dispatching, storing, and indexing log entries, and for aggregating and visualizing the information. There are numerous patterns and approaches for scaling and distributing application logs using ELK.
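
One common way to make entries easy for the ELK stack to ingest is to emit them as JSON. Here is a minimal sketch using Monolog's JsonFormatter, again assuming the $logger from the earlier example.

    <?php
    // Sketch: emit JSON log lines that Logstash (or another shipper)
    // can parse without custom grok patterns.
    use Monolog\Handler\StreamHandler;
    use Monolog\Formatter\JsonFormatter;

    $handler = new StreamHandler('php://stdout');
    $handler->setFormatter(new JsonFormatter());
    $logger->pushHandler($handler);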

Tip Number 5: Distributed Tracing

Understanding the flow of events across services is quite important in microservices, and it isn't easy to achieve. A single operation can call multiple services, so to reconstruct the entire sequence of steps, each service should propagate a correlation ID, which enables distributed tracing across services. The first service that receives a client request generates the correlation ID. If the service makes an HTTP call to another service, it puts the correlation ID in the request header; in the same way, if the service sends an asynchronous message, it puts the correlation ID into the message. Downstream services keep propagating it, so the correlation ID flows throughout the entire system, and all code that writes application metrics or log events should include it.
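
A hedged sketch of this in PHP: reuse an incoming correlation ID or generate one at the edge, log it, and forward it on outbound HTTP calls. The header name X-Correlation-ID is a common convention rather than a standard, the downstream URL is a placeholder, and $logger is assumed from earlier.

    <?php
    // Sketch: propagate a correlation ID across service boundaries.
    // PHP exposes an incoming X-Correlation-ID request header as
    // $_SERVER['HTTP_X_CORRELATION_ID'].
    $correlationId = $_SERVER['HTTP_X_CORRELATION_ID'] ?? bin2hex(random_bytes(16));

    $logger->info('Calling payment service', ['correlation_id' => $correlationId]);

    // Forward the same ID on the outbound call.
    $ch = curl_init('https://payment.example.internal/charge');
    curl_setopt($ch, CURLOPT_HTTPHEADER, ['X-Correlation-ID: ' . $correlationId]);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    $response = curl_exec($ch);
    curl_close($ch);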

Conclusion

This article is about microservices logging, and we began by understanding the need for logging: a lot of companies are now migrating from monolithic applications to microservices-based applications, but logging is important for both. After discussing the need for logging, we discussed the logging patterns. In that section, we covered two different approaches: logging from individual services and logging from a central service.

Whichever approach you follow, there are a few things you also need to take care of, and these are discussed in the section titled logging tips for microservices, where we cover five different tips. In short, one needs to understand the importance of microservices logging; it has proved its worth. So do implement logging if you get a chance, you won't regret it! And if you have already implemented it, try improving it as much as you can. See you guys in the next article.

Here is the link to the previous article of this series.

Tao
Tao is a passionate software engineer who works at a leading big data analysis company in Silicon Valley. Previously, Tao worked at big IT companies such as IBM and Cisco. Tao has an MS degree in Computer Science from McGill University and many years of experience as a teaching assistant for various computer science classes.
