With Spring 6 and Spring Boot 3 released, we got some nice additions to the Spring Actuator module that make monitoring our application fully compatible with Micrometer. Micrometer is a facade API that lets us specify metrics and other data we want to observe in our application and makes sure the data is compatible with many commonly used monitoring tools. Micrometer is vendor neutral, which means you won't need any vendor-specific classes or annotations in your code to monitor your application. This comes in quite handy if you later want to switch your monitoring tools to a different vendor. Furthermore, developers only have to learn the Micrometer API to specify metrics, without having to know the API of each vendor. Micrometer supports integration with a wide range of monitoring systems, e.g. Prometheus, Netflix Atlas or Datadog.
In general, monitoring your application can be divided into three subcategories, which are all supported by the Micrometer API:
- Metrics: Monitoring of classic metrics like CPU usage, memory usage or any custom metric you want to measure over time.
- Tracing: Monitoring the path of a call through the application. This is very useful in distributed systems, where a call might be processed by several different servers.
- Logging: Gathering log statements from all servers of your system in one central place and correlating them to requests, users or any other information you might want to correlate on.
Spring Boot Demo Application
As a simple example of a Spring Boot application to observe, let's create a brand new one with the Spring Initializr. I will use the following parameters for the application:
- Spring Boot 3.1 (M2) as the base line
- Maven as the build tool
- Java 20 as the runtime
We also need two Spring dependencies for this demo, which are:
- Spring Web for a basic HTTP endpoint to monitor
- Spring Actuator for the latest Micrometer features of Spring Boot 3
With these parameters selected, press the Generate button, download the new application and unzip it into a folder of your choice.
Next, we need a basic HTTP endpoint that we can later monitor. We will use a simple controller that returns a greeting message when you hit it with a GET request. Here is the source code of the controller:
import org.slf4j.*;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class GreetingController {

    private static final Logger log = LoggerFactory.getLogger(GreetingController.class);

    @GetMapping("/greeting")
    public Greeting getGreeting(@RequestParam String name) {
        log.info("Request received.");
        return new Greeting(String.format("Hello %s!", name));
    }

    record Greeting(String text) {}
}

After the start of the application, you can hit the endpoint under http://localhost:8080/greeting and it will return the greeting in JSON format.
Collecting Metrics with Prometheus
In order to start collecting application metrics, you first need a tool that gathers and stores the metrics of all your applications in a central place. The most commonly used tool for that is Prometheus, which I will also use for this demo. The idea behind Prometheus is that each application exposes an HTTP endpoint that returns the current values of the monitored metrics. Prometheus calls this endpoint every couple of seconds and stores the result as a snapshot of the metrics. From these snapshots, you get a time series of the metrics that you can then visualize as a graph or table. Let's set up a local Prometheus instance to see this in action.
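To make the idea of querying these snapshots concrete, here is a small PromQL sketch. It uses http_server_requests_seconds_count, one of the counters Spring Boot exposes out of the box; the uri label value assumes the greeting endpoint from this demo:

```promql
# Per-second request rate of the /greeting endpoint, averaged over 5 minutes
rate(http_server_requests_seconds_count{uri="/greeting"}[5m])
```

Queries like this turn the raw counter snapshots into rates and trends, which is usually what you want to graph.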
First, download Prometheus from the project download page. The installation is quite easy. Simply unzip the download and run the prometheus.exe (for Windows) in the root directory of the installation to start the server. The Prometheus admin page will then be available under http://localhost:9090/.
To activate a Prometheus endpoint in our application, we need to change two settings in our project. The first one is to add a Prometheus-specific dependency to our Maven pom file, to tell Micrometer which format to use for our metrics endpoint. This is done by the following dependency:
<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-registry-prometheus</artifactId>
    <scope>runtime</scope>
</dependency>

Next, we need to activate the metrics endpoint in our application.properties file by adding the following property:
management.endpoints.web.exposure.include=prometheus

This activates the metrics endpoint and specifies that it should return the data in a format readable by Prometheus.
If we restart our Spring Boot application with these settings added, we can hit the Prometheus endpoint as a subresource of the actuator endpoint:
http://localhost:8080/actuator/prometheus

This endpoint returns a lot of information about our application:
jvm_gc_memory_promoted_bytes_total 1974272.0
jvm_compilation_time_ms_total{compiler="HotSpot 64-Bit Tiered Compilers",} 11599.0
executor_completed_tasks_total{name="applicationTaskExecutor",} 0.0
executor_pool_size_threads{name="applicationTaskExecutor",} 0.0
jvm_buffer_memory_used_bytes{id="mapped - 'non-volatile memory'",} 0.0
jvm_buffer_memory_used_bytes{id="mapped",} 0.0
jvm_buffer_memory_used_bytes{id="direct",} 24576.0
process_cpu_usage 0.0
jvm_threads_started_threads_total 25.0
jvm_memory_committed_bytes{area="nonheap",id="CodeHeap 'profiled nmethods'",} 9830400.0
jvm_memory_committed_bytes{area="heap",id="G1 Survivor Space",} 2097152.0
jvm_memory_committed_bytes{area="heap",id="G1 Old Gen",} 2.7262976E7
jvm_memory_committed_bytes{area="nonheap",id="Metaspace",} 3.6438016E7
jvm_memory_committed_bytes{area="nonheap",id="CodeHeap 'non-nmethods'",} 2555904.0
jvm_memory_committed_bytes{area="heap",id="G1 Eden Space",} 1.6777216E7
jvm_memory_committed_bytes{area="nonheap",id="Compressed Class Space",} 4784128.0
jvm_memory_committed_bytes{area="nonheap",id="CodeHeap 'non-profiled nmethods'",} 2621440.0
disk_free_bytes{path="C:\\spring-6\\spring-boot-3-micrometer-observations\\.",} 5.50906056704E11
jvm_memory_max_bytes{area="nonheap",id="CodeHeap 'profiled nmethods'",} 1.2288E8
jvm_memory_max_bytes{area="heap",id="G1 Survivor Space",} -1.0
# ... many more metrics omitted

Out of the box, Spring Boot 3 supports almost a hundred metrics covering memory usage, CPU usage, garbage collection and disk space, to name a few. This should cover most of your needs without writing a single line of code.
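Should you need a custom metric anyway, the Micrometer API makes it a one-liner. Here is a minimal, self-contained sketch; it assumes micrometer-core on the classpath, and the metric name greetings.served is made up for this demo. In the real application you would inject Spring's auto-configured MeterRegistry bean instead of creating one, and the counter would then show up on the /actuator/prometheus endpoint automatically:

```java
import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;

public class CustomMetricSketch {
    public static void main(String[] args) {
        // Standalone registry for the sketch; in Spring Boot, inject the bean.
        MeterRegistry registry = new SimpleMeterRegistry();

        // A counter for how often the greeting endpoint was called.
        Counter greetings = Counter.builder("greetings.served")
                .description("Number of greetings returned")
                .tag("endpoint", "/greeting")
                .register(registry);

        greetings.increment();
        greetings.increment();
        System.out.println((long) greetings.count()); // prints 2
    }
}
```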
Next, we need to tell our local Prometheus installation where to look for our new metrics endpoint. This is done by editing the prometheus.yml file in the root of the installation directory, because by default Prometheus only monitors itself. To start monitoring our local Spring Boot application, change the contents of the file to the following:
global:
  scrape_interval: 15s
  evaluation_interval: 15s
alerting:
  alertmanagers:
    - static_configs:
        - targets:
rule_files:
scrape_configs:
  - job_name: 'spring boot 3 micrometer demo'
    metrics_path: '/actuator/prometheus'
    scrape_interval: 1s
    static_configs:
      - targets: ['localhost:8080']

The interesting part starts at line (9), where we connect Prometheus to our local application. We define the path to our metrics endpoint in line (11) and the root URL of the application in line (14).
After restarting our Prometheus instance with the new settings, we can see our application being monitored in the Prometheus web interface (http://localhost:9090/targets):

To see the metrics from our Spring Boot endpoint, click the Graph menu item, enter any of the metrics listed previously from our metrics endpoint in the search bar and execute the query. In the Graph tab, we can now see the time series of the metric. Here is an example for the metric jvm_memory_used_bytes{area="heap",id="G1 Survivor Space"}:

That's all you need to do to connect a Spring Boot 3 application to Prometheus. Let's look at enabling tracing for incoming requests of our app next.
Enabling Tracing with Zipkin Brave
The next step is to activate the tracing of requests that enter our application. By enabling tracing for all your applications that depend on each other, you can easily see which applications were involved in processing a request, how long this took and where exceptions occurred if you are looking to fix a bug.
The required request correlation is done by Spring Boot automatically; you only need to activate it in your application by adding a dependency. We are going to use Zipkin to trace and visualize requests and a Spring Boot dependency called Zipkin Brave to activate tracing in our application.
Zipkin itself is also written in Java and the installation is quite easy. Simply download the latest release from the Zipkin homepage as an executable jar file.
To start Zipkin, simply use the following java command. I used Zipkin version 2.24.0, but the version might be different when you download it.
java -jar zipkin-server-2.24.0-exec.jar

If you run this command from a shell, you will see Zipkin starting up and listening on port 9411:

Once started, Zipkin is running locally under port 9411:
http://localhost:9411

Next, we need to connect our Spring Boot application to our local Zipkin installation. This is done by two Maven dependencies that make sure Zipkin can read the tracing data provided by a Micrometer endpoint. As mentioned previously, we will use Zipkin Brave for this; you can find the project page here on GitHub. The dependencies to activate Zipkin Brave with Micrometer data are:
<!-- For Micrometer tracing support with Zipkin Brave -->
<dependency>
    <groupId>io.micrometer</groupId>
    <artifactId>micrometer-tracing-bridge-brave</artifactId>
</dependency>
<!-- To send Zipkin data to a remote Zipkin instance -->
<dependency>
    <groupId>io.zipkin.reporter2</groupId>
    <artifactId>zipkin-reporter-brave</artifactId>
</dependency>

By default, Zipkin Brave will send tracing data to your localhost on port 9411, which works for this demo with our local Zipkin instance running. If you want to change this to something other than your localhost, you can use the following Spring Boot property in your application.properties to override it:
management.zipkin.tracing.endpoint=<url of your server>:9411/api/v2/spans

Let's look at Zipkin in action by sending a couple of requests to our Spring Boot application. Our greeting endpoint has the following URL:
http://localhost:8080/greeting?name=Bert

After hitting this endpoint a couple of times, let's navigate to the Zipkin frontend and press the search button without any criteria specified. This will show all requests that were recorded:

We can see how long each of the requests took to process, which application processed it and where the tracing data came from. If we click on the Show button, we get even more information about the request, like the IP address of the server, the HTTP method and the status code of the outcome, to name a few.

If our request had been processed by multiple servers and all of them were connected to Zipkin, you could also see how long the processing took on each server.
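A note in case some requests do not show up in Zipkin: by default, Spring Boot samples only 10% of requests for tracing. For a demo where every request should be traced, you can raise the sampling probability in application.properties:

```properties
management.tracing.sampling.probability=1.0
```

In production, a lower sampling rate is usually preferable to keep the tracing overhead and storage volume down.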
Another neat feature is that you can search for the trace id of a request using the search bar at the top. To see the trace id in the logging messages, we now need to change the format of the logging messages to include the metadata we want to see in the logs. This is done by adding the following line to the application.properties file:
logging.pattern.level=%5p [${spring.application.name:},%X{traceId:-},%X{spanId:-}]

If we hit the endpoint again, we will see the trace id and span id in the logs:

The logging statements now include the application name, trace id and span id with every request. If we now copy the first trace id in the screenshot and search for it in Zipkin, we will get the trace of this exact request:

This is pretty handy if you have an exception message in the logs and want to find out where the request went or came from.
Centralized Logging with Loki and Grafana
The final feature I want to have a look at is centralized logging for our Spring Boot application. The idea of centralized logging is that the log output of all your applications is sent to a central database, which gives you a complete picture of the logs for a single request that was processed by several servers. To identify corresponding log entries, metadata is sent along with every log output. The most important metadata are the trace id and the server name. The trace id identifies logging entries on different servers that belong to the same unit of work, e.g. a request being processed. By the server name, you can see on which machines the request was processed.
We will need to install two tools to get log correlation and visualization set up locally. The first tool is Grafana Loki, which will store our logging messages and the corresponding metadata for us. The second one is Grafana, which is used to visualize and search the logging data. Grafana Loki has no graphical frontend, which is why we need a separate visualization tool. Grafana can actually visualize a lot more than logging data, but that would be a topic for another post.
Download a copy of Grafana Loki from here. You will also need a custom configuration file to run Loki locally. You can download the custom file from here. Next, put both files into the same directory and start your local Loki instance with the custom configuration file. To do this on Windows, use the following command:
.\loki-windows-amd64.exe --config.file=loki-local-config.yaml

If you're using Linux, use this command:
./loki-linux-amd64 -config.file=loki-local-config.yaml

To check if Loki is running properly, you can hit Loki's own metrics endpoint under this URL, which returns a lot of application metrics:
http://localhost:3100/metrics

Next, we need to start a local Grafana instance to look at the logs that Loki is collecting for us. You can download Grafana from the official download site. I downloaded the zipped binaries for Windows and unzipped the download to a folder. To start Grafana, simply use the grafana-server.exe in the bin directory of your installation. You can check if your local installation runs properly under http://localhost:3000. The default user/password for a fresh installation is admin/admin and you will be asked to change it after logging in for the first time.
Now it’s time to modify our Spring Boot application to send all log messages to Grafana Loki. Again, we need to add a dependency to our pom file to connect to Grafana Loki:
<dependency>
    <groupId>com.github.loki4j</groupId>
    <artifactId>loki-logback-appender</artifactId>
    <version>1.4.0</version>
</dependency>

This dependency provides a special Logback appender, which will send all log data to the console but also to the running local Loki instance. To use this appender, we need a custom logback-spring.xml configuration file, which specifies the appender and the URL the log data should be sent to. Copy the following file into your src/main/resources folder to configure Logback to use Loki for logging:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <include resource="org/springframework/boot/logging/logback/base.xml" />
    <springProperty scope="context" name="appName" source="spring.application.name"/>

    <appender name="LOKI" class="com.github.loki4j.logback.Loki4jAppender">
        <http>
            <url>http://localhost:3100/loki/api/v1/push</url>
        </http>
        <format>
            <label>
                <pattern>app=${appName},host=${HOSTNAME},traceID=%X{traceId:-NONE},level=%level</pattern>
            </label>
            <message>
                <pattern>${FILE_LOG_PATTERN}</pattern>
            </message>
            <sortByTime>true</sortByTime>
        </format>
    </appender>

    <root level="INFO">
        <appender-ref ref="LOKI"/>
    </root>
</configuration>

The appender is specified in line (6) and is included in the dependency we added previously. The URL to our local Loki instance is configured in line (8). In line (12) we specify the metadata that will be sent to Loki to categorize the log messages. In this case, we will use the application name, host name, trace id and log level to filter on later. Finally, we add the new appender to the root logger for messages of level INFO and above in lines (21)-(23).
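One detail that is easy to miss: the appName label is resolved from the spring.application.name property, which also feeds the logging pattern we configured earlier. Make sure it is set in your application.properties, otherwise the label will be empty; the name below is just an example:

```properties
spring.application.name=micrometer-demo
```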
After we hit our greeting endpoint a couple more times, we can now search for the logs in Grafana.
First, navigate to http://localhost:3000/ to access the Grafana web frontend. The username/password are both admin for a fresh installation.
Next, press the Explore button in the main menu on the left:

In the explore view, we open the Label Browser to select the logging data we want to see:

The label browser now lets us select which labels we want to filter on. As mentioned before, Loki receives some metadata with every log statement, and these labels are included in that metadata. Loki automatically groups our logging statements by the labels our Spring Boot application defines. By default, we will have the host name, application name, log level and again the trace id to filter on.

Let’s select only the application name to see all logs that our greeting service generated:

This results in an overview of the selected logs and a graph that shows when the logging statements came in. If the application ran on several instances, e.g. in a Kubernetes cluster, you would see the logs of all instances in this list. Here you can easily filter by the trace id again, which will also help you search in Zipkin for a particular request you are looking for.
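Instead of clicking through the label browser, you can also type the filter directly as a LogQL query. A sketch, assuming the app label from our Logback pattern and an application name of micrometer-demo:

```logql
{app="micrometer-demo"} |= "Request received"
```

The first part selects the log stream by label; the |= operator then filters for lines containing the given text.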
Conclusion
That's all I wanted to showcase about the new Micrometer integration in Spring Boot 3. We used only the built-in Micrometer functionality to gather metrics of our application, traced requests that entered our system and aggregated logs in a central logging instance. All this could be done without writing a single line of Java code. Spring Boot 3 applications are instrumented so well with the Micrometer API that most metrics and functionality work out of the box.
To use different tools to monitor your application, simply add a corresponding dependency to your pom. Since all data is gathered through the well-defined Micrometer API, the third-party dependencies can wrap the output of our application into any format required. I showcased this with Prometheus to gather metrics, Zipkin and Zipkin Brave for request tracing, and Loki and Grafana for centralized logging.
You can find the source code for the Spring Boot application in this GitHub repo. If you liked this post, just leave a comment or star my repo on GitHub.