
Container-based deployments are rapidly gaining popularity in the enterprise. One of the more popular container solutions is Docker.

Many view containers as virtual machines. They’re not. Well, kind of not. A container is a virtual walled environment for your application. It’s literally a ‘container’ inside the host OS. Thus your application works like it is in its own self-contained environment, but it’s actually sharing the operating system resources of the host computer. Because of this, containers are more resource efficient than full blown virtual machines. You get more bang for your buck running a bare metal machine with a bunch of containers than you do running a bare metal machine with a bunch of VMs. This is why massive cloud computing companies running tens of thousands of servers are running containers. Google, Facebook, Netflix, and Amazon are all big advocates of containers.

Introducing Docker Containers

To help you visualize the difference, here are a couple of images provided by Docker. Here is the bloated architecture of a traditional virtual machine environment. A popular solution you can try is Oracle’s VirtualBox, which allows you to run a variety of operating systems on your personal machine. I personally use VMWare Fusion to run Windows on my MBP (and I still feel a little dirty every time I do). If you’ve never used either of these, I recommend you give them a try.

In this graphic, note how each stack has its own Guest OS.

Traditional Virtual Machine Architecture

Now for comparison, here is the same stack containerized by Docker. Here you can see how each application does not get its own operating system. This is key to why Docker containers are so efficient. You’re not providing a virtual layer to mimic the hardware for the guest OS to use. And you’re not running n+1 guest operating systems either.

The Same Stack Containerized by Docker

Clearly this is more efficient computing. I’ve seen estimates in the 10-25% range of improved performance. But as with everything else when it comes to computing performance, your mileage may vary. I’d expect lightweight Linux VMs to be closer to the 10% side of the scale, and Windows VMs closer to the 25% end, just because the Windows OS is so bloated in comparison.

This leads me to an important distinction about Docker: it is Linux only. Yes, you can “run” Docker on Windows and OSX, but at this time you can only do so by running Docker inside a Linux VM in VirtualBox.

Running Spring Boot in a Docker Container

Introduction

When I first heard about running Spring Boot in a Docker container, I personally thought, “now why would you want to run a JVM in a VM, on a VM?” At first glance, it just seemed like an absolutely terrible idea from a performance standpoint. I doubt any of these solutions will ever match the performance of a JVM running on a bare metal installation of Linux. But, as I’ve shown above, running a Spring Boot application in a Docker container should have minimal performance impact. Certainly less of an impact than running in a VM, which is exactly what you’re doing when you run applications in any cloud provider (see image one above).

Installing Docker

I’m not going to get into installing Docker on your OS. There is ample documentation about installing Docker on the internet. Going forward, I’m going to assume you have Docker installed. Since Docker is Linux based, my focus will be on Linux (RHEL / CentOS).

Spring Boot Example Application

For the purposes of this tutorial, let’s start with a simple Spring Boot Application. I’m going to use the completed application from my Mastering Thymeleaf course. This is a simple Spring Boot web application which is perfect for our needs.

If you want to follow this tutorial along step by step, head over to GitHub and check out this Spring Boot project. Be sure to change to the branch “spring-boot-docker-start”.

Building a Spring Boot Docker Image

For us to run Spring Boot in a Docker container, we need to define a Docker image for it. Building Docker images is done through the use of Dockerfiles. A Dockerfile is basically a manifest of commands we will use to build and configure our Docker container. To configure our Docker image to run our Spring Boot application, we will want to:

  • Start with the latest CentOS image from Docker Hub.
  • Install and configure Oracle Java.
  • Install the Spring Boot artifact – our executable JAR file.
  • Run the Spring Boot application.

I’m using CentOS for its compatibility with RHEL, which is probably the most popular Linux distribution used by enterprises. And Oracle’s Java, mainly for the same reason.

Create Our Dockerfile

In our Maven project, we need to create our Dockerfile. In /src/main/docker, create the file Dockerfile.

NOTE: As a Java developer you may be tempted to create the file as DockerFile. Don’t do this. The Maven Plugin we cover later won’t see your file if it’s CamelCase. I learned this lesson the hard way.

CentOS

We’ll start our Docker image off by using the CentOS image from Docker Hub.

Dockerfile
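
Something along these lines is all we need to start (a minimal sketch, pulling the latest CentOS image from Docker Hub):

```dockerfile
FROM centos
```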

Installing Oracle Java

The following lines in our Dockerfile will install wget into the image using the yum package installer, download the Oracle Java JDK from Oracle using wget, then configure Java on the machine.

Dockerfile
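
A sketch of this section follows. The JDK version and download URL are placeholders, since Oracle’s download links change over time; adjust them to the current JDK release.

```dockerfile
# Install wget with the yum package manager
RUN yum install -y wget

# Download the Oracle JDK RPM, accepting the Oracle license via a cookie header
# (the version and URL below are placeholders)
RUN wget --no-cookies --no-check-certificate \
    --header "Cookie: oraclelicense=accept-securebackup-cookie" \
    "http://download.oracle.com/otn-pub/java/jdk/8u45-b14/jdk-8u45-linux-x64.rpm" \
    -O /tmp/jdk-8-linux-x64.rpm

# Install the JDK and remove the installer
RUN yum localinstall -y /tmp/jdk-8-linux-x64.rpm && rm /tmp/jdk-8-linux-x64.rpm

ENV JAVA_HOME /usr/java/latest
```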

Installing the Spring Boot Executable Jar

In this section of the Dockerfile, we are:

  • Adding a /tmp volume. Docker will map this to /var/lib/docker on the host system. This is the directory Spring Boot will configure Tomcat to use as its working directory.
  • The ADD command adds the Spring Boot executable JAR into our Docker image.
  • The RUN command ‘touches’ the JAR to give it a modified date.
  • The ENTRYPOINT is what will run the JAR file when the container is started.

I learned about these configuration settings from a post from the Pivotal team here.

Dockerfile
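
Here is a sketch of that section. The JAR file name is an assumption based on this project’s Maven build, and the maven/ prefix is where the Fabric8 plugin we use later places the artifact in the Docker build context.

```dockerfile
# Tomcat's working directory; Docker maps this to /var/lib/docker on the host
VOLUME /tmp

# Add the Spring Boot executable JAR (artifact name is an assumption)
ADD maven/spring-boot-web-0.0.1-SNAPSHOT.jar myapp.jar

# 'touch' the JAR so it has a modified date
RUN sh -c 'touch /myapp.jar'

# Run the JAR when the container starts; urandom speeds up Tomcat's startup entropy
ENTRYPOINT ["java", "-Djava.security.egd=file:/dev/./urandom", "-jar", "/myapp.jar"]
```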

Complete Dockerfile

Here is the complete Dockerfile.

Dockerfile
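
Assembled from the pieces above, the complete file looks like this (with the same placeholder URL, version, and artifact name noted in each section):

```dockerfile
FROM centos

RUN yum install -y wget

RUN wget --no-cookies --no-check-certificate \
    --header "Cookie: oraclelicense=accept-securebackup-cookie" \
    "http://download.oracle.com/otn-pub/java/jdk/8u45-b14/jdk-8u45-linux-x64.rpm" \
    -O /tmp/jdk-8-linux-x64.rpm

RUN yum localinstall -y /tmp/jdk-8-linux-x64.rpm && rm /tmp/jdk-8-linux-x64.rpm

ENV JAVA_HOME /usr/java/latest

VOLUME /tmp

ADD maven/spring-boot-web-0.0.1-SNAPSHOT.jar myapp.jar

RUN sh -c 'touch /myapp.jar'

ENTRYPOINT ["java", "-Djava.security.egd=file:/dev/./urandom", "-jar", "/myapp.jar"]
```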

Building the Docker Image Using Maven

Naturally, we could build our Docker image using Docker itself. But this is not a typical use case for Spring developers. A typical use case for us would be to use Jenkins to generate the Docker image as part of a CI build. For this use case, we can use Maven to package the Spring Boot executable JAR, then have that build artifact copied into the Docker image.

There are actually several competing Maven plugins for Docker support. The guys at Spotify have a nice Maven / Docker plugin. In this example, I’m going to show you how to use the Fabric8 Docker plugin for Maven.

Fabric8

Of the Maven plugins for Docker, at the time of writing, Fabric8 seems to be the most robust. For this post, I’m only interested in building a Docker Image for our Spring Boot artifact. This is just scratching the surface of the capabilities of the Fabric8 Maven plugin. This plugin can be used to spool up Docker Images to use for your integration tests for CI builds. How cool is that!?!? But let’s learn to walk before we run!

Here is a typical configuration for the Fabric8 Maven plugin for Docker.

Fabric8 Maven Docker Plugin Configuration
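
A sketch of the plugin configuration is below. The image name and plugin version are assumptions; check Maven Central for the current release. The assembly descriptor copies the build artifact into a maven/ directory in the Docker build context, which is why our Dockerfile ADDs the JAR from maven/.

```xml
<plugin>
    <groupId>io.fabric8</groupId>
    <artifactId>docker-maven-plugin</artifactId>
    <version>0.15.9</version>
    <configuration>
        <images>
            <image>
                <!-- image name is an assumption; use your own repository/name -->
                <name>springframeworkguru/masteringthymeleaf</name>
                <build>
                    <dockerFileDir>${project.basedir}/src/main/docker/</dockerFileDir>
                    <!-- copies the project artifact into the build context under maven/ -->
                    <assembly>
                        <descriptorRef>artifact</descriptorRef>
                    </assembly>
                </build>
            </image>
        </images>
    </configuration>
</plugin>
```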

If you’re following along with the tutorial, the complete Maven POM is now:

pom.xml

Building the Docker Image

To build the Docker image with our Spring Boot artifact, run this command:
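
Assuming the Fabric8 plugin configuration shown earlier, docker:build is the plugin’s image build goal:

```
mvn clean package docker:build
```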

The ‘clean’ tells Maven to delete the target directory. While this step is technically optional, if you don’t use it, sooner or later you’re going to get bit in the ass by some weird issue. Maven will always compile your classes with the package command. If you’ve done some refactoring and changed class names or packages, without the ‘clean’ the old class files are left on the disk. And in the words of IBM – “Unpredictable results may occur”.

It is very important to run the package command with the docker:build command. You’ll encounter errors if you try to run these in two separate steps.

While the Docker image is building, you will see the following output in the console:

Docker images are built in layers. The CentOS image from Docker Hub is our first layer. Each command in our Dockerfile is another ‘layer’. Docker works by ‘caching’ these layers locally. I think of it as being somewhat like your local Maven repository under ~/.m2, where Maven will bring down Java artifacts once and then cache them for future use.

The first time you build this Docker image, it will take longer since all the layers are being downloaded / built. The next time we build it, the only layers that change are the one that adds the new Spring Boot artifact and the commands after it. The layers before the Spring Boot artifact aren’t changing, so the cached versions will be used in the Docker build.

Running the Spring Boot Docker Image

Docker Run Command

So far, we have not said anything about port mapping. This is actually done at run time. When we start the Docker container, in the run command, we will tell Docker how to map the ports. In our example, we want to map port 8080 of the host machine to port 8080 of the container. This is done with the ‘-p’ parameter, followed by <host port>:<container port>. We also want to use the ‘-d’ parameter. This tells Docker to start the container in the background.

Here is the complete command to run our Docker container:
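
Using the image name from the plugin configuration above (an assumption; substitute your own):

```
docker run -d -p 8080:8080 springframeworkguru/masteringthymeleaf
```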

This command will start the Docker container and echo the id of the started container.

Congratulations, your Spring Boot application is up and running!

You should now be able to access the application on port 8080 of your machine.

Working with Running Docker Containers

Viewing Running Docker Containers

To see all the containers running on your machine, use the following command:
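
The standard Docker CLI command for this is:

```
docker ps
```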

View Log Output

Our running Docker containers are far from little black boxes. There’s a lot we can do with them. One common thing we want to do is see the log output. Easy enough. Use this command:
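
Replace the container id below with the id echoed by the run command (docker ps will show it too):

```
docker logs <container id>
```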

Access a Running Docker Container

Need to ssh into a Docker container? Okay, technically this really isn’t SSH, but this command will give you a bash shell:
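
Again using the container id from docker ps:

```
docker exec -it <container id> bash
```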

Stopping the Docker Container

Shutting down our Docker container is easy. Just run this command:
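
Passing the container id as before:

```
docker stop <container id>
```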

Ending Source Code

Just in case you’ve run into trouble, as always, I have a branch in GitHub with the complete working example. You can get the ending source code for this tutorial here on GitHub.

Conclusion

The default executable JAR artifact of Spring Boot is ideal for deploying Spring Boot applications in Docker. As I’ve shown here, launching a Spring Boot application in a Docker container is easy to do.

In terms of technology, Docker is still fairly young. At the time of writing, Docker is only about three years old. Yet it is rapidly catching on. While Docker is widely used by the web giants, it is just starting to trickle down to Fortune 500 enterprises. At the time of writing, Docker is not available natively on OSX or Windows. Yet. Microsoft has committed to releasing a native version of Docker for Windows, which is interesting. There are a lot of things going on around Docker at Red Hat and Pivotal too.

Docker is a fundamental paradigm shift in the way we do things as Spring developers. I assure you, if you’re developing applications in the enterprise using the Spring Framework and have not used Docker, it’s not a matter of if, but when.

As a developer, Docker brings up some very cool opportunities. Need a Mongo database to work against? No problem, spool up a local Docker container. Need a virtual environment for your Jenkins CI builds? No problem.

I personally have only been working with Docker a short time. I am honestly excited about it. My thoughts on Docker – Now we’re cooking with gas!

Processing JSON with Jackson

It’s not uncommon for computers to need to communicate with each other. In the early days, this was done with simple string messages, which was problematic because there was no standard language. XML evolved to address this. XML provides a very structured way of sharing data between systems. XML is so structured that many find it too restrictive. JSON is a popular alternative to XML. JSON offers a lighter and more forgiving syntax than XML.

JSON is a text-based data interchange format that is lightweight, language independent, and easy for humans to read and write. In the current enterprise, JSON is used for enterprise messaging, communicating with RESTful web services, and AJAX-based communications. JSON is also extensively used by NoSQL databases such as MongoDB, Oracle NoSQL Database, and Oracle Berkeley DB to store records as JSON documents. Traditional relational databases, such as PostgreSQL, are also constantly gaining more JSON capabilities. Oracle Database also supports JSON data natively with features such as transactions, indexing, declarative querying, and views.

In Java development, you will often need to read in JSON data, or provide JSON data as an output. You could of course do this on your own, or use an open source implementation. For Java developers, there are several options to choose from. Jackson is a very popular choice for processing JSON data in Java.

Maven Dependencies for Jackson

The Jackson library is composed of three components: Jackson Databind, Core, and Annotation. Jackson Databind has internal dependencies on Jackson Core and Annotation. Therefore, adding Jackson Databind to your Maven POM dependency list will include the other dependencies as well.
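
A typical declaration looks like this (the version shown is an assumption; use the current Jackson release):

```xml
<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
    <version>2.7.4</version>
</dependency>
```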

Spring Boot and Jackson

The above dependency declaration will work for other Java projects, but in a Spring Boot application, you may encounter errors such as this.

Jackson Dependency Conflict Error - NoSuchMethod Error

The Spring Boot parent POM includes Jackson dependencies. When you include the version number, thus overriding the Spring Boot curated dependency versions, you may encounter version conflicts.

The proper way to declare the Jackson dependency is to use the Spring Boot curated dependency by not including the version tag on the main Jackson library, like this.
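
That is, the same dependency with no version element:

```xml
<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
</dependency>
```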

NOTE: This problem is highly dependent on the version of Spring Boot you are using.

For more details on this issue, check out my post Jackson Dependency Issue in Spring Boot with Maven Build.

Reading JSON – Data Binding in Jackson

Data binding is a JSON processing model that allows for seamless conversion between JSON data and Java objects. With data binding, you create POJOs following JavaBeans convention with properties corresponding to the JSON data. The Jackson ObjectMapper is responsible for mapping the JSON data to the POJOs. To understand how the mapping happens, let’s create a JSON file representing data of an employee.

employee.json
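
The original file isn’t reproduced here, but a minimal employee.json consistent with the fields discussed below looks like this (the values are illustrative):

```json
{
  "name": "John Doe",
  "personalInformation": {
    "gender": "MALE",
    "maritalstatus": "married"
  },
  "address": {
    "street": "100 Main Street",
    "city": "St. Petersburg",
    "state": "FL",
    "zip": "33716"
  },
  "phoneNumbers": ["9000000000", "9000000001"]
}
```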

The preceding JSON is composed of several JSON objects with name-value pairs and a phoneNumbers array. Based on the JSON data, we’ll create two POJOs: Address and Employee. The Employee object will be composed of Address and will contain properties with getter and setter methods corresponding to the JSON constructs.

When Jackson maps JSON to POJOs, it inspects the setter methods. Jackson by default maps a key for the JSON field with the setter method name. For example, Jackson will map the name JSON field with the setName() setter method in a POJO.

With these rules in mind, let’s write the POJOs.

Address.java
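
A sketch of the Address POJO (field names assume the address object in the JSON above):

```java
public class Address {
    private String street;
    private String city;
    private String state;
    private String zip;

    public String getStreet() { return street; }
    public void setStreet(String street) { this.street = street; }

    public String getCity() { return city; }
    public void setCity(String city) { this.city = city; }

    public String getState() { return state; }
    public void setState(String state) { this.state = state; }

    public String getZip() { return zip; }
    public void setZip(String zip) { this.zip = zip; }
}
```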

Employee.java
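
And a sketch of the Employee POJO, composed of Address and matching the remaining JSON constructs:

```java
import java.util.List;
import java.util.Map;

public class Employee {
    private String name;
    private Map<String, String> personalInformation;
    private Address address;
    private List<String> phoneNumbers;

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public Map<String, String> getPersonalInformation() { return personalInformation; }
    public void setPersonalInformation(Map<String, String> personalInformation) { this.personalInformation = personalInformation; }

    public Address getAddress() { return address; }
    public void setAddress(Address address) { this.address = address; }

    public List<String> getPhoneNumbers() { return phoneNumbers; }
    public void setPhoneNumbers(List<String> phoneNumbers) { this.phoneNumbers = phoneNumbers; }
}
```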

With the POJOs ready to be populated with JSON data, let’s use ObjectMapper of Jackson to perform the binding.

ObjectMapperDemo.java
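
A minimal sketch of the demo class (the file path, method name, and logger usage are assumptions):

```java
import java.io.File;
import java.io.IOException;

import com.fasterxml.jackson.databind.ObjectMapper;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class ObjectMapperDemo {
    private final Logger logger = LoggerFactory.getLogger(this.getClass());

    public Employee readJsonWithObjectMapper() throws IOException {
        // Map the JSON file to the Employee POJO
        ObjectMapper objectMapper = new ObjectMapper();
        Employee employee = objectMapper.readValue(new File("employee.json"), Employee.class);
        logger.info("Employee name: {}", employee.getName());
        return employee;
    }
}
```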

In the ObjectMapperDemo class above, we created an ObjectMapper object and called its overloaded readValue() method, passing two parameters: a File object representing the JSON file as the first, and Employee.class as the target to map the JSON values to as the second. The readValue() method returns an Employee object populated with the data read from the JSON file.

The test class for ObjectMapperDemo is this.

ObjectMapperDemoTest.java

The output on running the test is this.

Output of ObjectMapperDemo

Simple Data Binding in Jackson

In the example above, we covered full data binding – a variant of Jackson data binding that reads JSON into application-specific JavaBeans types. The other type is simple data binding, where you read JSON into built-in Java types, such as Map and List, and also wrapper types, such as String, Boolean, and Number.

An example of simple data binding is binding the data of employee.json to a generic Map, like this.

ObjectMapperToMapDemo.java
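
A minimal sketch, reading the JSON from a FileInputStream into a Map as described below (class and method names are assumptions):

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Map;

import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class ObjectMapperToMapDemo {
    private final Logger logger = LoggerFactory.getLogger(this.getClass());

    public void readJsonToMap() throws IOException {
        ObjectMapper objectMapper = new ObjectMapper();
        try (InputStream inputStream = new FileInputStream("employee.json")) {
            // Bind the JSON to a generic Map instead of an application-specific POJO
            Map<String, Object> employeeMap =
                    objectMapper.readValue(inputStream, new TypeReference<Map<String, Object>>() {});
            for (Map.Entry<String, Object> entry : employeeMap.entrySet()) {
                logger.info("{} : {}", entry.getKey(), entry.getValue());
            }
        }
    }
}
```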

In the ObjectMapperToMapDemo class above, notice the overloaded readValue() method, where we used a FileInputStream to read employee.json. Other overloaded versions of this method allow you to read JSON from a String, Reader, URL, or byte array. Once ObjectMapper maps the JSON data to the declared Map, we iterated over the map and logged its entries.

The test class for the ObjectMapperToMapDemo class is this.

ObjectMapperToMapDemoTest.java

The output on running the test is this.
Output of ObjectMapperToMapDemo

With simple data binding, we don’t need to write JavaBeans with properties corresponding to the JSON data. This is particularly useful in situations where we don’t know in advance the structure of the JSON data to process. In such situations, another approach is to use the JSON Tree Model that I will discuss next.

Reading JSON into a Tree Model

In the JSON Tree Model, the ObjectMapper constructs a hierarchical tree of nodes from JSON data. If you are familiar with XML processing, you can relate the JSON Tree Model to the XML DOM Model. In the JSON Tree Model, each node in the tree is of the JsonNode type and represents a piece of JSON data. In the Tree Model, you can randomly access nodes with the different methods that JsonNode provides.

The code to generate a Tree Model of the employee.json file is this.
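
A sketch of that code (class and method names follow the discussion below):

```java
import java.io.File;
import java.io.IOException;

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class JsonNodeDemo {
    private final Logger logger = LoggerFactory.getLogger(this.getClass());
    private ObjectMapper objectMapper;
    private JsonNode rootNode;

    public JsonNodeDemo() throws IOException {
        // Construct the tree of JsonNode objects from the JSON document
        objectMapper = new ObjectMapper();
        rootNode = objectMapper.readTree(new File("employee.json"));
    }

    public String readJsonWithJsonNode() throws IOException {
        // Write the tree to a string using the default pretty printer
        String prettyJson = objectMapper.writerWithDefaultPrettyPrinter().writeValueAsString(rootNode);
        logger.info(prettyJson);
        return prettyJson;
    }
}
```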

In the constructor of the JsonNodeDemo class above, we created an ObjectMapper instance and called its readTree() method passing a File object representing the JSON document as parameter. The readTree() method returns a JsonNode object that represents the hierarchical tree of employee.json. In the readJsonWithJsonNode() method, we used the ObjectMapper to write the hierarchical tree to a string using the default pretty printer for indentation.

The output on running the code is this.

Next, let’s access the value of the name node with this code.
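
A sketch of that access, as another method of JsonNodeDemo (the method name is an assumption):

```java
public String readNameOfEmployee() {
    // path() returns the "name" node; asText() returns its value as a string
    String name = rootNode.path("name").asText();
    logger.info("Name: {}", name);
    return name;
}
```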

In the code above, we called the path() method on the JsonNode object that represents the root node. To the path() method, we passed the name of the node to access, which in this example is name. We then called the asText() method on the JsonNode object that the path() method returns. The asText() method that we called returns the value of the name node as a string.

The output of this code is:

Next, let’s access the personalInformation and phoneNumbers nodes.
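
A sketch of the two methods follows, shown standalone for clarity and laid out so the line references in the discussion below match: Line 4 is the get() call, Lines 5 through 9 perform the simple data binding, and Line 15 calls elements(). (Add the java.util.Iterator, java.util.Map, and TypeReference imports to JsonNodeDemo.)

```java
public void readPersonalInformation() throws IOException {
    ObjectMapper objectMapper = new ObjectMapper();
    JsonNode rootNode = objectMapper.readTree(new File("employee.json"));
    JsonNode personalInformationNode = rootNode.get("personalInformation");
    Map<String, String> personalInformationMap = objectMapper.convertValue(personalInformationNode, new TypeReference<Map<String, String>>() {});
    for (Map.Entry<String, String> entry : personalInformationMap.entrySet()) {
        logger.info("{} : {}", entry.getKey(), entry.getValue());
    }
}

public void readPhoneNumbers() throws IOException {
    ObjectMapper objectMapper = new ObjectMapper();
    JsonNode rootNode = objectMapper.readTree(new File("employee.json"));
    JsonNode phoneNumbersNode = rootNode.path("phoneNumbers");
    Iterator<JsonNode> elements = phoneNumbersNode.elements();
    while (elements.hasNext()) {
        logger.info("Phone number: {}", elements.next().asText());
    }
}
```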

A few key things to note in the code above. In Line 4, notice that we called the get() method instead of path() on the root node. Both methods perform the same function – they return the specified node as a JsonNode object. The difference is how they behave when the specified node is not present or the node doesn’t have an associated value. When the node is not present or does not have a value, the get() method returns a null value, while the path() method returns a JsonNode object that represents a “missing node”. The “missing node” returns true for a call to the isMissingNode() method. The remaining code, from Line 5 to Line 9, is simple data binding, where we mapped the personalInformation node to a Map<String, String> object.

In the readPhoneNumbers() method, we accessed the phoneNumbers node. Note that in employee.json, phoneNumbers is represented as a JSON array (Enclosed within [] brackets). After mapping, we accessed the array elements with a call to the elements() method in Line 15. The elements() method returns an Iterator of JsonNode that we traversed and logged the values.

The output on running the code is this.

The complete code of generating the JSON tree model and accessing its nodes is this.

JsonNodeDemo.java

The test class for the JsonNodeDemo class above is this.

JsonNodeDemoTest.java

Writing JSON using Jackson

JSON data binding is not only about reading JSON into Java objects. With the ObjectMapper of JSON data binding, you can also write the state of Java objects to a JSON string or a JSON file.

Let’s write a class that uses ObjectMapper to write an Employee object to a JSON string and a JSON file.

JsonWriterObjectMapper.java

In Line 22 of the code above, we used an ObjectMapper object to write an Employee object to a JSON string using the default pretty printer for indentation. In Line 24, we called the configure() method to configure ObjectMapper to indent the JSON output. In Line 25, we called the overloaded writeValue() method to write the Employee object to the file provided as the first parameter. The other overloaded writeValue() methods allow you to write JSON output using OutputStream and Writer.

The test code for the JsonWriterObjectMapper class is this.

JsonWriterObjectMapperTest.java

In the test class above, we used the JUnit @Before annotation on the setUpEmployee() method to initialize the Address and Employee classes. If you are new to JUnit, check out my series on JUnit starting from here. In the @Test annotated method, we called the writeEmployeeToJson() method of JsonWriterObjectMapper, passing the initialized Employee object.

The output on running the test is this.
Output of JsonWriterObjectMapper

Spring Support for Jackson

Spring support for Jackson has been improved lately to be more flexible and powerful. If you are developing a Spring RESTful web service using the Spring RestTemplate API, you can utilize Spring’s Jackson JSON API integration to send back a JSON response. In addition, Spring MVC now has built-in support for Jackson’s Serialization Views. Jackson also provides first class support for some data formats other than JSON; Spring Framework and Spring Boot provide built-in support for Jackson-based XML. In future posts, I will discuss more advanced JSON-based processing with Jackson, particularly the Jackson Streaming Model for JSON, and Jackson-based XML processing.

Conclusion

Jackson is one of several available libraries for processing JSON. Some others are Boon, GSON, and the Java API for JSON Processing.

One advantage that Jackson has over other libraries is its maturity. Jackson has evolved enough to become the preferred JSON processing library of some major web services frameworks, such as Jersey, RESTEasy, Restlet, and Apache Wink. Open source enterprise projects, such as Hadoop and Camel, also use Jackson for handling data definition in enterprise integration.

Jackson Dependency Issue in Spring Boot with Maven Build

Recently while working with Jackson within a Spring Boot project, I encountered an issue I’d like to share with you.

Jackson is currently the leading option for parsing JSON in Java. The Jackson library is composed of three components: Jackson Databind, Core, and Annotation. Jackson Databind has internal dependencies on Jackson Core and Annotation. Therefore, adding Jackson Databind to your Maven POM dependency list will include the other dependencies as well. To use the latest Jackson library, you need to add the following dependency to the Maven POM.
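
For example (the version is illustrative; this post predates current releases):

```xml
<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
    <version>2.7.3</version>
</dependency>
```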

The above dependency works well in other Java projects, but unfortunately in a Spring Boot 1.3.x application, you may stumble upon this error.

Jackson Dependency Conflict Error in Spring Boot

You may see several different errors. Here are some additional examples.

This error occurs due to a Jackson dependency conflict. Our Spring Boot project inherits from the Spring Boot parent POM, which includes Jackson. Without any Jackson dependency in the project POM, let’s print the Maven dependency tree to view the built-in Jackson dependencies.
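
Run this from the project root:

```
mvn dependency:tree
```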

The output is this.

As you can see above, the Spring Boot parent POM uses an older version of Jackson (2.6.5).

Now, if we add the Jackson dependency to our Maven POM using the version like this:

Maven will pull in older versions of jackson-annotations and jackson-core, overriding the newer ones. We can see this by running the dependency:tree command again.

I did not expect Maven to behave this way in dependency resolution. The POM for the primary Jackson artifact does call for the proper version. However, this seems to be getting overridden by the versions specified explicitly in the Spring Boot parent POM.

Ideally, when working with Spring Boot, you should leverage the curated dependencies in the Spring Boot parent POM. In this case, we drop the version from the Jackson dependency so it will be inherited from the Spring Boot parent POM.
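
The declaration becomes simply:

```xml
<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
</dependency>
```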

Now, the version will be inherited from the parent POM, and the issue will be resolved.

But what if we want to use a newer version of Jackson? The proper way is to exclude the inherited dependencies and explicitly add their new versions, like this.
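
A sketch of that configuration (versions are illustrative):

```xml
<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
    <version>2.7.3</version>
    <exclusions>
        <exclusion>
            <groupId>com.fasterxml.jackson.core</groupId>
            <artifactId>jackson-annotations</artifactId>
        </exclusion>
        <exclusion>
            <groupId>com.fasterxml.jackson.core</groupId>
            <artifactId>jackson-core</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-annotations</artifactId>
    <version>2.7.3</version>
</dependency>
<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-core</artifactId>
    <version>2.7.3</version>
</dependency>
```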

This POM configuration will override the Jackson dependencies set in the Spring Boot parent POM.

This post is specific to Spring Boot version 1.3.3. Naturally, the Spring Boot team will be evolving the version of Jackson used in future releases.

For developers accustomed to working with Maven dependencies, it’s a very easy mistake to include the version in the dependency declaration. This can cause some unintended issues due to version conflicts. It is a bit of a paradigm shift for experienced developers to depend on the Spring Boot parent POM. Some won’t want to relinquish control, but in the long run I expect you will be better off leveraging the Spring Boot curated dependencies.

Using YAML in Spring Boot to Configure Logback

When it comes to logging in enterprise applications, logback makes an excellent choice – it’s simple and fast, has powerful configuration options, and comes with a small memory footprint. I introduced logback in my introductory post, Logback Introduction: An Enterprise Logging Framework. In a series of posts on logback, I’ve also discussed how to configure Logback using XML and Groovy, and how to use Logback in Spring Boot applications. YAML is just one option you can use for Spring Boot configuration.

In my earlier post on Using Logback with Spring Boot, I used a properties file to configure logback. In this post, I’ll discuss how to configure Logback using Spring Boot’s YAML configuration file. If you’re a seasoned user of the Spring Framework, you’ll find YAML a relatively new configuration option available to you when using Spring Boot.

Creating a Logger

We’ll use a simple Spring Boot web application and configure logback with YAML in that application. Please refer to my previous post, where I wrote about creating a web application using Spring Boot. This post expands upon concepts from the previous post, but is focused on the use of YAML configuration with Spring Boot.

The application from the previous post contains a controller, IndexController to which we’ll add logging code, like this.

IndexController.java

Since Logback is the default logger under Spring Boot, you do not need to include any additional dependencies for Logback or SLF4J.

Run the SpringBootWebApplication main class. When the application starts, access it from your browser with the URL, http://localhost:8080

The logging output on the IntelliJ console is this.

Default Logging Output

In the output above, the logging messages from IndexController are sent to the console by the logback root logger. Notice that the debug message of IndexController is not getting logged. Logback by default will log debug level messages. However, the Spring Boot team provides us a default configuration for Logback in the Spring Boot default logback configuration file, base.xml. In addition, Spring Boot provides two preconfigured appenders through the console-appender.xml and file-appender.xml files. The base.xml file references both of them.

The code of the base.xml file from the spring-boot github repo is this.

Here you can see that Spring Boot has overridden the default logging level of logback by setting the root logger to INFO, which is the reason we did not see the debug messages in the example above. As we’ll see in the next section, changing log levels in Spring Boot is very simple.

YAML Configuration via Spring Boot’s application.yml File

In a Spring Boot application, you can externalize configuration to work with the same application code in different environments. The application.yml file is one of the many ways to externalize configuration. Let’s use it to externalize logging configuration.

If you wish to use YAML for your Spring configuration, you simply need to create a YAML file. Spring Boot will look for an application.yml file on the classpath. In the default structure of a Spring Boot web application, you can place the file under the resources directory. To parse YAML files, you need a YAML parser. Out of the box, Spring Boot uses SnakeYAML, a YAML parser. There is nothing you need to do to enable YAML support in Spring Boot. By default under Spring Boot, YAML is ready to go.

Here is an example of an application.yml file with basic configurations of logging levels.
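
A sketch of such a file (the package names follow this project; add or remove loggers as needed):

```yaml
logging:
  level:
    org.springframework.web: DEBUG
    guru.springframework.controllers: DEBUG
    org.hibernate: DEBUG
```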

In the configuration code above, we set the log levels of the Spring framework, any application logger of the guru.springframework.controllers package and its sub-packages, and Hibernate to DEBUG. Although we are not using Hibernate in our example application, I have added the Hibernate logging configuration for demonstration purposes so you can see how to configure logging for various Java packages.

When you run the application, you’ll notice DEBUG messages of the Spring frameworks’ startup on the console. When you access the application, notice the log messages of the IndexController now include the debug message.

Logging Level Configuration in YAML with Spring Boot
At this point, log messages are only being sent to the console. You can configure Spring Boot to additionally log messages to log files. You can also set the patterns of log messages both for console and file separately, like this.
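
For example (the file path and patterns are illustrative):

```yaml
logging:
  file: logs/spring-boot-logging.log
  pattern:
    console: "%d{yyyy-MM-dd HH:mm:ss} - %msg%n"
    file: "%d{yyyy-MM-dd HH:mm:ss} [%thread] %-5level %logger{36} - %msg%n"
  level:
    org.springframework.web: DEBUG
    guru.springframework.controllers: DEBUG
    org.hibernate: DEBUG
```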

With the updated YAML configuration, here is an example of the logging output.

Logging to File and Console with updated YAML configuration

Spring Active Profile Properties in YAML

Spring Profiles are commonly used to configure Spring for different deployment environments. For example, while developing on your local machine, it is common to set the log level to DEBUG. This will give you detailed log messages for your development use. On production, it’s typical to set the log level to WARN or above. This is to avoid filling your logs with excessive debug information and incurring the overhead of excessive logging.

You can segregate a YAML configuration into separate profiles with a spring.profiles key for each profile. Then, add the required logging configuration code to each profile and ensure that the profile sections are separated by --- lines. In the same file, you can use the spring.profiles.active key to set the active profile. However, this is not mandatory. You can also set the active profile programmatically, or pass it as a system property or JVM argument while running the application.

The complete application.yml file with logging configuration based on Spring profiles is this.
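
A sketch of the profile-based file (log levels and file names are illustrative):

```yaml
spring:
  profiles:
    active: dev
---
spring:
  profiles: dev
logging:
  file: logs/spring-boot-logging-dev.log
  level:
    org.springframework.web: DEBUG
    guru.springframework.controllers: DEBUG
---
spring:
  profiles: production
logging:
  file: logs/spring-boot-logging-production.log
  level:
    org.springframework.web: WARN
    guru.springframework.controllers: WARN
```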

In the configuration code above, we defined two profiles: dev and production with different logging configurations. We also set the active profile to dev.

When you run and access the application, the logging configuration of the dev profile will be used and the logging outputs will be similar to this.
Logging Output of dev Profile

Now let’s make production the active profile by passing the -Dspring.profiles.active=production JVM argument.

In IntelliJ, select Run-> Edit Configurations, and set the JVM argument in the Run/Debug Configurations dialog box that appears, like this.
Run Debug Configurations Dialog Box

The logging output on accessing the application with production as the active profile is this.

Logging Output of production Profile

Separating Profiles in YAML Configuration Files

A Spring Boot configuration file is not limited to logging configurations only. Typically, several different types of configurations go into the different profiles of an enterprise application. Configurations can be of bean registrations, database connection settings, SMTP settings, etc. spread across development, testing, staging, production, and other profiles.

It’s both tedious and error prone to maintain a single file with multiple profiles with each profile containing different types of configuration settings. Remember, much more time is spent reading code and configuration files than is spent writing it. At some point in the future, yourself, or someone else, will be reading or updating the configuration files. And for monolithic configuration files with low readability, the chances of errors creeping in is high. Spring addresses such challenges by allowing separate configuration files – one for each profile. With separate configuration files, you improve the long term maintainability of your application.

Each such configuration file must follow the application-<profile>.yml naming convention. For example, for the dev and production profiles, you need the application-dev.yml and application-production.yml files on the classpath. You should also add an application-default.yml file containing default configurations. When no active profile is set, Spring Boot falls back to the default configurations in application-default.yml.

It’s important to note that if you have an application.yml file (with no suffix) on your path, it will always be included by Spring, regardless of which profiles are or are not active.

The project structure of the Spring Boot web application with different profile-specific configuration files is this.
Profile-specific YAML Configuration Files for Spring Boot

Following are the code for each of the configuration files.

application-default.yml

application-dev.yml

application-production.yml

Test the application by first starting it without any profile, then with the dev profile, and finally the production profile. Ensure that the expected configurations are being used for the different environments.

Conclusion

The YAML configuration file in Spring Boot provides a very convenient syntax for storing logging configurations in a hierarchical format. Like Properties configuration, YAML configuration cannot handle some advanced features, such as different types of appender configurations, or encoder and layout configurations.

Functionally, YAML is nearly the same as using a traditional properties file. Personally, I find YAML fun to write in. It feels more expressive than the old school properties files and it has a nice clean syntax. Often you don’t need many of the more advanced logging features of logback. So you’re fine using the simplicity of the YAML file configuration. For advanced logging configurations using XML and Groovy, explore my earlier posts on them available here, and here.

I have encountered one issue with using YAML files for Spring Boot configuration. When setting up a JUnit test outside of Spring Boot, it was problematic to read the YAML properties file with just Spring. Remember, YAML support is specific to Spring Boot. I suspect at some point it will get included into the core functionality of Spring (If it has not already been).

Using Logback with Spring Boot

Logback makes an excellent logging framework for enterprise applications – it’s fast, has simple but powerful configuration options, and comes with a small memory footprint. I introduced logback in my introductory post, Logback Introduction: An Enterprise Logging Framework. In a series of posts on Logback, I’ve also discussed how to configure logback using XML and Groovy. The posts are available as Logback Configuration: using XML and Logback Configuration: using Groovy.

In this post, I’ll discuss how to use Logback with Spring Boot. While there are a number of logging options for Java, the Spring Boot team chose to use Logback for the default logger. Like many things in Spring Boot, Logback by default gets configured with sensible defaults. Out of the box, Spring Boot makes Logback easy to use.

Creating Loggers

In a previous post, I wrote about creating a web application using Spring Boot. We’ll configure logback for this application. The application contains a controller, IndexController to which we’ll add logging code. The code of IndexController is this.

Let’s add a SpringLoggingHelper class with logging code to the application. Although this class doesn’t do anything except emit logging statements, it will help us understand configuring logging across different packages. Here is the code of SpringLoggingHelper:
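
A minimal sketch of the helper (the method name is an assumption; the package matches the guru.springframework.helpers logger configured later in this post):

```java
package guru.springframework.helpers;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class SpringLoggingHelper {
    private static final Logger logger = LoggerFactory.getLogger(SpringLoggingHelper.class);

    public void helpMethod() {
        // One statement per level, so we can see which levels each configuration lets through
        logger.debug("This is a debug message");
        logger.info("This is an info message");
        logger.warn("This is a warn message");
        logger.error("This is an error message");
    }
}
```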

In both the classes above, we wrote logging code against the SLF4J API. SLF4J is a façade for commonly used logging frameworks, such as Java Util Logging, Log4J 2, and Logback. By writing against SLF4J, our code remains decoupled from Logback, thus providing us the flexibility to plug in a different logging framework if required later.

If you are wondering about the SLF4J and Logback dependencies, you don’t need to specify any. Spring Boot contains them too. Assuming you’re using Maven or Gradle to manage your Spring Boot project, the necessary dependencies are part of the dependencies under Spring Boot.

Run the SpringBootWebApplication main class. When the application starts, access it from your browser with the URL, http://localhost:8080

The logging output on the IntelliJ console is this.

Logging Output with Default Configuration in Spring Boot

We haven’t written any configuration for Logback. The output of both the IndexController and SpringLoggingHelper classes is from the logback root logger. Notice that the debug messages are not getting logged. Logback by default will log debug level messages. However, the Spring Boot team provides us a default configuration for Logback in the Spring Boot default logback configuration file, base.xml. In addition, Spring Boot provides two preconfigured appenders through the console-appender.xml and file-appender.xml files. The base.xml file references both of them.

Here is the code of the base.xml file from the spring-boot github repo.

Here you can see that Spring Boot has overridden the default logging level of Logback by setting the root logger to INFO, which is the reason we did not see the debug messages in the example above. As we’ll see in the next section, changing log levels in Spring Boot is very simple.


Configuration via Spring Boot’s application.properties File

In a Spring Boot application, you can externalize configuration to work with the same application code in different environments. The application.properties file is likely the most popular of several different ways to externalize Spring Boot configuration properties. In the default structure of a Spring Boot web application, you can locate the application.properties file under the Resources folder. In the application.properties file, you can define log levels of Spring Boot, application loggers, Hibernate, Thymeleaf, and more. You can also define a log file to write log messages to in addition to the console.

Here is an example of an application.properties file with logging configurations.
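
A sketch (the log file path matches the one referenced below; levels are illustrative):

```properties
logging.level.org.springframework.web=INFO
logging.level.guru.springframework.controllers=DEBUG
logging.level.org.hibernate=ERROR
logging.file=logs/spring-boot-logging.log
```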

Note: There is also a logging.path property to specify a path for a logging file. If you use it, Spring Boot creates a spring.log file in the specified path. However, you cannot specify both the logging.file and logging.path properties together. If done, Spring Boot will ignore both.

When you run the main class now and access the application, log messages from IndexController and SpringLoggingHelper are logged to the console and the logs/spring-boot-logging.log file.
Spring Boot Logging Output with application.properties

In the output, notice that debug and higher level messages of IndexController got logged to the console and file. This was because in the application.properties file, we specified DEBUG as the log level for the guru.springframework.controllers package that IndexController is part of. Since we did not explicitly configure the SpringLoggingHelper class, the default configuration of base.xml file was used. Therefore, only INFO and higher level messages of SpringLoggingHelper got logged.

You can see how simple this is to use when you need to get more detailed log messages for a specific class or package.

Logback Configuration through an External File

Logback configuration through application.properties file will be sufficient for many Spring Boot applications. However, large enterprise applications are likely to have far more complex logging requirements. As I mentioned earlier, Logback supports advanced logging configurations through XML and Groovy configuration files.

In a Spring Boot application, you can specify a Logback XML configuration file as logback.xml or logback-spring.xml in the project classpath. The Spring Boot team, however, recommends using the -spring variant for your logging configuration: logback-spring.xml is preferred over logback.xml. If you use the standard logback.xml configuration, Spring Boot may not be able to completely control log initialization.

Here is the code of the logback-spring.xml file.
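
A sketch of the file, laid out so the include lands on Line 3, as referenced below (the logger configuration is illustrative):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <include resource="org/springframework/boot/logging/logback/base.xml"/>
    <logger name="guru.springframework.controllers" level="DEBUG" additivity="false">
        <appender-ref ref="CONSOLE"/>
        <appender-ref ref="FILE"/>
    </logger>
</configuration>
```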

In the configuration code above, we included the base.xml file in Line 3. Notice that we didn’t configure any appenders. Rather, we relied on the CONSOLE and FILE appenders, which are provided by Spring Boot.

With the updated Spring Boot Logback configuration, our logging output now looks like this:

Logging Output of Spring Boot XML Configuration

Note: Spring Boot expects the logback-spring.xml configuration file to be on the classpath. However, you can store it in a different location and point to it using the logging.config property in application.properties.

Spring Boot Profiles in Logging

While developing on your local machine, it is common to set the log level to DEBUG. This will give you detailed log messages for your development use. On production, it’s typical to set the log level to WARN or above. This is to avoid filling your logs with excessive debug information and logging overhead while running in production. While logging is very efficient, there is still a cost.

Spring Boot has addressed these requirements by extending Spring profiles for logback configuration with the <springProfile> element. Using this element in your logback-spring.xml file, you can optionally include or exclude sections of logging configuration based on the active Spring profile.

Note: Support for <springProfile> in logback configuration is available from the Spring Boot 1.3.0.M2 milestone onwards.

Here is an XML example to configure Logback using active Spring profiles.
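
A sketch matching the description below:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <include resource="org/springframework/boot/logging/logback/base.xml"/>

    <springProfile name="dev,staging">
        <logger name="guru.springframework.controllers" level="DEBUG" additivity="false">
            <appender-ref ref="CONSOLE"/>
        </logger>
    </springProfile>

    <springProfile name="production">
        <logger name="guru.springframework.controllers" level="WARN" additivity="false">
            <appender-ref ref="FILE"/>
        </logger>
    </springProfile>
</configuration>
```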

In the configuration code above, for the dev and staging profiles, we configured the guru.springframework.controllers logger to log DEBUG and higher level messages to the console. For the production profile, we configured the same logger to log WARN and higher level messages to a file.

To pass a profile to the application, run the application with the -Dspring.profiles.active=<profile name> JVM argument.

For local development, in IntelliJ, select Run-> Edit Configurations, and set the JVM argument in the Run/Debug Configurations dialog box, like this.

setting active profiles for logging in IntelliJ

Now, when we run the application with the dev profile, we will see the following log output.

Logging Output with Spring Active Profiles

In the output above, observe the logging output of IndexController. DEBUG and higher log messages got logged to console based on the configuration of the dev profile. You can restart the application with the production profile to ensure that WARN and higher log messages gets logged to the file.

Conditional Processing of Configuration File

Logback supports conditional processing of configuration files with the help of the Janino library. You can use <if>, <then> and <else> elements in a configuration file to target several environments. To perform conditional processing, add the Janino dependency to your Maven POM, like this.
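
The version shown is illustrative:

```xml
<dependency>
    <groupId>org.codehaus.janino</groupId>
    <artifactId>janino</artifactId>
    <version>2.7.8</version>
</dependency>
```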

The complete logback-spring.xml file with conditional processing logic is this.
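
A sketch of the conditional configuration. The property() lookup in the Janino condition falls back to system properties, so this works when the profile is passed with -Dspring.profiles.active:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <include resource="org/springframework/boot/logging/logback/base.xml"/>

    <if condition='property("spring.profiles.active").contains("dev")'>
        <then>
            <logger name="guru.springframework.helpers" level="DEBUG" additivity="false">
                <appender-ref ref="CONSOLE"/>
            </logger>
        </then>
        <else>
            <logger name="guru.springframework.helpers" level="WARN" additivity="false">
                <appender-ref ref="FILE"/>
            </logger>
        </else>
    </if>
</configuration>
```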

In the code above, we specified a condition in the <if> element to check whether the current active profile contains dev. If the condition evaluates to true, the configuration code within the <then> element executes. In the <then> element, we configured guru.springframework.helpers to log DEBUG and higher messages to console. We used the <else> element to configure the logger to log WARN and higher messages to the log file. The <else> element executes for any profiles other than dev.

When you run the application with the production profile and access it, both loggers will log WARN and higher messages to the log file, similar to this.
Logging Output with production Profile

For the dev profile, both loggers will log DEBUG and higher messages to the console, similar to this.
Logging Output for dev Profile

Logback Auto-Scan Issue with Spring Boot

In a logback-spring.xml file, you can enable auto-scan of the configuration by setting the scan="true" attribute. With auto-scan enabled, Logback scans for changes in the configuration file. For any changes, Logback automatically reconfigures itself with them. You can specify a scanning period by passing a time period to the scanPeriod attribute, with a value specified in units of milliseconds, seconds, minutes, or hours.
For example, this code tells Logback to scan logback-spring.xml after every 10 seconds.
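
Along these lines:

```xml
<configuration scan="true" scanPeriod="10 seconds">
    <!-- appenders and loggers go here -->
</configuration>
```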

One limitation of Spring Boot Logback is that with springProfile and springProperty, enabling auto-scan results in an error.

The error occurs because of incompatibility issues. Spring Boot uses the JoranConfigurator subclass to support springProfile and springProperty. Unfortunately, Logback’s ReconfigureOnChangeTask doesn’t provide a hook to plug it in.


Conclusion

The popularity of Logback is trending in the open source community. A number of popular open source projects use Logback for their logging needs. Apache Camel, Gradle, and SonarQube are just a few examples.

Logback clearly has the capabilities to handle the needs of logging in a complex enterprise application. So it’s no wonder the Spring Boot team selected Logback for the default logging implementation. As you’ve seen in this post, the Spring Boot team has provided a nice integration with Logback. Out of the box, Logback is ready to use with Spring Boot. In this post, you’ve seen how easy it is to configure Logback in Spring Boot as your logging requirements evolve.

Logback Configuration: using Groovy

Logback is designed to be faster and have a smaller memory footprint than the other logging frameworks around. If you are new to Logback, you should checkout my introductory post on Logback: Logback Introduction: An Enterprise Logging Framework.

Logback supports configuration through XML and Groovy. I explained XML configuration in my previous post, Logback Configuration: using XML. We’ll use similar configuration options for Logback, but this time in Groovy.

Because of its simplicity and flexibility, Groovy is an excellent tool for configuring Logback. Groovy is intuitive and has an easy-to-learn syntax. Even if you aren’t familiar with it, you should still easily understand, read, and write Groovy configurations for Logback.

Creating a Logger

We will start by creating an application logger. As I mentioned in my earlier post here, for a Spring Boot application, we don’t need any additional Logback dependencies in our Maven POM. Groovy support is already there. We just need to start using it. Let’s start with creating a class and test for our example.

LogbackConfigGroovy.java

Our test class uses JUnit to unit test the LogbackConfigGroovy class.

LogbackConfigGroovyTest.java

The Groovy Configuration File

When Logback starts, it searches for a logback.groovy file in the classpath to configure itself. If you store the file in a different location outside the classpath, you will need to use the logback.configurationFile system property to point to the location, like this.
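
For example, when launching the application (the path is illustrative):

```
java -Dlogback.configurationFile=/opt/config/logback.groovy -jar myapp.jar
```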

In a logback.groovy file, you can enable auto-scan using the scan() method. With auto-scan enabled, Logback scans for changes in the configuration file. For any changes, Logback automatically reconfigures itself with them. By default, Logback scans for changes once every minute. You can specify a different scanning period by passing a scan period, with a value specified in units of milliseconds, seconds, minutes, or hours, as a parameter to scan(). For example, scan("30 seconds") tells Logback to scan logback.groovy every 30 seconds.

In logback.groovy, you can define one or more properties, like this.
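
For example (the values are illustrative; the second line refers back to the first):

```groovy
def LOG_PATH = "logs"
def LOG_ARCHIVE = "${LOG_PATH}/archive"
```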

Configuration code after a property declaration can refer to the property with the ${property_name} syntax, as shown in the second line of the code above. If you’re familiar with Spring, you’ll find this syntax similar to SpEL.

Console and File Appenders

You declare one or more appenders with the appender() method. This method takes the name of the appender being configured as its first mandatory argument. The second mandatory argument is the class of the appender to instantiate. An optional third element is a closure, an anonymous block containing further configuration instructions.

The code to create a Logback console and a file appender is this.
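
A sketch of that configuration (file paths and patterns are illustrative; the layout matches the line references below):

```groovy
appender("Console-Appender", ConsoleAppender) {
    encoder(PatternLayoutEncoder) {
        pattern = "%msg%n"
    }
}
appender("File-Appender", FileAppender) {
    file = "${LOG_PATH}/logfile.log"
    encoder(PatternLayoutEncoder) {
        pattern = "%msg%n"
        outputPatternAsHeader = true
    }
}
```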

In the code above:

  • Line 1: We called the appender() method passing Console-Appender as the appender name and ConsoleAppender as the appender implementation class.
  • Line 2: We injected an encoder of type PatternLayoutEncoder into the appender.
  • Line 3: We set the pattern property of the encoder to the %msg%n conversion pattern. A conversion pattern is composed of literal text and format control expressions called conversion specifiers. You can learn more about pattern layout and conversion specifiers here.
  • Line 6 – Line 12: We created a file appender named File-Appender of the FileAppender type, and set the file property to a log file. We injected an encoder into the appender and set the pattern and outputPatternAsHeader properties of the encoder. The outputPatternAsHeader property, when set to true, inserts the pattern used for the log output at the top of log files.

We will now configure an application-specific logger along with the root logger to use the console and file appenders.

You use the logger() method to configure a logger. This method accepts the following parameters.

  • A string specifying the name of the logger.
  • One of the OFF, ERROR, WARN, INFO, DEBUG, TRACE, or ALL fields of the Level class to represent the level of the designated logger.
  • An optional List containing one or more appenders to be attached to the logger.
  • An optional Boolean value indicating additivity. The default value is true. We will come to additivity a bit later.

The code to configure the loggers is this.
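
A sketch (the package name is lowercased here, as Java package conventions dictate):

```groovy
logger("guru.springframework.blog.logbackgroovy", INFO, ["File-Appender"])
root(INFO, ["Console-Appender"])
```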

In the code above, we configured all loggers of the guru.springframework.blog.Logbackgroovy package and its sub-packages to log INFO and higher level messages to the configured File-Appender. We also configured the root logger to log INFO and higher level messages to the configured Console-Appender.

The output on running the test class, LogbackConfigGroovyTest is this.
Output of Console and File Appender

Rolling File Appender

The rolling file appender supports writing to a file and rolls the file over according to one of your pre-defined policies. To learn more about the rolling file appender and its policies, refer to the Logback manual.

The code to configure a rolling file appender is this.
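
A sketch, laid out to match the line references below. The paths, history, and size cap are illustrative, and FileSize is assumed to be among Logback’s default Groovy imports:

```groovy
appender("RollingFile-Appender", RollingFileAppender) {
    file = "${LOG_PATH}/rollingfile.log"
    rollingPolicy(TimeBasedRollingPolicy) {
        fileNamePattern = "${LOG_ARCHIVE}/rollingfile.log.%d{yyyy-MM-dd}.log"
        maxHistory = 30
        totalSizeCap = FileSize.valueOf("1GB")
    }
    encoder(PatternLayoutEncoder) {
        pattern = "%msg%n"
    }
}
```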

In the code above:

  • Line 1: We called the appender() method passing RollingFile-Appender as the appender name and RollingFileAppender as the appender class.
  • Line 2: We set the file property of the appender to specify the rolling file.
  • Line 3: We injected a time-based rolling policy of type TimeBasedRollingPolicy into the appender. A time-based rolling policy performs a rollover once the date/time pattern no longer applies to the active log file.
  • Line 4 – Line 6: We set the fileNamePattern, maxHistory, and totalSizeCap properties of the policy. The fileNamePattern property defines a file name pattern for archived log files. The rollover period is inferred from the value of fileNamePattern, which in the code example is set for daily rolling. The maxHistory property sets the maximum number of archive files to keep, before deleting older files asynchronously. The totalSizeCap element sets the total size of all archive files. Oldest archives are deleted asynchronously when the total size cap is exceeded.

To use the rolling file appender, add the appender name in the list passed to the logger() method, like this.
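
For example:

```groovy
logger("guru.springframework.blog.logbackgroovy", INFO, ["RollingFile-Appender"])
```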

At this point, if you run the test class, a rolling log file named rollingfile.log is created under logs. To simulate a rollover, you can set the system clock one day ahead and run the test class again. A new rollingfile.log is created under logs and the previous file is archived in the logs/archive folder.
Output of Rolling File Appender

Async Appender

An async appender runs in a separate thread to decouple the logging overhead from the thread executing your code. To make an appender async, first call the appender() method passing a name for the async appender and an AsyncAppender object. Then, inject the appender to invoke asynchronously, like this.
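
A sketch (the async appender name is an assumption; appenderRef() attaches the appender defined earlier):

```groovy
appender("Async-Appender", AsyncAppender) {
    appenderRef("RollingFile-Appender")
}
```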

The code above makes the RollingFile-Appender appender asynchronous.

Once you define an async appender, you can use it in a logger like any other appender, as shown.
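
For example:

```groovy
logger("guru.springframework.blog.logbackgroovy", INFO, ["Async-Appender"])
```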

The complete code of the logback.groovy file is this.

Logback.groovy

In the configuration code above, observe Line 34. I included the console appender and passed a false parameter to logger(). I did this to disable additivity. With additivity disabled, Logback will use the Console-Appender of the application logger instead of the one configured for the root logger. Review the Logback manual for more information on additivity. As you may have noticed in the Groovy code, we intentionally skipped a number of import statements. To reduce unnecessary boilerplate code, Groovy includes several common types and packages by default.

Summary

If you work with both XML and Groovy to configure Logback, you will find the Groovy syntax less verbose, and so more readable. Also, the Groovy syntax, being a superset of Java syntax, is more intuitive for Java developers. On the other hand, XML, with its years of industry backing, is more popular and has a larger user base. You’ll find better IDE support when using XML, since it can be validated against the structure of an XML schema. Groovy doesn’t have this fallback. So while you may be writing syntactically correct Groovy code, it may fail to configure Logback in the way you intended.

The Logback team provides an online conversion tool to translate a logback.xml file into the equivalent logback.groovy configuration file. Although you cannot expect 100% accuracy, it’s a good tool to use for reference.

Personally, I feel you should not lock yourself into XML or Groovy. XML will provide you with highly structured configuration. Groovy gives you the freedom to do things programmatically, which you cannot do in a structured XML document. Most of the time, XML configuration will be just fine. But when you have a complex use case for your logging requirements, Groovy is a great tool you can use to configure Logback.

Logback Configuration: using XML

The whole purpose of logging gets defeated when the underlying logging framework becomes a bottleneck. Logging frameworks need to be fast, have a small memory footprint, and be easily configurable. Logback is a logging framework with those qualities. If you are new to Logback, I suggest going through my introductory post on Logback: Logback Introduction: An Enterprise Logging Framework.

Logback supports configuration through XML and Groovy. In this post, I’ll discuss how to configure Logback using an XML file.

Creating a Logger

We will start by creating an application logger and later configure it through XML. As mentioned here, if we are using Spring Boot, we don’t require any additional dependency declaration on Logback in our Maven POM. We can straightaway start writing logging code.

LogbackConfigXml.java

Our test class uses JUnit to unit test the preceding LogbackConfigXml class.

LogbackConfigXmlTest.java

The Logback Configuration File

For Logback configuration through XML, Logback expects a logback.xml or logback-test.xml file in the classpath. In a Spring Boot application, you can put the logback.xml file in the resources folder. If your logback.xml file is outside the classpath, you need to point to its location using the logback.configurationFile system property, like this.
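
For example, when launching the application (the path is illustrative):

```
java -Dlogback.configurationFile=/opt/config/logback.xml -jar myapp.jar
```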

In a logback.xml file, all the configuration options are enclosed within the <configuration> root element. In the root element, you can set the debug="true" attribute to inspect Logback’s internal status. You can also configure auto scanning of the configuration file by setting the scan="true" attribute. When you do so, Logback scans for changes in its configuration file. If Logback finds any changes, it automatically reconfigures itself with the changes. When auto scanning is enabled, Logback scans for changes once every minute. You can specify a different scanning period by setting the scanPeriod attribute, with a value specified in units of milliseconds, seconds, minutes, or hours, like this.
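
For example (the scan period value is illustrative):

```xml
<configuration debug="true" scan="true" scanPeriod="30 seconds">
    <!-- appenders and loggers go here -->
</configuration>
```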

The <configuration> root element can contain one or more properties in local scope, each specified using a <property> element. Such a property exists from the point of its definition in the configuration file until the interpretation/execution of the file completes. Configuration options in other parts of the file can access the property using the ${property_name} syntax. Declare properties in the local scope for values that might change in different environments. For example, path to log files, database connections, SMTP server settings, and so on.
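
A sketch of two such property declarations (the names match the discussion below; the values are illustrative):

```xml
<property name="LOG_PATH" value="logs"/>
<property name="LOG_ARCHIVE" value="${LOG_PATH}/archive"/>
```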

The configuration code above declares two properties, LOG_PATH and LOG_ARCHIVE whose values represent the paths to store log files and archived log files respectively.

At this point, one Logback element worth mentioning is <timestamp>. This element defines a property according to the current date and time – particularly useful when you log to a file. Using this property, you can create a new log file uniquely named by timestamp at each new application launch. The code to declare a timestamp property is this.
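
For example (the key name and pattern are illustrative):

```xml
<timestamp key="timestamp-by-second" datePattern="yyyyMMdd'T'HHmmss"/>
```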

In the code above, the datePattern attribute denotes the date pattern used to convert the current time following the conventions defined in SimpleDateFormat.

Next, we’ll look at how to use each of the declared properties from different appenders.

Console and File Appenders

You declare one or more appenders with the <appender> element, containing the mandatory name and class attributes. The name attribute specifies the appender name that loggers can refer to, whereas the class attribute specifies the fully qualified name of the appender class. The appender element can contain <layout> or <encoder> elements to define how logging events are transformed. Here is the configuration code to define console and file appenders:
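
A sketch of both appenders, using the properties declared earlier (paths and patterns are illustrative; the console appender uses a <layout>, the file appender an <encoder>):

```xml
<appender name="Console-Appender" class="ch.qos.logback.core.ConsoleAppender">
    <layout class="ch.qos.logback.classic.PatternLayout">
        <pattern>%msg%n</pattern>
    </layout>
</appender>

<appender name="File-Appender" class="ch.qos.logback.core.FileAppender">
    <file>${LOG_PATH}/logfile-${timestamp-by-second}.log</file>
    <encoder>
        <pattern>%msg%n</pattern>
        <outputPatternAsHeader>true</outputPatternAsHeader>
    </encoder>
</appender>
```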