
I have met many developers who refer to tests as “Unit Tests” when they are actually integration tests. In service layers, I’ve seen tests referred to as unit tests that are written with dependencies on real external resources, such as a database, web service, or message server. Those are integration tests. Even if you’re just using the Spring Context to auto-wire dependencies, your test is an integration test. Rather than using the real services, you can use Mockito mocks and spies to keep your tests unit tests and avoid the overhead of running integration tests.

This is not to say Integration tests are bad. There is certainly a role for integration tests. They are a necessity.

But compared to unit tests, integration tests are sloooowwwww. Very slow. Your typical unit test will execute in a fraction of a second. Even complex unit tests on obsolete hardware will still complete sub-second.

Integration tests, on the other hand, take several seconds to execute. It takes time to start the Spring Context. It takes time to start an H2 in-memory database. It takes time to establish a database connection.

While this may not seem like much, it adds up quickly on a large project. As you add more and more tests, your builds take longer and longer.

No developer wants to break the build. So we run all the tests to be sure. As we code, we’ll be running the full suite of tests multiple times a day. For your own productivity, the suite of tests needs to run quickly.

If you’re writing Integration tests where a unit test would suffice, you’re not only impacting your own personal productivity. You’re impacting the productivity of the whole team.

On a recent client engagement, the development team was very diligent about writing tests. Which is good. But the team favored writing integration tests. Frequently, integration tests were used where a unit test would have sufficed. The build was getting slower and slower. Because of this, the team started refactoring their tests to use Mockito mocks and spies to avoid the need for integration tests.

They were still testing the same objectives. But Mockito was being used to fill in for the dependency driving the need for the integration test.

For example, Spring Boot makes it easy to test against an H2 in-memory database using JPA and repositories supplied by Spring Data JPA.

But why not use Mockito to provide a mock for your Spring Data JPA repository?

Unit tests should be atomic, lightweight, and fast, and should exercise the object under test as an isolated unit. Additionally, unit tests in Spring should not bring up a Spring Context. I have written about the different types of tests in my earlier Testing Software post.

I have already written a series of posts on JUnit and a post on Testing Spring MVC With Spring Boot 1.4: Part 1. In the latter, I discussed unit testing controllers in a Spring MVC application.

I feel the majority of your tests should be unit tests, not integration tests. If you’re writing your code following the SOLID Principles of OOP, your code is already well structured to accept Mockito mocks.

In this post, I’ll explain how to use Mockito to test the service layer of a Spring Boot MVC application. If Mockito is new for you, I suggest reading my Mocking in Unit Tests With Mockito post first.

Mockito Mocks vs Spies

In unit testing, a test double is a replacement for a dependent component (collaborator) of the object under test. The test double does not have to behave exactly like the collaborator. The purpose is to mimic the collaborator so that the object under test thinks it is actually using the collaborator.

Based on the role played during testing, there can be different types of test doubles. In this post we’re going to look at mocks and spies.

There are some other types of test doubles, such as dummy objects, fake objects, and stubs. If you’re using Spock, one of my favorite tricks is to coerce a map of closures into a test double. (One of the many fun things you can do with Groovy!)

What makes a mock object different from the others is that it has behavior verification. Which means the mock object verifies that it (the mock object) is being used correctly by the object under test. If the verification succeeds, you can conclude the object under test will correctly use the real collaborator.

Spies, on the other hand, provide a way to spy on a real object. With a spy, you can call all the real underlying methods of the object while still tracking every interaction, just as you would with a mock.

Things get a bit different for Mockito mocks vs spies. A Mockito mock allows us to stub a method call, which means we can stub a method to return a specific object. For example, we can mock a Spring Data JPA repository in a service class to stub a getProduct() method of the repository to return a Product object. To run the test, we don’t need the database to be up and running – a pure unit test.

A Mockito spy is a partial mock. We can mock a part of the object by stubbing a few methods, while real method invocations are used for the others. Put another way, calling a method on a spy invokes the actual method unless we explicitly stubbed it – hence the term partial mock.

Let’s look at mocks vs spies in action, with a Spring Boot MVC application.

The Application Under Test

Our application contains a single Product JPA entity. CRUD operations are performed on the entity by ProductRepository using a CrudRepository supplied by Spring Data JPA. If you look at the code, you will see all we did was extend the Spring Data JPA CrudRepository to create our ProductRepository. Under the hood, Spring Data JPA provides implementations to manage entities for most common operations, such as saving an entity, updating it, deleting it, or finding it by id.

The service layer is developed following the SOLID design principles. We used the “Code to an Interface” technique, while leveraging the benefits of dependency injection. We have a ProductService interface and a ProductServiceImpl implementation. It is this ProductServiceImpl class that we will unit test.

Here is the code of ProductServiceImpl .

ProductServiceImpl.java

In the ProductServiceImpl class, you can see that ProductRepository is @Autowired in. The repository is used to perform CRUD operations – making it a mock candidate for testing ProductServiceImpl.

Testing with Mockito Mocks

Coming to the testing part, let’s take up the getProductById() method of ProductServiceImpl. To unit test the functionality of this method, we need to mock the Product and ProductRepository collaborators. We can do it either by using Mockito’s mock() method or through the @Mock annotation. We will use the latter option since it is convenient when you have a lot of mocks to inject.

Once we declare a mock with the @Mock annotation, we also need to initialize it. Mock initialization happens before each test method. We have two options – using the JUnit test runner MockitoJUnitRunner, or calling MockitoAnnotations.initMocks(). Both are equivalent solutions.

Finally, you need to provide the mocks to the object under test. You can do it by calling the setProductRepository() method of ProductServiceImpl or by using the @InjectMocks annotation.

The following code creates the Mockito mocks, and sets them on the object under test.
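The original test setup isn’t reproduced in this copy; a minimal sketch of what it might look like, assuming JUnit 4 and the ProductServiceImpl described above, is:

```java
import org.junit.Before;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.MockitoAnnotations;

public class ProductServiceImplMockTest {

    // Mockito creates a mock implementation of the Spring Data JPA repository
    @Mock
    private ProductRepository productRepository;

    // The mock declared above is injected into the object under test
    @InjectMocks
    private ProductServiceImpl productService;

    @Before
    public void setUp() {
        // Initialize the annotated mocks before each test method
        MockitoAnnotations.initMocks(this);
    }
}
```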

Note: Since we are using the Spring Boot Test starter dependency, Mockito core is automatically pulled into our project. Therefore no extra dependency declaration is required in our Maven POM.

Once our mocks are ready, we can start stubbing methods on the mock. Stubbing means simulating the behavior of a mock object’s method. We can stub a method on the ProductRepository mock object by setting up an expectation on the method invocation.

For example, we can stub the findOne() method of the ProductRepository mock to return a Product when called. We then call the method whose functionality we want to test, followed by an assertion, like this.
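As a rough sketch – the exact signatures are assumptions; here getProductById() is taken to delegate to the repository’s findOne(), and Product is assumed to have an Integer id property – such a test might look like this, with when() statically imported from org.mockito.Mockito and assertEquals() from org.junit.Assert:

```java
@Test
public void shouldReturnProductById() {
    // Stub the repository: when findOne(1) is called, hand back a known Product
    Product product = new Product();
    product.setId(1);
    when(productRepository.findOne(1)).thenReturn(product);

    // Exercise the method under test and assert on the stubbed result -- no database needed
    Product retrievedProduct = productService.getProductById(1);
    assertEquals(Integer.valueOf(1), retrievedProduct.getId());
}
```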

This approach can be used to test the other methods of ProductServiceImpl, except deleteProduct(), which has a void return type.

To test deleteProduct(), we will stub it to do nothing, then call deleteProduct(), and finally assert that the delete() method of the repository has indeed been called.
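A hedged sketch of such a test (again assuming an Integer id parameter, with doNothing(), verify(), and times() statically imported from org.mockito.Mockito) could be:

```java
@Test
public void shouldDeleteProduct() {
    // delete() returns void, so doNothing() is the stubbing for the mock (it is also the default)
    doNothing().when(productRepository).delete(1);

    productService.deleteProduct(1);

    // Verify the service delegated the delete call to the repository exactly once
    verify(productRepository, times(1)).delete(1);
}
```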

Here is the complete test code for using Mockito mocks:

ProductServiceImplMockTest.java

Note: An alternative to doNothing() for stubbing a void method is to use doReturn(null).

Testing with Mockito Spies

We have tested our ProductServiceImpl with mocks. So why do we need spies at all? Actually, we don’t need one in this use case.

Outside Mockito, partial mocks have been around for a long time to allow mocking only part (a few methods) of an object. But partial mocks were considered a code smell, primarily because if you need to partially mock a class while ignoring the rest of its behavior, then that class is likely violating the Single Responsibility Principle – the code is doing more than one thing.

Until Mockito 1.8, Mockito spies were not real partial mocks. However, after many debates and discussions, and after finding a valid use case for partial mocks, support for them was added in Mockito 1.8.

You can also create partial mocks without a spy by using thenCallRealMethod(). It lets you stub a method on a mock so that the call is delegated to the underlying real method, like this.
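The snippet isn’t preserved in this copy, but the pattern is roughly this – a sketch only; actually executing the real method here would fail with a NullPointerException because the mock has no repository injected, which is exactly the caution raised next:

```java
// A partial mock without a spy: stub one method to delegate to the real implementation
ProductServiceImpl productService = mock(ProductServiceImpl.class);
when(productService.getProductById(1)).thenCallRealMethod();
```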

Be careful that the real implementation is ‘safe’ when using thenCallRealMethod(). The actual implementation needs to be able to run in the context of your test.

Another approach for partial mocking is to use a spy. As I mentioned earlier, all method calls on a spy are real calls to the underlying methods, unless stubbed. So, you can also use a Mockito spy to partially mock an object, stubbing only a few of its methods.
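As a small sketch (assuming ProductServiceImpl has a no-arg constructor), creating and partially stubbing a spy looks roughly like this:

```java
// The spy wraps a real instance; un-stubbed calls go to the real methods
ProductServiceImpl productService = spy(new ProductServiceImpl());

// Prefer the doReturn() family when stubbing a spy -- when() would invoke the real method while stubbing
Product product = new Product();
doReturn(product).when(productService).getProductById(1);
```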

Here is the code providing a Mockito spy for our ProductServiceImpl.

ProductServiceImplSpyTest.java

In this test class, notice that we used MockitoJUnitRunner instead of MockitoAnnotations.initMocks() to initialize our annotated mocks.

For the first test, we expected NullPointerException because the getProductById() call on the spy will invoke the actual getProductById() method of ProductServiceImpl, and our repository implementations are not created yet.

In the second test, we are not expecting any exception, as we are stubbing the save() method of ProductRepository.

The second and third methods are the relevant use cases of a spy in the context of our application – verifying method invocations.

Conclusion

In Spring Boot applications, by using Mockito, you replace the @Autowired components in the class you want to test with mock objects. In addition to unit testing the service layer, you will be unit testing controllers by injecting mock services. To unit test the DAO layer, you will mock the database APIs. The list is endless – it depends on the type of application you are working on and the object under test. If you’re following the Dependency Inversion Principle and using Dependency Injection, mocking becomes easy.

Use partial mocking to test 3rd party APIs and legacy code. You won’t require partial mocks for new, test-driven, and well-designed code that follows the Single Responsibility Principle. Another gotcha is that when() style stubbing can be problematic on spies, since the real method is invoked during stubbing – the doReturn() family of methods is the safer choice there. Also, given a choice between thenCallRealMethod on a mock and a spy, prefer the former as it is more lightweight. Using thenCallRealMethod on a mock does not create an actual object instance, but a bare-bones shell instance of the class to track interactions. If you use a spy, however, you create a real object instance. As regards spies, use one only if you want to modify the behavior of a small chunk of the API and otherwise rely mostly on actual method calls.

The code for this post is available for download here.


Container-based deployments are rapidly gaining popularity in the enterprise. One of the more popular container solutions is Docker.

Many view containers as virtual machines. They’re not. Well, kind of not. A container is a virtual walled environment for your application. It’s literally a ‘container’ inside the host OS. Thus your application works as if it is in its own self-contained environment, but it’s actually sharing the operating system resources of the host computer. Because of this, containers are more resource efficient than full-blown virtual machines. You get more bang for your buck running a bare metal machine with a bunch of containers than you do running the same machine with a bunch of VMs. This is why massive cloud computing companies running tens of thousands of servers are running containers. Google, Facebook, Netflix, and Amazon are all big advocates of containers.

Introducing Docker Containers

To help you visualize the difference, here are a couple of images provided by Docker. Here is the bloated architecture of a traditional virtual machine environment. A popular solution you can try is Oracle’s VirtualBox, which allows you to run a variety of operating systems on your personal machine. I personally use VMWare Fusion to run Windows on my MBP (and I still feel a little dirty every time I do). If you’ve never used either of these, I recommend you give them a try.

In this graphic, note how each stack has its own Guest OS.

what-is-docker-diagram

Now for comparison, here is the same stack containerized by Docker. Here you can see how each application does not get its own operating system. This is key to why Docker containers are so efficient. You’re not providing a virtual layer to mimic the hardware for the guest OS to use. And you’re not running n+1 guest operating systems either.

Docker What is a VM

Clearly this is more efficient computing. I’ve seen estimates in the 10–25% range of improved performance. But like with everything else when it comes to computing performance – your mileage may vary. I’d expect lightweight Linux VMs to be closer to the 10% side of the scale, and Windows VMs probably closer to the 25% end – just because the Windows OS is so bloated in comparison.

This leads me to an important distinction about Docker – it is Linux only. Yes, you can “run” Docker on Windows and OSX – but at this time, you can only do so by running a Linux VM in VirtualBox.

Running Spring Boot in a Docker Container

Introduction

When I first heard about running Spring Boot in a Docker container, I personally thought – “now why would you want to run a JVM in a VM, on a VM?” At first glance, it just seemed like an absolutely terrible idea from a performance standpoint. I doubt any of these solutions will ever match the performance of a JVM running on a bare metal installation of Linux. But, as I’ve shown above, running a Spring Boot application in a Docker container should have minimal performance impact. Certainly less of an impact than running in a VM, which is exactly what you’re doing when running applications with any cloud provider (see image one above).

Installing Docker

I’m not going to get into installing Docker on your OS. There is ample documentation about installing Docker on the internet. Going forward, I’m going to assume you have Docker installed. Since Docker is Linux based, my focus will be on Linux (RHEL / CentOS).

Spring Boot Example Application

For the purposes of this tutorial, let’s start with a simple Spring Boot Application. I’m going to use the completed application from my Mastering Thymeleaf course. This is a simple Spring Boot web application which is perfect for our needs.

If you want to follow this tutorial along step by step, head over to GitHub and checkout this Spring Boot project. Be sure to change to the branch “spring-boot-docker-start”.

Building a Spring Boot Docker Image

For us to run Spring Boot in a Docker container, we need to define a Docker image for it. Building Docker images is done through the use of Dockerfiles. A Dockerfile is basically a manifest of commands we will use to build and configure our Docker container. To configure our Docker image to run our Spring Boot application, we will want to:

  • Start with the latest CentOS image from Docker Hub.
  • Install and configure Oracle Java.
  • Install the Spring Boot artifact – our executable JAR file.
  • Run the Spring Boot application.

I’m using CentOS for its compatibility with RHEL, which is probably the most popular Linux distribution used by enterprises. And Oracle’s Java, mainly for the same reason.

Create Our Dockerfile

In our Maven project, we need to create our Dockerfile. In /src/main/docker, create the file Dockerfile.

NOTE: As a Java developer you may be tempted to create the file as DockerFile. Don’t do this. The Maven Plugin we cover later won’t see your file if it’s CamelCase. I learned this lesson the hard way.

CentOS

We’ll start our Docker image off by using the CentOS image from Docker hub.

Dockerfile

Installing Oracle Java

The following lines in our dockerfile will install wget into the image using the yum package installer, download the Oracle Java JDK from Oracle using wget, then configure Java on the machine.

Dockerfile

Installing the Spring Boot Executable Jar

In this section of the Dockerfile, we are:

  • Adding a /tmp volume. Docker will map this to /var/lib/docker on the host system. This is the directory Spring Boot will configure Tomcat to use as its working directory.
  • The ADD command adds the Spring Boot executable JAR into our Docker image.
  • The RUN command ‘touches’ the JAR to give it a modified date.
  • The ENTRYPOINT is what will run the JAR file when the container is started.

I learned about these configuration settings from a post from the Pivotal team here.

Dockerfile

Complete Dockerfile

Here is the complete Dockerfile.

Dockerfile
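The original file isn’t reproduced in this copy. Pulling the pieces above together, a sketch of such a Dockerfile might look roughly like the following – the JDK download URL, the JAR file name, and the JVM flags are placeholders you would adjust for your own build:

```dockerfile
# Start from the official CentOS image on Docker Hub
FROM centos

# Install wget, then download and install the Oracle JDK (URL is a placeholder)
RUN yum install -y wget
RUN wget --no-cookies --no-check-certificate \
        --header "Cookie: oraclelicense=accept-securebackup-cookie" \
        "http://download.oracle.com/otn-pub/java/jdk/8u92-b14/jdk-8u92-linux-x64.rpm" \
        -O /tmp/jdk-8-linux-x64.rpm
RUN yum localinstall -y /tmp/jdk-8-linux-x64.rpm

# Spring Boot configures Tomcat to use /tmp as its working directory
VOLUME /tmp

# Copy the executable JAR built by Maven into the image (name is a placeholder)
ADD maven/springbootwebapp-0.0.1-SNAPSHOT.jar app.jar

# 'touch' the JAR so it has a modification date
RUN sh -c 'touch /app.jar'

# Run the Spring Boot application when the container starts
ENTRYPOINT ["java", "-Djava.security.egd=file:/dev/./urandom", "-jar", "/app.jar"]
```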

Building the Docker Image Using Maven

Naturally, we could build our Docker image using docker itself. But this is not a typical use case for Spring developers. A typical use case for us would be to use Jenkins to generate the Docker image as part of a CI build. For this use case, we can use Maven to package the Spring Boot executable JAR, then have that build artifact copied into the Docker image.

There are actually several competing Maven plugins for Docker support. The guys at Spotify have a nice Maven / Docker plugin. In this example, I’m going to show you how to use the Fabric8 Docker plugin for Maven.

Fabric8

Of the Maven plugins for Docker, at the time of writing, Fabric8 seems to be the most robust. For this post, I’m only interested in building a Docker Image for our Spring Boot artifact. This is just scratching the surface of the capabilities of the Fabric8 Maven plugin. This plugin can be used to spool up Docker Images to use for your integration tests for CI builds. How cool is that!?!? But let’s learn to walk before we run!

Here is a typical configuration for the Fabric8 Maven plugin for Docker.

Fabric8 Maven Docker Plugin Configuration
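The configuration itself isn’t preserved in this copy. As a rough, unverified sketch – the element names, image name, and plugin version are assumptions, so check them against the Fabric8 docker-maven-plugin documentation for the version you use – it might look something like this:

```xml
<plugin>
    <groupId>io.fabric8</groupId>
    <artifactId>docker-maven-plugin</artifactId>
    <version>0.15.9</version>
    <configuration>
        <images>
            <image>
                <!-- Name of the image to build (placeholder) -->
                <name>springframeworkguru/springbootwebapp</name>
                <build>
                    <!-- Directory containing our Dockerfile -->
                    <dockerFileDir>${project.basedir}/src/main/docker</dockerFileDir>
                    <assembly>
                        <!-- Copies the project artifact (the Spring Boot JAR) into the Docker build context -->
                        <descriptorRef>artifact</descriptorRef>
                    </assembly>
                </build>
            </image>
        </images>
    </configuration>
</plugin>
```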

If you’re following along with the tutorial, the complete Maven POM now is:

pom.xml

Building the Docker Image

To build the Docker image with our Spring Boot artifact, run this command:
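The command itself wasn’t preserved in this copy; from the description that follows it is presumably the Maven invocation below, with docker:build being the goal supplied by the Fabric8 plugin:

```
mvn clean package docker:build
```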

The ‘clean’ tells Maven to delete the target directory. While this step is technically optional, if you don’t use it, sooner or later you’re going to get bit in the ass by some weird issue. Maven will always compile your classes with the package command. If you’ve done some refactoring and changed class names or packages, without the ‘clean’ the old class files are left on the disk. And in the words of IBM – “Unpredictable results may occur”.

It is very important to run the package command with the docker:build command. You’ll encounter errors if you try to run these in two separate steps.

While the Docker image is building, you will see the following output in the console:

Docker images are built in layers. The CentOS image from Docker Hub is our first layer. Each command in our Dockerfile is another ‘layer’. Docker works by ‘caching’ these layers locally. I think of it as being somewhat like your local Maven repository under ~/.m2, where Maven will bring down Java artifacts once and then cache them for future use.

The first time you build this Docker image, it will take longer since all the layers are being downloaded or built. The next time we build it, the only layers that change are the one which adds the new Spring Boot artifact and all commands after it. The layers before the Spring Boot artifact aren’t changing, so the cached versions will be used in the Docker build.

Running the Spring Boot Docker Image

Docker Run Command

So far, we have not said anything about port mapping. This is actually done at run time. When we start the Docker container with the run command, we will tell Docker how to map the ports. In our example, we want to map port 8080 of the host machine to port 8080 of the container. This is done with the ‘-p’ parameter, followed by <host port>:<container port>. We also want to use the ‘-d’ parameter, which tells Docker to start the container in the background.

Here is the complete command to run our docker container:
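Putting the -d and -p flags together, the command is presumably something like this – substitute whatever image name you configured in the Maven plugin:

```
docker run -d -p 8080:8080 springframeworkguru/springbootwebapp
```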

This command will start the Docker container and echo the id of the started container.

Congratulations, your Spring Boot application is up and running!

You should now be able to access the application on port 8080 of your machine.

Working with Running Docker Containers

Viewing Running Docker Containers

To see all the containers running on your machine, use the following command:
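That is the standard Docker listing command:

```
docker ps
```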

View Log Output

Our running Docker containers are far from little black boxes. There’s a lot we can do with them. One common thing we want to do is see the log output. Easy enough. Use this command:
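That is the standard log command, using the container id echoed when the container started:

```
docker logs <container id>
```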

Access a Running Docker Container

Need to SSH into a Docker container? Okay, technically this really isn’t SSH, but this command will give you a bash shell inside the container:
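That is typically done with docker exec:

```
docker exec -it <container id> bash
```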

Stopping the Docker Container

Shutting down our Docker container is easy. Just run this command:
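That is the standard stop command:

```
docker stop <container id>
```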

Ending Source Code

Just in case you’ve run into trouble, like always, I have a branch in GitHub with the complete working example. You can get the ending source code for this tutorial here on GitHub.

Conclusion

The default executable JAR artifact of Spring Boot is ideal for deploying Spring Boot applications in Docker. As I’ve shown here, launching a Spring Boot application in a Docker container is easy to do.

In terms of technology, Docker is still fairly young. At the time of writing, Docker is only about three years old. Yet it is rapidly catching on. While Docker is widely used by the web giants, it is just starting to trickle down to Fortune 500 enterprises. At the time of writing, Docker is not available natively on OSX or Windows. Yet. Microsoft has committed to releasing a native version of Docker for Windows, which is interesting. There are a lot of things going on around Docker at Red Hat and Pivotal too.

Docker is a fundamental paradigm shift in the way we do things as Spring developers. I assure you, if you’re developing applications in the enterprise using the Spring Framework and have not used Docker, it’s not a matter of if, it’s a matter of when.

As a developer, Docker brings up some very cool opportunities. Need a Mongo database to work against? No problem, spool up a local Docker container. Need a virtual environment for your Jenkins CI builds? No problem.

I personally have only been working with Docker a short time. I am honestly excited about it. My thoughts on Docker – Now we’re cooking with gas!


It’s not uncommon for computers to need to communicate with each other. In the early days, this was done with simple string messages, which was problematic because there was no standard language. XML evolved to address this. XML provides a very structured way of sharing data between systems. XML is so structured that many find it too restrictive. JSON is a popular alternative to XML. JSON offers a lighter and more forgiving syntax than XML.

JSON is a text-based data interchange format that is lightweight, language independent, and easy for humans to read and write. In the current enterprise, JSON is used for enterprise messaging, communicating with RESTful web services, and AJAX-based communications. JSON is also extensively used by NoSQL databases such as MongoDB, Oracle NoSQL Database, and Oracle Berkeley DB to store records as JSON documents. Traditional relational databases, such as PostgreSQL, are also constantly gaining more JSON capabilities. Oracle Database likewise supports JSON data natively with features such as transactions, indexing, declarative querying, and views.

In Java development, you will often need to read in JSON data, or provide JSON data as an output. You could of course do this on your own, or use an open source implementation. For Java developers, there are several options to choose from. Jackson is a very popular choice for processing JSON data in Java.

Maven Dependencies for Jackson

The Jackson library is composed of three components: Jackson Databind, Core, and Annotation. Jackson Databind has internal dependencies on Jackson Core and Annotation. Therefore, adding Jackson Databind to your Maven POM dependency list will include the other dependencies as well.
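For reference, the single dependency to declare would look like this (the version shown is illustrative):

```xml
<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
    <version>2.8.1</version>
</dependency>
```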

Spring Boot and Jackson

The above dependency declaration will work for other Java projects, but in a Spring Boot application, you may encounter errors such as this.

Jackson Dependency Conflict Error - NoSuchMethod Error

The Spring Boot parent POM includes Jackson dependencies. When you include the version number, thus overriding the Spring Boot curated dependency versions, you may encounter version conflicts.

The proper way to declare the Jackson dependency is to use the Spring Boot curated dependency by not including the version tag on the main Jackson library, like this.
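In other words, the declaration drops the version tag entirely and lets Spring Boot supply it:

```xml
<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
</dependency>
```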

NOTE: This problem is highly dependent on the version of Spring Boot you are using.

For more details on this issue, check out my post Jackson Dependency Issue in Spring Boot with Maven Build.

Reading JSON – Data Binding in Jackson

Data binding is a JSON processing model that allows for seamless conversion between JSON data and Java objects. With data binding, you create POJOs following JavaBeans convention with properties corresponding to the JSON data. The Jackson ObjectMapper is responsible for mapping the JSON data to the POJOs. To understand how the mapping happens, let’s create a JSON file representing data of an employee.

employee.json

The preceding JSON is composed of several JSON objects with name-value pairs and a phoneNumbers array. Based on the JSON data, we’ll create two POJOs: Address and Employee. The Employee object will be composed of Address and will contain properties with getter and setter methods corresponding to the JSON constructs.

When Jackson maps JSON to POJOs, it inspects the setter methods. Jackson by default maps a key of the JSON field to the setter method name. For example, Jackson will map the name JSON field to the setName() setter method in a POJO.

With these rules in mind, let’s write the POJOs.

Address.java

Employee.java

With the POJOs ready to be populated with JSON data, let’s use Jackson’s ObjectMapper to perform the binding.

ObjectMapperDemo.java

In the ObjectMapperDemo class above, we created an ObjectMapper object and called its overloaded readValue() method, passing two parameters: a File object representing the JSON file as the first parameter, and Employee.class as the target type to map the JSON values to as the second parameter. The readValue() method returns an Employee object populated with the data read from the JSON file.
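The class itself isn’t reproduced in this copy; the essential data binding step is sketched below, with an illustrative file location and method name:

```java
import java.io.File;
import java.io.IOException;

import com.fasterxml.jackson.databind.ObjectMapper;

public class ObjectMapperDemo {

    public Employee readJsonWithObjectMapper() throws IOException {
        ObjectMapper objectMapper = new ObjectMapper();
        // Bind the JSON document to the Employee POJO
        return objectMapper.readValue(new File("src/main/resources/employee.json"), Employee.class);
    }
}
```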

The test class for ObjectMapperDemo is this.

ObjectMapperDemoTest.java

The output on running the test is this.

Output of ObjectMapperDemo

Simple Data Binding in Jackson

In the example above, we covered full data binding – a variant of Jackson data binding that reads JSON into application-specific JavaBeans types. The other type is simple data binding, where you read JSON into built-in Java types, such as Map and List, and also wrapper types, such as String, Boolean, and Number.

An example of simple data binding is binding the data of employee.json to a generic Map, like this.

ObjectMapperToMapDemo.java

In the ObjectMapperToMapDemo class above, notice the overloaded readValue() method where we used a FileInputStream to read employee.json. Other overloaded versions of this method allow you to read JSON from String, Reader, URL, and byte array. Once ObjectMapper maps the JSON data to the declared Map, we iterated over and logged the map entries.
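The key lines, sketched with an illustrative file path (the imports needed are java.io.FileInputStream, java.util.Map, and com.fasterxml.jackson.databind.ObjectMapper), are roughly:

```java
ObjectMapper objectMapper = new ObjectMapper();

// Bind the JSON to a general-purpose Map instead of an application-specific POJO
Map<String, Object> employeeMap = objectMapper.readValue(
        new FileInputStream("src/main/resources/employee.json"), Map.class);

// Iterate over and log the map entries
for (Map.Entry<String, Object> entry : employeeMap.entrySet()) {
    System.out.println(entry.getKey() + " = " + entry.getValue());
}
```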

The test class for the ObjectMapperToMapDemo class is this.

ObjectMapperToMapDemoTest.java

The output on running the test is this.
Output of ObjectMapperToMapDemo

With simple data binding, we don’t need to write JavaBeans with properties corresponding to the JSON data. This is particularly useful in situations where we don’t know the structure of the JSON data we need to process. In such situations, another approach is to use the JSON Tree Model that I will discuss next.

Reading JSON into a Tree Model

In the JSON Tree Model, the ObjectMapper constructs a hierarchical tree of nodes from JSON data. If you are familiar with XML processing, you can relate the JSON Tree Model with XML DOM Model. In the JSON Tree Model, each node in the tree is of the JsonNode type, and represents a piece of JSON data. In the Tree Model, you can randomly access nodes with the different methods that JsonNode provides.

The code to generate a Tree Model of the employee.json file is this.

In the constructor of the JsonNodeDemo class above, we created an ObjectMapper instance and called its readTree() method passing a File object representing the JSON document as parameter. The readTree() method returns a JsonNode object that represents the hierarchical tree of employee.json. In the readJsonWithJsonNode() method, we used the ObjectMapper to write the hierarchical tree to a string using the default pretty printer for indentation.

The output on running the code is this.

Next, let’s access the value of the name node with this code.

In the code above, we called the path() method on the JsonNode object that represents the root node. To the path() method, we passed the name of the node to access, which in this example is name. We then called the asText() method on the JsonNode object that the path() method returns. The asText() method that we called returns the value of the name node as a string.

The output of this code is:

Next, let’s access the personalInformation and phoneNumbers nodes.

A few key things to note in the code above. In Line 4, notice that we called the get() method instead of path() on the root node. Both methods perform the same function – they return the specified node as a JsonNode object. The difference is how they behave when the specified node is not present or the node doesn’t have an associated value. When the node is not present or does not have a value, the get() method returns null, while the path() method returns a JsonNode object that represents a “missing node”. The “missing node” returns true for a call to the isMissingNode() method. The remaining code from Line 5 to Line 9 is simple data binding, where we mapped the personalInformation node to a Map<String, String> object.

In the readPhoneNumbers() method, we accessed the phoneNumbers node. Note that in employee.json, phoneNumbers is represented as a JSON array (Enclosed within [] brackets). After mapping, we accessed the array elements with a call to the elements() method in Line 15. The elements() method returns an Iterator of JsonNode that we traversed and logged the values.
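Condensing the tree model operations described above into one sketch (field names follow the employee.json described earlier; the file path is illustrative; the imports needed are com.fasterxml.jackson.databind.ObjectMapper, com.fasterxml.jackson.databind.JsonNode, java.io.File, and java.util.Iterator):

```java
ObjectMapper objectMapper = new ObjectMapper();
JsonNode rootNode = objectMapper.readTree(new File("src/main/resources/employee.json"));

// path() returns a "missing node" rather than null when the field is absent
String name = rootNode.path("name").asText();
System.out.println(name);

// get() returns null for an absent field; phoneNumbers is expected to be a JSON array
JsonNode phoneNumbersNode = rootNode.get("phoneNumbers");
Iterator<JsonNode> elements = phoneNumbersNode.elements();
while (elements.hasNext()) {
    System.out.println(elements.next().asText());
}
```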

The output on running the code is this.

The complete code of generating the JSON tree model and accessing its nodes is this.

JsonNodeDemo.java

The test class for the JsonNodeDemo class above is this.

JsonNodeDemoTest.java

Writing JSON using Jackson

JSON data binding is not only about reading JSON into Java objects. With the ObjectMapper of JSON data binding, you can also write the state of Java objects to a JSON string or a JSON file.

Let’s write a class that uses ObjectMapper to write an Employee object to a JSON string and a JSON file.

JsonWriterObjectMapper.java

In Line 22 of the code above, we used an ObjectMapper object to write an Employee object to a JSON string using the default pretty printer for indentation. In Line 24, we called the configure() method to configure ObjectMapper to indent the JSON output. In Line 25, we called the overloaded writeValue() method to write the Employee object to the file provided as the first parameter. The other overloaded writeValue() methods allow you to write JSON output using OutputStream and Writer.
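The essential calls, sketched with an assumed employee instance and an illustrative output path (imports: com.fasterxml.jackson.databind.ObjectMapper, com.fasterxml.jackson.databind.SerializationFeature, java.io.File), are:

```java
ObjectMapper objectMapper = new ObjectMapper();

// Ask Jackson to indent the JSON output
objectMapper.configure(SerializationFeature.INDENT_OUTPUT, true);

// Write the Employee object to a JSON string using the default pretty printer
String json = objectMapper.writerWithDefaultPrettyPrinter().writeValueAsString(employee);
System.out.println(json);

// Write the same object to a JSON file
objectMapper.writeValue(new File("target/employee.json"), employee);
```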

The test code for the JsonWriterObjectMapper class is this.

JsonWriterObjectMapperTest.java

In the test class above, we used the JUnit @Before annotation on the setUpEmployee() method to initialize the Address and Employee classes. If you are new to JUnit, check out my series on JUnit starting from here. In the @Test annotated method, we called the writeEmployeeToJson() method of JsonWriterObjectMapper, passing the initialized Employee object.

The output on running the test is this.
Output of JsonWriterObjectMapper

Spring Support for Jackson

Spring support for Jackson has been improved lately to be more flexible and powerful. If you are developing RESTful web services with Spring, or clients using the Spring RestTemplate API, you can utilize the Spring Jackson JSON integration to send back and consume JSON responses. In addition, Spring MVC now has built-in support for Jackson’s Serialization Views. Jackson also provides first-class support for data formats other than JSON – the Spring Framework and Spring Boot provide built-in support for Jackson-based XML. In future posts, I will discuss more advanced JSON processing with Jackson – particularly the Jackson Streaming Model for JSON, and also Jackson-based XML processing.

Conclusion

Jackson is one of the several available libraries for processing JSON. Some others are Boon, GSON, and Java API for JSON Processing.

One advantage that Jackson has over other libraries is its maturity. Jackson has evolved enough to become the preferred JSON processing library of some major web services frameworks, such as Jersey, RESTEasy, Restlet, and Apache Wink. Open source enterprise projects, such as Hadoop and Camel also use Jackson for handling data definition in enterprise integration.


Recently while working with Jackson within a Spring Boot project, I encountered an issue I’d like to share with you.

Jackson is currently the leading option for parsing JSON in Java. The Jackson library is composed of three components: Jackson Databind, Core, and Annotation. Jackson Databind has internal dependencies on Jackson Core and Annotation. Therefore, adding Jackson Databind to your Maven POM dependency list will include the other dependencies as well. To use the latest Jackson library, you need to add the following dependency in the Maven POM.

The above dependency works well in other Java projects, but unfortunately in a Spring Boot 1.3.x application, you may stumble upon this error.

Jackson Dependency Conflict Error in Spring Boot

You may see several different errors. Here are some additional examples.

This error occurs due to Jackson dependency conflict. We are working on a Spring Boot project and it’s inheriting from the Spring Boot parent POM that includes Jackson. Without any Jackson dependency in the project POM, let’s print the Maven dependency tree to view the in-built Jackson dependencies.

The output is this.

As you can see above, the Spring Boot parent POM uses an older version of Jackson (2.6.5).

Now, if we add the Jackson dependency to our Maven POM using the version like this:

Maven will pull in older versions of jackson-annotations and jackson-core, overriding the newer ones. We can see this by running the dependency:tree command again.

I did not expect Maven to behave this way in dependency resolution. The POM for the primary Jackson artifact does call for the proper version. However, this seems to be getting overridden by the versions specified explicitly in the Spring Boot parent POM.

Ideally, when working with Spring Boot, you should leverage the curated dependencies in the Spring Boot parent POM. In this case, we drop the version from the Jackson dependency so it is inherited from the Spring Boot parent POM.

Now, the version will be inherited from the parent POM, and the issue will be resolved.

But what if we want to use a newer version of Jackson? The proper way is to exclude the inherent dependencies, and explicitly add their new versions, like this.
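The exact POM changes aren’t reproduced here. One way to do this, sketched below with placeholder version numbers, is to exclude the managed jackson-databind from the web starter and then declare all three Jackson components explicitly so their versions stay in sync:

```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <exclusions>
        <exclusion>
            <groupId>com.fasterxml.jackson.core</groupId>
            <artifactId>jackson-databind</artifactId>
        </exclusion>
    </exclusions>
</dependency>

<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
    <version>2.8.1</version>
</dependency>
<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-core</artifactId>
    <version>2.8.1</version>
</dependency>
<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-annotations</artifactId>
    <version>2.8.1</version>
</dependency>
```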

This POM configuration will override the Jackson dependencies set in the Spring Boot parent POM.

This post is specific to Spring Boot version 1.3.3. Naturally, the Spring Boot team will be evolving the version of Jackson used in future releases.

For developers accustomed to managing Maven dependencies themselves, it’s a very easy mistake to include the version in the dependency declaration. This can cause some unintended issues due to version conflicts. It is a bit of a paradigm shift for experienced developers to depend on the Spring Boot parent POM. Some won’t want to relinquish control, but in the long run I expect you will be better off leveraging the Spring Boot curated dependencies.


When it comes to logging in enterprise applications, Logback makes an excellent choice – it’s simple and fast, has powerful configuration options, and comes with a small memory footprint. I introduced Logback in my introductory post, Logback Introduction: An Enterprise Logging Framework. In a series of posts on Logback, I’ve also discussed how to configure Logback using XML and Groovy and how to use Logback in Spring Boot applications. YAML is just one option you can use for Spring Boot configuration.

In my earlier post on Using Logback with Spring Boot, I used a properties file to configure logback. In this post, I’ll discuss how to configure Logback using Spring Boot’s YAML configuration file. If you’re a seasoned user of the Spring Framework, you’ll find YAML a relatively new configuration option available to you when using Spring Boot.

Creating a Logger

We’ll use a simple Spring Boot web application and configure Logback with YAML in that application. Please refer to my previous post, where I wrote about creating a web application using Spring Boot. This post expands upon concepts from the previous post, but is focused on the use of YAML configuration with Spring Boot.

The application from the previous post contains a controller, IndexController to which we’ll add logging code, like this.

IndexController.java

Since Logback is the default logger under Spring Boot, you do not need to include any additional dependencies for Logback or SLF4J.

Run the SpringBootWebApplication main class. When the application starts, access it from your browser with the URL, http://localhost:8080

The logging output on the IntelliJ console is this.

Default Logging Output

In the output above, the logging messages from IndexController are sent to the console by the Logback root logger. Notice that the debug message of IndexController is not getting logged. Logback by default will log debug level messages. However, the Spring Boot team provides us a default configuration for Logback in the Spring Boot default Logback configuration file, base.xml. In addition, Spring Boot provides two preconfigured appenders through the console-appender.xml and file-appender.xml files. The base.xml file references both of them.

The code of the base.xml file from the spring-boot github repo is this.

Here you can see that Spring Boot has overridden the default logging level of logback by setting the root logger to INFO, which is the reason we did not see the debug messages in the example above. As we’ll see in the next section, changing log levels in Spring Boot is very simple.

YAML Configuration via Spring Boot’s application.yml File

In a Spring Boot application, you can externalize configuration to work with the same application code in different environments. The application.yml file is one of the many ways to externalize configuration. Let’s use it to externalize logging configuration.

If you wish to use YAML for your Spring configuration, you simply need to create a YAML file. Spring Boot will look for an application.yml file on the classpath. In the default structure of a Spring Boot web application, you can place the file under the resources directory. To parse YAML files, you need a YAML parser. Out of the box, Spring Boot uses SnakeYAML, a YAML parser. There is nothing you need to do to enable YAML support in Spring Boot. By default under Spring Boot, YAML is ready to go.

Here is an example of an application.yml file with basic configurations of logging levels.
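The original example isn’t reproduced in this copy; a sketch consistent with the description below might be:

```yaml
logging:
  level:
    org.springframework: debug
    org.hibernate: debug
    guru.springframework.controllers: debug
```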

In the configuration code above, we set the log levels of the Spring framework, any application logger in the guru.springframework.controllers package and its sub-packages, and Hibernate to DEBUG. Although we are not using Hibernate in our example application, I have added the Hibernate logging configuration for demonstration purposes so you can see how to configure logging for various Java packages.

When you run the application, you’ll notice DEBUG messages from the Spring framework’s startup on the console. When you access the application, notice the log messages of the IndexController now include the debug message.

Logging Level Configuration in YAML with Spring Boot
At this point, log messages are only being sent to the console. You can configure Spring Boot to additionally log messages to log files. You can also set the patterns of log messages both for console and file separately, like this.
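A sketch of such a configuration (the patterns and the file name are illustrative) is:

```yaml
logging:
  file: logs/spring-boot-yaml.log
  pattern:
    console: "%d{yyyy-MM-dd HH:mm:ss} %-5level %logger{36} - %msg%n"
    file: "%d{yyyy-MM-dd HH:mm:ss} [%thread] %-5level %logger{36} - %msg%n"
  level:
    guru.springframework.controllers: debug
```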

With the updated YAML configuration, here is an example of the logging output.

Logging to File and Console with updated YAML configuration

Spring Active Profile Properties in YAML

Spring Profiles are commonly used to configure Spring for different deployment environments. For example, while developing on your local machine, it is common to set the log level to DEBUG. This will give you detailed log messages for your development use. While on production, it’s typical to set the log level to WARN or above. This is to avoid filling your logs with excessive debug information and incurring the overhead of excessive logging.

You can segregate a YAML configuration into separate profiles with a spring.profiles key for each profile. Then, add the required logging configuration code to each profile and ensure that the profile sections are separated by --- lines. In the same file, you can use the spring.profiles.active key to set the active profile. However, this is not mandatory. You can also set the active profile programmatically, or by passing it as a system property or JVM argument while running the application.

The complete application.yml file with logging configuration based on Spring profiles is this.
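The file itself isn’t reproduced in this copy; a sketch that matches the description below (the log levels and file name are illustrative) is:

```yaml
spring:
  profiles:
    active: dev
---
spring:
  profiles: dev
logging:
  level:
    guru.springframework.controllers: debug
---
spring:
  profiles: production
logging:
  file: logs/spring-boot-production.log
  level:
    guru.springframework.controllers: warn
```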

In the configuration code above, we defined two profiles: dev and production with different logging configurations. We also set the active profile to dev.

When you run and access the application, the logging configuration of the dev profile will be used and the logging outputs will be similar to this.
Logging Output of dev Profile

Now let’s make production the active profile by passing the -Dspring.profiles.active=production JVM argument.

In IntelliJ, select Run-> Edit Configurations, and set the JVM argument in the Run/Debug Configurations dialog box that appears, like this.
Run Debug Configurations Dialog Box

The logging output on accessing the application with production as the active profile is this.

Logging Output of production Profile

Separating Profiles in YAML Configuration Files

A Spring Boot configuration file is not limited to logging configurations only. Typically, several different types of configurations go into the different profiles of an enterprise application. Configurations can include bean registrations, database connection settings, SMTP settings, etc., spread across development, testing, staging, production, and other profiles.

It’s both tedious and error prone to maintain a single file with multiple profiles, with each profile containing different types of configuration settings. Remember, much more time is spent reading code and configuration files than is spent writing them. At some point in the future, you, or someone else, will be reading or updating the configuration files. And for monolithic configuration files with low readability, the chances of errors creeping in are high. Spring addresses such challenges by allowing separate configuration files – one for each profile. With separate configuration files, you improve the long-term maintainability of your application.

Each such configuration file must follow the application-<profile>.yml naming convention. For example, for the dev and production profiles, you need the application-dev.yml and application-production.yml files on the classpath. You should also add an application-default.yml file containing default configurations. When no active profile is set, Spring Boot falls back to the default configurations in application-default.yml.

It’s important to note that if you have an application.yml file (with no suffix) on your classpath, it will always be included by Spring, regardless of which profiles are or are not active.

The project structure of the Spring Boot web application with different profile-specific configuration files is this.
Profile-specific YAML Configuration Files for Spring Boot

Following is the code for each of the configuration files.

application-default.yml

application-dev.yml

application-production.yml

Test the application by first starting it without any profile, then with the dev profile, and finally the production profile. Ensure that the expected configurations are being used for the different environments.

Conclusion

The YAML configuration file in Spring Boot provides a very convenient syntax for storing logging configurations in a hierarchical format. Like Properties configuration, YAML configuration cannot handle some advanced features, such as different types of appender configurations, or encoder and layout configurations.

Functionally, YAML is nearly the same as using a traditional properties file. Personally, I find YAML fun to write in. It feels more expressive than the old school properties files and it has a nice clean syntax. Often you don’t need many of the more advanced logging features of logback. So you’re fine using the simplicity of the YAML file configuration. For advanced logging configurations using XML and Groovy, explore my earlier posts on them available here, and here.

I have encountered one issue with using YAML files for Spring Boot configuration. When setting up a JUnit test outside of Spring Boot, it was problematic to read the YAML properties file with just Spring. Remember, YAML support is specific to Spring Boot. I suspect at some point it will get included in the core functionality of Spring (if it has not been already).


Logback makes an excellent logging framework for enterprise applications – it’s fast, has simple but powerful configuration options, and comes with a small memory footprint. I introduced Logback in my introductory post, Logback Introduction: An Enterprise Logging Framework. In a series of posts on Logback, I’ve also discussed how to configure Logback using XML and Groovy. The posts are available as Logback Configuration: using XML and Logback Configuration: using Groovy.

In this post, I’ll discuss how to use Logback with Spring Boot. While there are a number of logging options for Java, the Spring Boot team chose Logback as the default logger. Like many things in Spring Boot, Logback by default gets configured with sensible defaults. Out of the box, Spring Boot makes Logback easy to use.

Creating Loggers

In a previous post, I wrote about creating a web application using Spring Boot. We’ll configure logback for this application. The application contains a controller, IndexController to which we’ll add logging code. The code of IndexController is this.

Let’s add a SpringLoggingHelper class with logging code to the application. Although this class doesn’t do anything except emit logging statements, it will help us understand configuring logging across different packages. Here is the code of SpringLoggingHelper:

In both of the classes above, we wrote logging code against the SLF4J API. SLF4J is a façade for commonly used logging frameworks, such as Java Util Logging, Log4J 2, and Logback. By writing against SLF4J, our code remains decoupled from Logback, giving us the flexibility to plug in a different logging framework if required later.

If you are wondering about the SLF4J and Logback dependencies, you don’t need to specify any. Spring Boot contains them too. Assuming you’re using Maven or Gradle to manage your Spring Boot project, the necessary dependencies are part of the dependencies under Spring Boot.

Run the SpringBootWebApplication main class. When the application starts, access it from your browser with the URL, http://localhost:8080

The logging output on the IntelliJ console is this.

Logging Output with Default Configuration in Spring Boot

We haven’t written any configuration for Logback. The output of both the IndexController and SpringLoggingHelper classes is from the Logback root logger. Notice that the debug messages are not getting logged. Logback by default will log debug level messages. However, the Spring Boot team provides us a default configuration for Logback in the Spring Boot default Logback configuration file, base.xml. In addition, Spring Boot provides two preconfigured appenders through the console-appender.xml and file-appender.xml files. The base.xml file references both of them.

Here is the code of the base.xml file from the spring-boot github repo.

Here you can see that Spring Boot has overridden the default logging level of Logback by setting the root logger to INFO, which is the reason we did not see the debug messages in the example above. As we’ll see in the next section, changing log levels in Spring Boot is very simple.


Configuration via Spring Boot’s application.properties File

In a Spring Boot application, you can externalize configuration to work with the same application code in different environments. The application.properties file is likely the most popular of several different ways to externalize Spring Boot configuration properties. In the default structure of a Spring Boot web application, you can locate the application.properties file under the Resources folder. In the application.properties file, you can define log levels of Spring Boot, application loggers, Hibernate, Thymeleaf, and more. You can also define a log file to write log messages to in addition to the console.

Here is an example of an application.properties file with logging configurations.
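The example isn’t reproduced in this copy; a sketch consistent with the output described below would be:

```properties
# DEBUG for our controllers, a quieter Spring framework, plus a log file
logging.level.org.springframework.web=INFO
logging.level.guru.springframework.controllers=DEBUG
logging.file=logs/spring-boot-logging.log
```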

Note: There is also a logging.path property to specify a path for a logging file. If you use it, Spring Boot creates a spring.log file in the specified path. However, you cannot specify both the logging.file and logging.path properties together. If done, Spring Boot will ignore both.

When you run the main class now and access the application, log messages from IndexController and SpringLoggingHelper are logged to the console and the logs/spring-boot-logging.log file.
Spring Boot Logging Output with application.properties

In the output, notice that debug and higher level messages of IndexController got logged to the console and file. This was because in the application.properties file, we specified DEBUG as the log level for the guru.springframework.controllers package that IndexController is part of. Since we did not explicitly configure the SpringLoggingHelper class, the default configuration of base.xml file was used. Therefore, only INFO and higher level messages of SpringLoggingHelper got logged.

You can see how simple this is to use when you need to get more detailed log messages for a specific class or package.

Logback Configuration through an External File

Logback configuration through application.properties file will be sufficient for many Spring Boot applications. However, large enterprise applications are likely to have far more complex logging requirements. As I mentioned earlier, Logback supports advanced logging configurations through XML and Groovy configuration files.

In a Spring Boot application, you can specify a Logback XML configuration file as logback.xml or logback-spring.xml in the project classpath. The Spring Boot team, however, recommends using the -spring variant for your logging configuration: logback-spring.xml is preferred over logback.xml. If you use the standard logback.xml configuration, Spring Boot may not be able to completely control log initialization.

Here is the code of the logback-spring.xml file.
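The file isn’t reproduced in this copy; a minimal sketch consistent with the description below (the logger element is illustrative; note the include on Line 3) is:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <include resource="org/springframework/boot/logging/logback/base.xml"/>
    <!-- No appenders defined here; we rely on Spring Boot's CONSOLE and FILE appenders -->
    <logger name="guru.springframework.controllers" level="DEBUG" additivity="false">
        <appender-ref ref="CONSOLE"/>
        <appender-ref ref="FILE"/>
    </logger>
</configuration>
```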

In the configuration code above, we included the base.xml file in Line 3. Notice that we didn’t configure any appenders; rather, we relied on the CONSOLE and FILE appenders provided by Spring Boot.

With the updated Spring Boot Logback configuration, our logging output now looks like this:

Logging Output of Spring Boot XML Configuration

Note: Spring Boot expects the logback-spring.xml configuration file to be on the classpath. However, you can store it in a different location and point to it using the logging.config property in application.properties.

Spring Boot Profiles in Logging

While developing on your local machine, it is common to set the log level to DEBUG. This will give you detailed log messages for your development use. While on production, it’s typical to set the log level to WARN or above. This is to avoid filling your logs with excessive debug information and logging overhead while running in production. While logging is very efficient, there is still a cost.

Spring Boot has addressed these requirements by extending Spring profiles for logback configuration with the <springProfile> element. Using this element in your logback-spring.xml file, you can optionally include or exclude sections of logging configuration based on the active Spring profile.

Note: Support for <springProfile> in logback configuration is available from SpringBoot 1.3.0.M2 milestone onwards.

Here is an XML example to configure Logback using active Spring profiles.
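The example isn’t reproduced in this copy; a sketch matching the description below is:

```xml
<springProfile name="dev,staging">
    <!-- DEBUG and above from the controllers package goes to the console -->
    <logger name="guru.springframework.controllers" level="DEBUG" additivity="false">
        <appender-ref ref="CONSOLE"/>
    </logger>
</springProfile>

<springProfile name="production">
    <!-- In production, only WARN and above, written to the log file -->
    <logger name="guru.springframework.controllers" level="WARN" additivity="false">
        <appender-ref ref="FILE"/>
    </logger>
</springProfile>
```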

In the configuration code above, for the dev and staging profiles, we configured the guru.springframework.controllers logger to log DEBUG and higher level messages to the console. For the production profile, we configured the same logger to log WARN and higher level messages to a file.

To pass a profile to the application, run the application with the -Dspring.profiles.active=<profile name> JVM argument.

For local development, in IntelliJ, select Run-> Edit Configurations, and set the JVM argument in the Run/Debug Configurations dialog box, like this.

setting active profiles for logging in IntelliJ

Now, when we run the application with the dev profile, we will see the following log output.

Logging Output with Spring Active Profiles

In the output above, observe the logging output of IndexController. DEBUG and higher log messages got logged to the console based on the configuration of the dev profile. You can restart the application with the production profile to ensure that WARN and higher log messages get logged to the file.

Conditional Processing of Configuration File

Logback supports conditional processing of configuration files with the help of the Janino library. You can use <if>, <then> and <else> elements in a configuration file to target several environments. To perform conditional processing, add the Janino dependency to your Maven POM, like this.
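The dependency declaration would be along these lines (the version is illustrative):

```xml
<dependency>
    <groupId>org.codehaus.janino</groupId>
    <artifactId>janino</artifactId>
    <version>2.7.8</version>
</dependency>
```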

The complete logback-spring.xml file with conditional processing logic is this.

In the code above, we specified a condition in the <if> element to check whether the current active profile contains dev. If the condition evaluates to true, the configuration code within the <then> element executes. In the <then> element, we configured guru.springframework.helpers to log DEBUG and higher messages to console. We used the <else> element to configure the logger to log WARN and higher messages to the log file. The <else> element executes for any profiles other than dev.

When you run the application with the production profile and access it, both loggers will log WARN and higher messages to the log file, similar to this.
Logging Output with production Profile

For the dev profile, both loggers will log DEBUG and higher messages to the console, similar to this.
Logging Output for dev Profile

Logback Auto-Scan Issue with Spring Boot

In a logback-spring.xml file, you can enable auto-scan of the configuration by setting the scan="true" attribute. With auto-scan enabled, Logback scans for changes in the configuration file. For any changes, Logback automatically reconfigures itself with them. You can specify a scanning period by passing a time period to the scanPeriod attribute, with a value specified in units of milliseconds, seconds, minutes, or hours.
For example, this code tells Logback to scan logback-spring.xml after every 10 seconds.
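That snippet isn’t preserved in this copy, but the attributes in question sit on the root configuration element, roughly like this:

```xml
<!-- Rescan this configuration file for changes every 10 seconds -->
<configuration scan="true" scanPeriod="10 seconds">
    <include resource="org/springframework/boot/logging/logback/base.xml"/>
</configuration>
```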

One limitation of Logback auto-scan under Spring Boot is that with springProfile and springProperty, enabling auto-scan results in an error.

The error occurs because of incompatibility issues. Spring Boot uses the JoranConfigurator subclass to support springProfile and springProperty. Unfortunately, Logback’s ReconfigureOnChangeTask doesn’t provide a hook to plug it in.


Conclusion

The popularity of Logback is trending in the open source community. A number of popular open source projects use Logback for their logging needs. Apache Camel, Gradle, and SonarQube are just a few examples.

Logback clearly has the capabilities to handle the needs of logging in a complex enterprise application. So it’s no wonder the Spring Boot team selected Logback for the default logging implementation. As you’ve seen in this post, the Spring Boot team has provided a nice integration with Logback. Out of the box, Logback is ready to use with Spring Boot. In this post, you’ve seen how easy it is to configure Logback in Spring Boot as your logging requirements evolve.
