Gson is a very popular Java library for working with JSON.

JavaScript Object Notation (JSON) is a lightweight data exchange format. Like XML, JSON provides a way of representing objects that is both human readable and machine processable.

JSON is the most commonly used data exchange format on the Web. In the Java ecosystem, there are several libraries that you can use to serialize Java objects to JSON, transmit the JSON data over a network, and deserialize the JSON back to Java objects.

In this post, we’ll take a look at using Gson, the JSON library from Google.

The Maven POM

In order to use GSON, you need the JAR files of the GSON library in your project classpath. If you are using Maven, include the GSON dependency in your Maven POM.

The code to add the GSON dependency is this:
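
Something like the following works; check Maven Central for the current version (the one shown here is only an example):

```xml
<dependency>
    <groupId>com.google.code.gson</groupId>
    <artifactId>gson</artifactId>
    <version>2.8.5</version>
</dependency>
```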

The POJO

For the sample application, let’s create a Product POJO with a few fields to represent product information.

Here is the code of the Product class.

Product.java
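
A simple version of the class, with the fields used throughout this post (productId, description, imageUrl, price, and version), might look like this:

```java
import java.math.BigDecimal;

public class Product {

    private String productId;
    private String description;
    private String imageUrl;
    private BigDecimal price;
    private int version;

    public Product() {
    }

    public String getProductId() { return productId; }
    public void setProductId(String productId) { this.productId = productId; }

    public String getDescription() { return description; }
    public void setDescription(String description) { this.description = description; }

    public String getImageUrl() { return imageUrl; }
    public void setImageUrl(String imageUrl) { this.imageUrl = imageUrl; }

    public BigDecimal getPrice() { return price; }
    public void setPrice(BigDecimal price) { this.price = price; }

    public int getVersion() { return version; }
    public void setVersion(int version) { this.version = version; }
}
```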

Converting Java Objects to JSON Representations

Before you start using GSON in your application, you need to first create an instance of Gson. There are two ways to do so:

  • Use the Gson class to create a new instance.
  • Create a GsonBuilder instance and call the create() method on it.

Use the GsonBuilder when you want to configure your Gson object. Otherwise, a plain Gson instance will serve you fine.

Once you have a Gson object, you can call the toJson() method. This method takes a Java object and returns the corresponding JSON, as a String.

The following code shows how to convert the Product POJO into JSON.
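
A minimal sketch (the enclosing class is omitted):

```java
import com.google.gson.Gson;

public String simpleJSON(Product product) {
    Gson gson = new Gson();
    return gson.toJson(product);
}
```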

Here is a unit test to test the simpleJSON() method.
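
Such a test might look like this (the values and assertions are illustrative):

```java
import java.math.BigDecimal;
import org.junit.Test;
import static org.junit.Assert.assertNotNull;

@Test
public void testSimpleJSON() {
    Product product = new Product();
    product.setProductId("P01");
    product.setDescription("Spring Guru Mug");
    product.setPrice(new BigDecimal("18.95"));

    String json = simpleJSON(product);

    assertNotNull(json);
    System.out.println(json);
}
```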

Above, I am using JUnit as the test framework. If you are new to JUnit, check out my series of posts on Unit Testing using JUnit.

The output on running the test in IntelliJ is this.

JSON Serialization with GSON

For pretty printing of the JSON output with Gson, use this code:
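
A sketch, following the same pattern as the method above:

```java
import com.google.gson.Gson;
import com.google.gson.GsonBuilder;

public String simpleJsonWithPrettyPrinting(Product product) {
    Gson gson = new GsonBuilder()
            .setPrettyPrinting()
            .create();
    return gson.toJson(product);
}
```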

In the code above, the setPrettyPrinting() method on the GsonBuilder creates the Gson instance with pretty printing enabled.

The code to test the simpleJsonWithPrettyPrinting() method is this.

Here is the test output:
JSON Serialization with GSON with Pretty Printing

Converting JSON Strings to Java Objects Using GSON

Gson also allows you to convert JSON data into Java objects.

Consider that you have the following JSON string:
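
For example (the values are illustrative):

```json
{"productId":"P01","description":"Spring Guru Mug","imageUrl":"https://example.com/mug.png","price":18.95}
```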

You can parse this JSON string into a Product object using the fromJson() method of Gson as shown.
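
A minimal sketch:

```java
import com.google.gson.Gson;

public Product jsonToObject(String json) {
    Gson gson = new Gson();
    return gson.fromJson(json, Product.class);
}
```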

The first parameter of the fromJson() method is the source JSON that must be converted into Product. In our example, it is the json variable of type String. The second parameter is Product.class, the type of object (POJO) that the JSON will be mapped to.

Note: Ensure that the POJO to which the JSON is converted has a no-argument constructor. This is required so that Gson can create an instance of the class.

The test code to test the jsonToObject() method is this.

Here is test output from the above unit test:

JSON Deserialization with GSON

Excluding Fields from being Serialized or Deserialized

At times you might not want certain fields of a POJO to be serialized or deserialized. GSON allows you to exclude such fields from your POJO.

One of the several ways of excluding fields is by using the transient keyword.

If you declare any field in the Java class as transient, Gson ignores it during both serialization and deserialization.

The code to declare a field as transient is this.
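
Assuming the version field from our Product POJO:

```java
private transient int version = 1;
```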

When you serialize the object to JSON, you will find that the version field of the POJO is not serialized.

To summarize, a field marked as transient ensures that it will neither be serialized nor deserialized.

However, at times you need more control. You might need that a field should be serialized but never deserialized, and vice-versa.

For such requirements, use Gson’s @Expose annotation. This annotation controls whether a field is included during serialization, deserialization, or both. For example, using the @Expose annotation, you can tell that the version field should only be serialized and not deserialized back, or the other way around.

The @Expose annotation takes one, both, or none of the two elements: serialize and deserialize, both of which are of Boolean type.

The Product POJO fields after applying the @Expose annotation are these.
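
A sketch of the annotated fields:

```java
import com.google.gson.annotations.Expose;

@Expose(serialize = false, deserialize = true)
private String imageUrl;

@Expose
private BigDecimal price;
```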

In this code, the imageUrl field is marked with the @Expose annotation with the serialize element set to false and deserialize set to true. Therefore, the imageUrl field will only be deserialized from JSON, but will not be serialized.

The price field is marked with @Expose, and therefore, by default, its serialize and deserialize elements are both set to true. Therefore, the price field will be both serialized and deserialized.

You need to explicitly tell Gson to use the @Expose annotation like this.
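
```java
Gson gson = new GsonBuilder()
        .excludeFieldsWithoutExposeAnnotation()
        .create();
```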

This code calls the excludeFieldsWithoutExposeAnnotation() method on the GsonBuilder object to exclude all fields without the @Expose annotation.

The test code is this:

The output of the test in IntelliJ is this.
JSON Serialization with GSON with Expose Fields

Custom Serialization and Deserialization

So far, we have converted Java objects into JSON and vice versa using the default Gson implementation.

However, at times, you may want to configure the serialization and deserialization processes. For example, you might need to specifically map a particular POJO field with a JSON key having a different name.

To do so, you can use the JsonSerializer and JsonDeserializer interfaces.

Creating a Custom Serializer

Consider the following fields in a POJO that you need to serialize.

The POJO needs to be serialized to this JSON:
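
Using illustrative values, the target JSON looks like this:

```json
{
  "product-id": "P01",
  "description": "Spring Guru Mug",
  "image-url": "https://example.com/mug.png",
  "price": 18.95
}
```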

By default, Gson uses the field names as keys in the JSON. For example, the class has a field named productId, while the same field is represented as product-id in the required JSON.

To serialize the Java object into the expected JSON, you need to create a custom serializer class that implements the JsonSerializer interface. The code shows a custom serializer class.
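
A sketch of such a serializer; the JSON key names follow the target JSON above:

```java
import java.lang.reflect.Type;
import com.google.gson.JsonElement;
import com.google.gson.JsonObject;
import com.google.gson.JsonSerializationContext;
import com.google.gson.JsonSerializer;

public class CustomProductSerializer implements JsonSerializer<Product> {

    @Override
    public JsonElement serialize(Product product, Type typeOfSrc,
                                 JsonSerializationContext context) {
        // Build the JSON object with the key names required by the target JSON
        JsonObject jsonObject = new JsonObject();
        jsonObject.addProperty("product-id", product.getProductId());
        jsonObject.addProperty("description", product.getDescription());
        jsonObject.addProperty("image-url", product.getImageUrl());
        jsonObject.addProperty("price", product.getPrice());
        return jsonObject;
    }
}
```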

The Type parameter of the JsonSerializer interface is the type of the object to be serialized.
Here an instance of Product is serialized.

The return type of the overridden serialize() method is a JsonElement. A JsonElement can be one of four concrete types. One of them is JsonObject, a set of key-value pairs where each value is itself a JsonElement.

To serialize the Product object, you need an instance of JsonElement. This example uses JsonObject. This object is populated with the fields as required in the JSON using the addProperty() method. Finally, the serialize() method returns the populated JsonObject.

To use this serializer, you need to register it through the GsonBuilder.

You can register and use the custom serializer, like this:
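
```java
Gson gson = new GsonBuilder()
        .registerTypeAdapter(Product.class, new CustomProductSerializer())
        .create();
String json = gson.toJson(product);
```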

By registering the CustomProductSerializer class, you are telling Gson to use the custom serializer class whenever an object of Product class is serialized.

In this code, the registerTypeAdapter() method of GsonBuilder registers the custom serializer class, CustomProductSerializer.

The code to test the custom serializer is this.

The output on running the test is this.
Custom JSON Serialization with GSON

Creating a Custom Deserializer

Consider the following JSON that you need to deserialize into the Product object.
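
For example:

```json
{
  "product-id": "P01",
  "description": "Spring Guru Mug",
  "image-url": "https://example.com/mug.png",
  "price": 18.95
}
```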

By default, Gson maps a JSON key to the field in the target class with the same name. So Gson by default would expect the fields of the Product class to be product-id, description, image-url, and price.

However, it is common to have different field names in the POJO class. In our example, the JSON has a product-id key, while the productId field represents the same data in the Product class. The same goes for the image-url key, whose corresponding field in Product is imageUrl.

As a result, the default deserializer of Gson will not be able to deserialize the product-id and image-url keys to their corresponding fields in Product.

In order to parse this JSON into an object of Product class you can use a custom deserializer that implements the JsonDeserializer interface.

The code of the custom deserializer is this.
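
A sketch of such a deserializer:

```java
import java.lang.reflect.Type;
import com.google.gson.JsonDeserializationContext;
import com.google.gson.JsonDeserializer;
import com.google.gson.JsonElement;
import com.google.gson.JsonObject;
import com.google.gson.JsonParseException;

public class CustomProductDeserializer implements JsonDeserializer<Product> {

    @Override
    public Product deserialize(JsonElement json, Type typeOfT,
                               JsonDeserializationContext context) throws JsonParseException {
        JsonObject jsonObject = json.getAsJsonObject();
        // Map each JSON key to the corresponding Product field
        Product product = new Product();
        product.setProductId(jsonObject.get("product-id").getAsString());
        product.setDescription(jsonObject.get("description").getAsString());
        product.setImageUrl(jsonObject.get("image-url").getAsString());
        product.setPrice(jsonObject.get("price").getAsBigDecimal());
        return product;
    }
}
```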

The JsonDeserializer interface takes a Type parameter, which is the type of the Java class to which the JSON will be converted. Here it is the Product class.

The CustomProductDeserializer class overrides the deserialize() method of JsonDeserializer, which returns an instance of Product.

In the deserialize() method, the call to getAsJsonObject() converts the JsonElement passed as an argument to deserialize() into a JsonObject. The values within the JsonObject are themselves JsonElements, which can be retrieved by name. To convert these elements into their respective types, the getAsXXX() methods are called.

To use this deserializer, you need to register it through the GsonBuilder. The code to register and use the custom deserializer is this:
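
A sketch, assuming product.json sits on the classpath:

```java
import java.io.InputStreamReader;
import java.io.Reader;
import com.google.gson.Gson;
import com.google.gson.GsonBuilder;

Gson gson = new GsonBuilder()
        .registerTypeAdapter(Product.class, new CustomProductDeserializer())
        .create();

Reader reader = new InputStreamReader(
        getClass().getResourceAsStream("/product.json"));
Product product = gson.fromJson(reader, Product.class);
```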

By registering the CustomProductDeserializer class you tell Gson to use the custom deserializer class whenever a JSON is deserialized to a Product type.

The registerTypeAdapter() call on the GsonBuilder registers the custom deserializer.

An InputStreamReader (a Reader implementation) reads a JSON file, product.json, that contains the JSON to be deserialized.

The fromJson() method on the Gson instance then parses the JSON into a Product object.

The product.json file is this.

The code to test the custom deserializer is this.

The output on running the test is this.

Using GSON with Spring Boot

Out of the box, Spring Boot uses Jackson as the JSON parser library. If you wish to learn more about Jackson, check out this blog post.

But you can seamlessly plug Gson into your Spring Boot application for JSON processing. To include Gson, you need to configure the GsonHttpMessageConverter. Whenever a request hits the controller, the @RestController annotation asks Spring to render the returned object directly to the response body.

Spring looks for HttpMessageConverter to convert the Java object into JSON.

Spring by default configures the MappingJackson2HttpMessageConverter for this conversion. But to use Gson you need to configure Spring Boot to use GsonHttpMessageConverter instead.

The Maven POM

To use Gson in your Spring Boot application, declare Gson in your Spring Boot Maven POM.
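
Spring Boot’s parent POM manages the Gson version, so no version tag is needed:

```xml
<dependency>
    <groupId>com.google.code.gson</groupId>
    <artifactId>gson</artifactId>
</dependency>
```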

To avoid conflicts with the use of Jackson and Gson, use the annotation @EnableAutoConfiguration(exclude = { JacksonAutoConfiguration.class }) in your application class and exclude the Jackson dependency from your POM.

To exclude Jackson dependency from the spring-boot-starter-web dependency add this in your POM.
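
Something like this:

```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <exclusions>
        <exclusion>
            <groupId>com.fasterxml.jackson.core</groupId>
            <artifactId>jackson-databind</artifactId>
        </exclusion>
    </exclusions>
</dependency>
```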

Validate the GSON Converter

To verify that our Spring Boot application is indeed using the Gson converter, let’s create a controller class with a request handler method that handles requests to the root mapping /.
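
A minimal sketch (the returned values are illustrative):

```java
import java.math.BigDecimal;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class ProductController {

    @GetMapping("/")
    public Product getProduct() {
        Product product = new Product();
        product.setProductId("P01");
        product.setDescription("Spring Guru Mug");
        product.setPrice(new BigDecimal("18.95"));
        return product;
    }
}
```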

Let us run the application with debug enabled. Include the following property in your application.properties file to see the logging output.
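
Assuming the standard Spring Boot logging property, raising the Spring Framework log level shows the converter in use:

```properties
logging.level.org.springframework=DEBUG
```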

When you access the application from the browser, the console output is this:

As you can see, the Spring Framework is using GSON’s GsonHttpMessageConverter to convert Product to JSON instead of the default Jackson library.

Summary

As you can see in this post, Spring Boot is very flexible in allowing JSON parsers, like Gson, to be plugged in for parsing JSON.

Gson is a complete JSON parser with extensive support of Java Generics. Gson supports custom representations for objects and arbitrarily complex objects having deep inheritance hierarchies.

In terms of usage, I found Gson to be fairly simple. The API is intuitive and with minimal changes to source code, you can address complex serialization and deserialization requirements of JSON to and from Java objects.


In this tutorial, we will be building a demo web application for a Dog Rescue organization that uses JdbcTemplate and Thymeleaf. For this example, we will be using a MySQL database. However, this example is not limited to MySQL and the database could be swapped out for another type with ease.

You can browse and download the code on Github as you follow this example.

1 – Project Structure

The project uses a typical Maven structure. You may notice I am using Spring Tool Suite, which JT is not a fan of!

Project structure of Spring Boot Thymeleaf JdbcTemplate tutorial

 

2 – Dependencies

Besides typical Spring Boot Starter dependencies, we include Thymeleaf and MySQL connector.

 

3 – Configuration

We configure all our datasource information here in the application.properties. Later we will autowire this for our JdbcTemplate use.

application.properties
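
A sketch of the datasource configuration; the database name and credentials are illustrative:

```properties
spring.datasource.url=jdbc:mysql://localhost:3306/dogrescue
spring.datasource.username=dbuser
spring.datasource.password=dbpassword
spring.datasource.driver-class-name=com.mysql.jdbc.Driver
```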

4 – Database Initialization

When our application starts, these SQL files will be automatically detected and run. In our case, we will drop the table “dog” every time the application starts, create a new table named “dog”, and then insert the values shown in data.sql.

You may recall that “vaccinated” is a Boolean value in Java. In MySQL Boolean is a synonym for TINYINT(1), so we can use this data type for the column.

schema.sql
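
A sketch matching the columns described in this post:

```sql
DROP TABLE IF EXISTS dog;

CREATE TABLE dog (
    id INT NOT NULL AUTO_INCREMENT,
    name VARCHAR(255),
    rescued DATE,
    vaccinated TINYINT(1),
    PRIMARY KEY (id)
);
```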

data.sql
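
The demo later shows Buddy (unvaccinated) and Pooch; the third dog’s name and the dates here are illustrative:

```sql
INSERT INTO dog (name, rescued, vaccinated) VALUES ('Buddy', '2017-01-01', 0);
INSERT INTO dog (name, rescued, vaccinated) VALUES ('Pooch', '2017-02-01', 1);
INSERT INTO dog (name, rescued, vaccinated) VALUES ('Barky', '2017-03-01', 1);
```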

5 – Model/Entity

Here we define the characteristics of a dog that we want to know for our Dog Rescue. The getters and setters were auto-generated, and it is suggested to do this to save time.
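
A sketch of the entity:

```java
import java.util.Date;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.Temporal;
import javax.persistence.TemporalType;

@Entity
public class Dog {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;
    private String name;
    @Temporal(TemporalType.DATE)
    private Date rescued;
    private Boolean vaccinated;

    public Long getId() { return id; }
    public void setId(Long id) { this.id = id; }

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public Date getRescued() { return rescued; }
    public void setRescued(Date rescued) { this.rescued = rescued; }

    public Boolean getVaccinated() { return vaccinated; }
    public void setVaccinated(Boolean vaccinated) { this.vaccinated = vaccinated; }
}
```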

6 – Repository

We extend the CrudRepository for our DogRepository. The only additional method we create is a derived query for finding a dog by name.
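
A sketch:

```java
import org.springframework.data.repository.CrudRepository;

public interface DogRepository extends CrudRepository<Dog, Long> {

    // Derived query: Spring Data generates the implementation from the method name
    Dog findByName(String name);
}
```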

7 – Service

Using the SOLID principles that JT discusses on the site here: SOLID Principles, we build a service interface and then implement that interface.

DogService.java
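
A sketch of the interface, with signatures matching the methods described below:

```java
import java.util.Date;
import java.util.List;

public interface DogService {

    void addADog(String name, Date rescued, Boolean vaccinated);

    void deleteADog(Long id, String name);

    List<Dog> atRiskDogs(Date rescuedBefore);

    long getGeneratedKey(String name, Date rescued, Boolean vaccinated);
}
```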

DogServiceImpl.java

Here we implement the methods declared in DogService.java; a sketch of the implementation follows the list below.

  • addADog – is an example of how to add a record using JdbcTemplate’s update method. It takes three parameters: String, Date, and Boolean.
  • deleteADog – is an example of how to delete a record using JdbcTemplate’s update method. It takes two parameters: Long (id) and String (name).
  • atRiskDogs – is an example of how to select records using JdbcTemplate’s query method with a ResultSetExtractor. It takes one parameter: Date. The method returns a List of dogs that were rescued before that date and have not been vaccinated (Boolean value of false).
  • getGeneratedKey – is an example of how to insert a record using JdbcTemplate’s update method with a PreparedStatementCreator and retrieve the generated key. It takes the same parameters as the other insert example: String, Date, and Boolean.
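
A sketch of the implementation (SQL and column names follow the schema above):

```java
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.Date;
import java.util.List;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.dao.DataAccessException;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.core.ResultSetExtractor;
import org.springframework.jdbc.support.GeneratedKeyHolder;
import org.springframework.jdbc.support.KeyHolder;
import org.springframework.stereotype.Service;

@Service
public class DogServiceImpl implements DogService {

    @Autowired
    private JdbcTemplate jdbcTemplate;

    @Override
    public void addADog(String name, Date rescued, Boolean vaccinated) {
        jdbcTemplate.update(
                "INSERT INTO dog (name, rescued, vaccinated) VALUES (?, ?, ?)",
                name, rescued, vaccinated);
    }

    @Override
    public void deleteADog(Long id, String name) {
        jdbcTemplate.update("DELETE FROM dog WHERE id = ? AND name = ?", id, name);
    }

    @Override
    public List<Dog> atRiskDogs(Date rescuedBefore) {
        return jdbcTemplate.query(
                "SELECT * FROM dog WHERE rescued < ? AND vaccinated = false",
                new Object[]{rescuedBefore},
                new ResultSetExtractor<List<Dog>>() {
                    @Override
                    public List<Dog> extractData(ResultSet rs)
                            throws SQLException, DataAccessException {
                        List<Dog> dogs = new ArrayList<>();
                        while (rs.next()) {
                            Dog dog = new Dog();
                            dog.setId(rs.getLong("id"));
                            dog.setName(rs.getString("name"));
                            dog.setRescued(rs.getDate("rescued"));
                            dog.setVaccinated(rs.getBoolean("vaccinated"));
                            dogs.add(dog);
                        }
                        return dogs;
                    }
                });
    }

    @Override
    public long getGeneratedKey(String name, Date rescued, Boolean vaccinated) {
        KeyHolder keyHolder = new GeneratedKeyHolder();
        jdbcTemplate.update(con -> {
            PreparedStatement ps = con.prepareStatement(
                    "INSERT INTO dog (name, rescued, vaccinated) VALUES (?, ?, ?)",
                    Statement.RETURN_GENERATED_KEYS);
            ps.setString(1, name);
            ps.setDate(2, new java.sql.Date(rescued.getTime()));
            ps.setBoolean(3, vaccinated);
            return ps;
        }, keyHolder);
        return keyHolder.getKey().longValue();
    }
}
```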

8 – Controller

DogController.java

    • @GetMapping(value = “/”) – there is an optional requirement to provide a search value of type Date in yyyy-MM-dd format. This variable is called q (for “query”) and if it is not null then we will create an ArrayList of all dogs rescued before that date who have not been vaccinated. This ArrayList is called dogModelList and added as an attribute known as “search”. This attribute will be used in our Thymeleaf template.
      Because of its ease of use, we use the built-in findAll() method of the CrudRepository to create a list of all dogs in the repository and add it as an attribute, which will also be used by Thymeleaf.
    • @PostMapping(value = “/”) – we request all the parameters that will be passed in our HTML form. We use these values to add a dog to our database.
    • @PostMapping(value = “/delete”) – we request the parameters needed to delete a dog. After the dog is deleted, we redirect the user back to our homepage.
    • @PostMapping(value = “/genkey”) – this is the mapping for inserting a record that returns a generated key. The generated key is printed to standard out in our example.
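
Putting the mappings described above together, a sketch of the controller (attribute and parameter names are illustrative):

```java
import java.util.Date;

import org.springframework.format.annotation.DateTimeFormat;
import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestParam;

@Controller
public class DogController {

    private final DogService dogService;
    private final DogRepository dogRepository;

    public DogController(DogService dogService, DogRepository dogRepository) {
        this.dogService = dogService;
        this.dogRepository = dogRepository;
    }

    @GetMapping(value = "/")
    public String index(@RequestParam(value = "q", required = false)
                        @DateTimeFormat(pattern = "yyyy-MM-dd") Date q, Model model) {
        if (q != null) {
            model.addAttribute("search", dogService.atRiskDogs(q));
        }
        model.addAttribute("dogList", dogRepository.findAll());
        return "index";
    }

    @PostMapping(value = "/")
    public String addDog(@RequestParam String name,
                         @RequestParam @DateTimeFormat(pattern = "yyyy-MM-dd") Date rescued,
                         @RequestParam Boolean vaccinated) {
        dogService.addADog(name, rescued, vaccinated);
        return "redirect:/";
    }

    @PostMapping(value = "/delete")
    public String deleteDog(@RequestParam Long id, @RequestParam String name) {
        dogService.deleteADog(id, name);
        return "redirect:/";
    }

    @PostMapping(value = "/genkey")
    public String addDogWithKey(@RequestParam String name,
                                @RequestParam @DateTimeFormat(pattern = "yyyy-MM-dd") Date rescued,
                                @RequestParam Boolean vaccinated) {
        System.out.println("Generated key: "
                + dogService.getGeneratedKey(name, rescued, vaccinated));
        return "redirect:/";
    }
}
```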

9 – Thymeleaf template

As this is a basic example application to demonstrate approaches to JdbcTemplate, JPA, Thymeleaf, and other technologies, we have just this one page with a minimalist user interface.

      • Using th:each we are able to iterate through all the records in our dog table
      • Using th:text with the variable and field name, we can display the record, e.g. th:text="${dogs.id}"
      • Using th:if="${not #lists.isEmpty(search)}", we prevent the web page from showing the table of search results for dogs at risk (not vaccinated) unless there are results to be shown.
      • With our forms, we map the request to a specific URI and specify names for the inputs of our form that match the parameters in our controller.

index.html
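
A pared-down sketch of the template, using the attribute names from the controller sketch above:

```html
<!DOCTYPE html>
<html xmlns:th="http://www.thymeleaf.org">
<body>
    <h2>Dogs</h2>
    <table>
        <tr><th>Id</th><th>Name</th><th>Rescued</th><th>Vaccinated</th></tr>
        <tr th:each="dogs : ${dogList}">
            <td th:text="${dogs.id}">1</td>
            <td th:text="${dogs.name}">Name</td>
            <td th:text="${dogs.rescued}">Date</td>
            <td th:text="${dogs.vaccinated}">Vaccinated</td>
        </tr>
    </table>

    <div th:if="${not #lists.isEmpty(search)}">
        <h2>Dogs at risk</h2>
        <table>
            <tr th:each="dog : ${search}">
                <td th:text="${dog.name}">Name</td>
            </tr>
        </table>
    </div>

    <h2>Add a dog</h2>
    <form method="post" action="/">
        <input type="text" name="name"/>
        <input type="text" name="rescued" placeholder="yyyy-MM-dd"/>
        <input type="text" name="vaccinated"/>
        <input type="submit" value="Add"/>
    </form>
</body>
</html>
```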

10 – @SpringBootApplication

Our class with the main method has nothing unique in it. The @SpringBootApplication annotation takes care of autodetecting beans that are registered with the various stereotype annotations, such as @Service, etc.

11 – Demo

Landing page

So, I have navigated to localhost:8080 as I did not change the default ports for this application. When I land on the page, you can see that it is displaying the current dogs in our database.

Dog Rescue homepage

Find Dogs That Need Vaccines

Imagine that instead of three dogs in this database we had a larger, less manageable number. Having a feature that allows employees of a dog rescue to find dogs that need vaccinations would be useful if there were more dogs.

The search functionality takes a date and shows dogs that were rescued before that date that have not been vaccinated.

Although we know right now that Buddy is the only dog without his vaccinations, let’s show how this works.

Search for dogs without vaccine

Thymeleaf conditional statement

Add A Dog

As we know, the ID is autogenerated. So we can fill in all the fields minus the ID and still successfully add a Dog to the database.
Adding a dog in thymeleaf template

Table in thymeleaf

Delete A Dog

We remove a dog from the database by using the primary ID but also ask for the name of the dog to verify it is the correct one.

We redirect the user back to the index, so it displays the table of dogs minus the one deleted. Below you can see I have removed “Pooch”.
Delete JdbcTemplate Thymeleaf

Spring Boot Delete Dog

Add A Dog And Retrieve Generated Key

Sometimes we need to retrieve the generated key from our database for other uses. Here in this example, we insert a dog named “Lassie” and retrieve the generated key.

Insert Lassie in database

In the console, this is printed

Our table is once again updated

updated table

Download the code from Github

If you’d like, you can view and download the code from Github.

About Michael

Michael Good is a software engineer located in the Washington DC area that is interested in Java, cyber security, and open source technologies. Follow his personal blog to read more from Michael.


I have met many developers who refer to tests as “unit tests” when they are actually integration tests. In service layers, I’ve seen tests referred to as unit tests, but written with dependencies on the actual service, such as a database, web service, or some message server. Those are part of integration testing. Even if you’re just using the Spring Context to auto-wire dependencies, your test is an integration test. Rather than using the real services, you can use Mockito mocks and spies to keep your tests unit tests and avoid the overhead of running integration tests.

This is not to say Integration tests are bad. There is certainly a role for integration tests. They are a necessity.

But compared to unit tests, integration tests are sloooowwwww. Very slow. Your typical unit test will execute in a fraction of a second. Even complex unit tests on obsolete hardware will still complete sub-second.

Integration tests, on the other hand, take several seconds to execute. It takes time to start the Spring Context. It takes time to start an H2 in-memory database. It takes time to establish a database connection.

While this may not seem like much, it compounds on a large project. As you add more and more tests, the length of your build becomes longer and longer.

No developer wants to break the build. So we run all the tests to be sure. As we code, we’ll be running the full suite of tests multiple times a day. For your own productivity, the suite of tests needs to run quickly.

If you’re writing Integration tests where a unit test would suffice, you’re not only impacting your own personal productivity. You’re impacting the productivity of the whole team.

On a recent client engagement, the development team was very diligent about writing tests. Which is good. But, the team favored writing Integration tests. Frequently, integration tests were used where a Unit test could have been used. The build was getting slower and slower. Because of this, the team started refactoring their tests to use Mockito mocks and spies to avoid the need for integration tests.

They were still testing the same objectives. But Mockito was being used to fill in for the dependency driving the need for the integration test.

For example, Spring Boot makes it easy to test using a H2 in memory database using JPA and repositories supplied by Spring Data JPA.

But why not use Mockito to provide a mock for your Spring Data JPA repository?

Unit tests should be atomic, lightweight, and fast, executed as isolated units. Additionally, unit tests in Spring should not bring up a Spring Context. I have written about the different types of tests in my earlier Testing Software post.

I have already written a series of posts on JUnit and a post on Testing Spring MVC With Spring Boot 1.4: Part 1. In the latter, I discussed unit testing controllers in a Spring MVC application.

I feel the majority of your tests should be unit tests, not integration tests. If you’re writing your code following the SOLID Principles of OOP, your code is already well structured to accept Mockito mocks.

In this post, I’ll explain how to use Mockito to test the service layer of a Spring Boot MVC application. If Mockito is new for you, I suggest reading my Mocking in Unit Tests With Mockito post first.

Mockito Mocks vs Spies

In unit testing, a test double is a replacement of a dependent component (collaborator) of the object under test. The test double does not have to behave exactly as the collaborator. The purpose is to mimic the collaborator to make the object under test think that it is actually using the collaborator.

Based on the role played during testing, there can be different types of test doubles. In this post we’re going to look at mocks and spies.

There are some other types of test doubles, such as dummy objects, fake objects, and stubs. If you’re using Spock, one of my favorite tricks was to cast a map of closures in as a test double. (One of the many fun things you can do with Groovy!)

What makes a mock object different from the others is that it has behavior verification. Which means the mock object verifies that it (the mock object) is being used correctly by the object under test. If the verification succeeds, you can conclude the object under test will correctly use the real collaborator.

Spies, on the other hand, provide a way to spy on a real object. With a spy, you can call all the real underlying methods of the object while still tracking every interaction, just as you would with a mock.

Things get a bit different for Mockito mocks vs spies. A Mockito mock allows us to stub a method call. Which means we can stub a method to return a specific object. For example, we can mock a Spring Data JPA repository in a service class to stub a getProduct() method of the repository to return a Product object. To run the test, we don’t need the database to be up and running – a pure unit test.

A Mockito spy is a partial mock. We can mock a part of the object by stubbing a few methods, while real method invocations are used for the others. So calling a method on a spy invokes the actual method, unless we explicitly stub the method, and hence the term partial mock.

Let’s look at mocks vs spies in action, with a Spring Boot MVC application.

The Application Under Test

Our application contains a single Product JPA entity. CRUD operations are performed on the entity by ProductRepository using a CrudRepository supplied by Spring Data JPA. If you look at the code, you will see all we did was extend the Spring Data JPA CrudRepository to create our ProductRepository. Under the hood, Spring Data JPA provides implementations to manage entities for most common operations, such as saving an entity, updating it, deleting it, or finding it by id.

The service layer is developed following the SOLID design principles. We used the “Code to an Interface” technique, while leveraging the benefits of dependency injection. We have a ProductService interface and a ProductServiceImpl implementation. It is this ProductServiceImpl class that we will unit test.

Here is the code of ProductServiceImpl .

ProductServiceImpl.java

In the ProductServiceImpl class, you can see that ProductRepository is @Autowired in. The repository is used to perform CRUD operations – making it a prime candidate to mock when testing ProductServiceImpl.

Testing with Mockito Mocks

Coming to the testing part, let’s take up the getProductById() method of ProductServiceImpl. To unit test the functionality of this method, we need to mock the external Product and ProductRepository objects. We can do it by either using Mockito’s mock() method or through the @Mock annotation. We will use the latter option since it is convenient when you have a lot of mocks to inject.

Once we declare a mock with the @Mock annotation, we also need to initialize it. Mock initialization happens before each test method. We have two options – using the JUnit test runner MockitoJUnitRunner, or calling MockitoAnnotations.initMocks(). Both are equivalent solutions.

Finally, you need to provide the mocks to the object under test. You can do it by calling the setProductRepository() method of ProductServiceImpl or by using the @InjectMocks annotation.

The following code creates the Mockito mocks, and sets them on the object under test.
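
A sketch using the initMocks() option (class and field names are illustrative):

```java
import org.junit.Before;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.MockitoAnnotations;

public class ProductServiceImplMockTest {

    @Mock
    private ProductRepository productRepository;

    @InjectMocks
    private ProductServiceImpl productService;

    @Before
    public void setUp() {
        // Initialize the @Mock and @InjectMocks fields before each test
        MockitoAnnotations.initMocks(this);
    }
}
```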

Note: Since we are using the Spring Boot Test starter dependency, Mockito core automatically is pulled into our project. Therefore no extra dependency declaration is required in our Maven POM.

Once our mocks are ready, we can start stubbing methods on the mock. Stubbing means simulating the behavior of a mock object’s method. We can stub a method on the ProductRepository mock object by setting up an expectation on the method invocation.

For example, we can stub the findOne() method of the ProductRepository mock to return a Product when called. We then call the method whose functionality we want to test, followed by an assertion, like this.
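
A sketch of such a test, inside the test class above (field names and the id value are illustrative):

```java
// import static org.junit.Assert.assertEquals;
// import static org.mockito.Mockito.when;

@Test
public void shouldReturnProduct_whenGetProductByIdIsCalled() {
    Product product = new Product();
    product.setProductId("P01");
    when(productRepository.findOne(1)).thenReturn(product);

    Product found = productService.getProductById(1);

    assertEquals("P01", found.getProductId());
}
```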

This approach can be used to test the other methods of ProductServiceImpl, leaving aside deleteProduct() that has void as the return type.

To test the deleteProduct(), we will stub it to do nothing, then call deleteProduct(), and finally assert that the delete() method has indeed been called.
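
Something like this:

```java
// import static org.mockito.Mockito.*;

@Test
public void shouldCallDelete_whenDeleteProductIsCalled() {
    doNothing().when(productRepository).delete(1);

    productService.deleteProduct(1);

    // Behavior verification: the mock confirms delete() was invoked once
    verify(productRepository, times(1)).delete(1);
}
```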

Here is the complete test code for using Mockito mocks:

ProductServiceImplMockTest.java

Note: Stubbing with doNothing() is actually the default behavior of Mockito mocks for void methods, so the explicit stub is optional; it is shown here to make the test’s intent clearer.

Testing with Mockito Spies

We have tested our ProductServiceImpl with mocks. So why do we need spies at all? Actually, we don’t need one in this use case.

Outside Mockito, partial mocks were present for a long time to allow mocking only part (a few methods) of an object. But partial mocks were considered code smells. Primarily because if you need to partially mock a class while ignoring the rest of its behavior, then this class is violating the Single Responsibility Principle, since it is likely doing more than one thing.

Until Mockito 1.8, Mockito spies were not producing real partial mocks. However, after many debates & discussions, and after finding a valid use case for partial mock, support for partial mock was added to Mockito 1.8.

You can partially mock objects using spies and the thenCallRealMethod() method. What it means is that, without stubbing a method, you can now call the underlying real method of a mock, like this.
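
A sketch:

```java
// import static org.mockito.Mockito.*;

ProductServiceImpl productService = mock(ProductServiceImpl.class);
// All other methods remain stubs; only this call goes to the real method
when(productService.getProductById(1)).thenCallRealMethod();
```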

Be careful that the real implementation is ‘safe’ when using thenCallRealMethod(). The actual implementation needs to be able to run in the context of your test.

Another approach for partial mocking is to use a spy. As I mentioned earlier, all method calls on a spy are real calls to the underlying method, unless stubbed. So, you can also use a Mockito spy to partially mock few stubbed methods.

Here is the code that provides a Mockito spy for our ProductServiceImpl.

ProductServiceImplSpyTest.java
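
A sketch of the two tests described below (method names, setProductRepository(), and saveProduct() are assumptions based on the surrounding text; in Mockito 2+ the runner lives in org.mockito.junit):

```java
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.Spy;
import org.mockito.runners.MockitoJUnitRunner;
import static org.mockito.Mockito.*;

@RunWith(MockitoJUnitRunner.class)
public class ProductServiceImplSpyTest {

    @Spy
    private ProductServiceImpl productService;

    @Test(expected = NullPointerException.class)
    public void shouldThrow_whenRepositoryIsNotSet() {
        // The spy calls the real getProductById(), and no repository is wired in
        productService.getProductById(1);
    }

    @Test
    public void shouldSave_whenRepositoryIsStubbed() {
        ProductRepository productRepository = mock(ProductRepository.class);
        productService.setProductRepository(productRepository);
        when(productRepository.save(any(Product.class))).thenReturn(new Product());

        productService.saveProduct(new Product());

        verify(productRepository).save(any(Product.class));
    }
}
```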

In this test class, notice we used MockitoJUnitRunner instead of MockitoAnnotations.initMocks() for our annotations.

For the first test, we expected NullPointerException because the getProductById() call on the spy will invoke the actual getProductById() method of ProductServiceImpl, and our repository implementations are not created yet.

In the second test, we are not expecting any exception, as we are stubbing the save() method of ProductRepository.

The second and third methods are the relevant use cases of a spy in the context of our application – verifying method invocations.

Conclusion

In Spring Boot applications, by using Mockito, you replace the @Autowired components in the class you want to test with mock objects. In addition to unit testing the service layer, you will be unit testing controllers by injecting mock services. To unit test the DAO layer, you will mock the database APIs. The list is endless – it depends on the type of application you are working on and the object under test. If you’re following the Dependency Inversion Principle and using Dependency Injection, mocking becomes easy.

For partial mocking, use it to test 3rd party APIs and legacy code. You won’t require partial mocks for new, test-driven, and well-designed code that follows the Single Responsibility Principle. Another caveat is that when() style stubbing should not be used on spies, since it invokes the real method while stubbing; prefer the doReturn() family of methods instead. Also, given a choice between thenCallRealMethod() on a mock and a spy, use the former, as it is lightweight. Using thenCallRealMethod() on a mock does not create an actual object instance, but a bare-bones shell instance of the class to track interactions. However, if you use a spy, you create an object instance. As regards spies, use them only if you want to modify the behavior of a small chunk of the API and otherwise rely mostly on actual method calls.

The code for this post is available for download here.


Container-based deployments are rapidly gaining popularity in the enterprise. One of the more popular container solutions is Docker.

Many view containers as virtual machines. They’re not. Well, kind of not. A container is a virtual walled environment for your application. It’s literally a ‘container’ inside the host OS. Thus your application works like it is in its own self-contained environment, but it’s actually sharing operating system resources of the host computer. Because of this, containers are more resource efficient than full blown virtual machines. You get more bang for your buck running a bare metal machine with a bunch of containers than you do running a bare metal machine with a bunch of VMs. This is why massive cloud computing companies running tens of thousands of servers are running containers. Google, Facebook, Netflix, and Amazon are all big advocates of containers.

Introducing Docker Containers

To help you visualize the difference, here are a couple of images provided by Docker. Here is the bloated architecture of a traditional virtual machine environment. A popular solution you can try is Oracle’s VirtualBox, which allows you to run a variety of operating systems on your personal machine. I personally use VMWare Fusion to run Windows on my MBP (and I still feel a little dirty every time I do). If you’ve never used either of these, I recommend you give them a try.

In this graphic, note how each stack has its own Guest OS.

what-is-docker-diagram

Now for comparison, here is the same stack containerized by Docker. Here you can see how each application does not get its own operating system. This is key to why Docker containers are so efficient. You’re not providing a virtual layer to mimic the hardware for the guest OS to use. And you’re not running n+1 guest hosts either.

Docker What is a VM

Clearly this is more efficient computing. I’ve seen estimates in the 10-25% range of improved performance. But like with everything else when it comes to computing performance – your mileage may vary. I’d expect lightweight linux VMs to be closer to the 10% side of the scale, and Windows VMs probably closer to the 25% end of the scale – just because the Windows OS is so bloated in comparison.

This leads me to an important distinction about Docker – Linux only. Yes, you can “run” Docker on Windows and OSX, but at this time, you can only do so by running a Linux VM inside VirtualBox.

Running Spring Boot in a Docker Container

Introduction

When I first heard about running Spring Boot in a Docker container, I personally thought – “now why would you want to run a JVM in a VM, on a VM?” At first glance, it just seemed like an absolutely terrible idea from a performance standpoint. I doubt any of these solutions will ever match the performance of a JVM running on a bare metal installation of Linux. But, as I’ve shown above, running a Spring Boot application in a Docker container should have minimal performance impact. Certainly less of an impact than running in a VM. Which is exactly what you’re doing running applications in any cloud provider (see image one above).

Installing Docker

I’m not going to get into installing Docker on your OS. There is ample documentation about installing Docker on the internet. Going forward, I’m going to assume you have Docker installed. Since Docker is Linux based, my focus will be on Linux (RHEL / CentOS).

Spring Boot Example Application

For the purposes of this tutorial, let’s start with a simple Spring Boot Application. I’m going to use the completed application from my Mastering Thymeleaf course. This is a simple Spring Boot web application which is perfect for our needs.

If you want to follow this tutorial along step by step, head over to GitHub and checkout this Spring Boot project. Be sure to change to the branch “spring-boot-docker-start”.

Building a Spring Boot Docker Image

For us to run Spring Boot in a Docker container, we need to define a Docker image for it. Building Docker images is done through the use of Dockerfiles. A Dockerfile is basically a manifest of commands we will use to build and configure our Docker container. To configure our Docker image to run our Spring Boot application, we will want to:

  • Start with the latest CentOS image from Docker Hub.
  • Install and configure Oracle Java.
  • Install the Spring Boot artifact – our executable JAR file.
  • Run the Spring Boot application.

I’m using CentOS for its compatibility with RHEL, which is probably the most popular linux distribution used by enterprises. And Oracle’s Java, mainly for the same reason.

Create Our Dockerfile

In our Maven project, we need to create our Dockerfile. In /src/main/docker, create the file Dockerfile.

NOTE: As a Java developer you may be tempted to create the file as DockerFile. Don’t do this. The Maven Plugin we cover later won’t see your file if it’s CamelCase. I learned this lesson the hard way.

CentOS

We’ll start our Docker image off by using the CentOS image from Docker hub.

Dockerfile
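
```dockerfile
FROM centos
```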

Installing Oracle Java

The following lines in our Dockerfile will install wget into the image using the yum package installer, download the Oracle Java JDK from Oracle using wget, then configure Java on the machine.

Dockerfile
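
A sketch of this step; Oracle’s JDK download URLs are version specific and change over time, so the URL and paths below are placeholders:

```dockerfile
RUN yum install -y wget

# Download and install the Oracle JDK RPM. The license cookie header is
# required by Oracle; substitute a real version-specific URL for the
# placeholder below.
RUN wget --no-cookies --no-check-certificate \
    --header "Cookie: oraclelicense=accept-securebackup-cookie" \
    -O /tmp/jdk-8-linux-x64.rpm \
    "https://download.oracle.com/otn-pub/java/jdk/<version>/jdk-8-linux-x64.rpm" \
    && yum localinstall -y /tmp/jdk-8-linux-x64.rpm

ENV JAVA_HOME /usr/java/latest
```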

Installing the Spring Boot Executable Jar

In this section of the Dockerfile, we are:

  • Adding a /tmp volume. Docker will map this to /var/lib/docker on the host system. This is the directory Spring Boot will configure Tomcat to use as its working directory.
  • The ADD command adds the Spring Boot executable Jar into our Docker image.
  • The RUN command ‘touches’ the JAR to give it a modified date.
  • The ENTRYPOINT is what will run the jar file when the container is started.

I learned about these configuration settings from a post from the Pivotal team here.

Dockerfile
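
A sketch; the artifact name is illustrative, and the maven/ prefix assumes the Fabric8 plugin’s assembly (covered below) copies the build artifact into the image build context:

```dockerfile
VOLUME /tmp
ADD maven/springbootwebapp-0.0.1-SNAPSHOT.jar app.jar
RUN sh -c 'touch /app.jar'
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
```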

Complete Dockerfile

Here is the complete Dockerfile.

Dockerfile

Building the Docker Image Using Maven

Naturally, we could build our Docker image using docker itself. But this is not a typical use case for Spring developers. A typical use case for us would be to use Jenkins to generate the Docker image as part of a CI build. For this use case, we can use Maven to package the Spring Boot executable JAR, then have that build artifact copied into the Docker image.

There are actually several competing Maven plugins for Docker support. The guys at Spotify have a nice Maven / Docker plugin. In this example, I’m going to show you how to use the Fabric8 Docker plugin for Maven.

Fabric8

Of the Maven plugins for Docker, at the time of writing, Fabric8 seems to be the most robust. For this post, I’m only interested in building a Docker Image for our Spring Boot artifact. This is just scratching the surface of the capabilities of the Fabric8 Maven plugin. This plugin can be used to spool up Docker Images to use for your integration tests for CI builds. How cool is that!?!? But let’s learn to walk before we run!

Here is a typical configuration for the Fabric8 Maven plugin for Docker.

Fabric8 Maven Docker Plugin Configuration
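
A sketch of the plugin configuration; the plugin version and image name are illustrative:

```xml
<plugin>
    <groupId>io.fabric8</groupId>
    <artifactId>docker-maven-plugin</artifactId>
    <version>0.21.0</version>
    <configuration>
        <images>
            <image>
                <name>springframeworkguru/${project.artifactId}</name>
                <build>
                    <dockerFileDir>${project.basedir}/src/main/docker</dockerFileDir>
                    <assembly>
                        <!-- Copies the project artifact under maven/ in the build context -->
                        <descriptorRef>artifact</descriptorRef>
                    </assembly>
                </build>
            </image>
        </images>
    </configuration>
</plugin>
```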

If you’re following along with the tutorial, the complete Maven POM now is:

pom.xml

Building the Docker Image

To build the Docker image with our Spring Boot artifact run this command:
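
```bash
mvn clean package docker:build
```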

The ‘clean’ tells Maven to delete the target directory. While this step is technically optional, if you don’t use it, sooner or later you’re going to get bit in the ass by some weird issue. Maven will always compile your classes with the package command. If you’ve done some refactoring and changed class names or packages, without the ‘clean’ the old class files are left on the disk. And in the words of IBM – “Unpredictable results may occur”.

It is very important to run the package command with the docker:build command. You’ll encounter errors if you try to run these in two separate steps.

While the Docker image is building, you will see the following output in the console:

Docker images are built in layers. The CentOS image from Docker Hub is our first layer. Each command in our Dockerfile is another ‘layer’. Docker works by ‘caching’ these layers locally. I think of it as being somewhat like your local Maven repository under ~/.m2, where Maven will bring down Java artifacts once, then cache them for future use.

The first time you build this Docker image, it will take longer since all the layers are being downloaded or built. The next time we build it, the only layers that change are the one which adds the new Spring Boot artifact and all commands after it. The layers before the Spring Boot artifact aren’t changing, so the cached versions will be used in the Docker build.

Running the Spring Boot Docker Image

Docker Run Command

So far, we have not said anything about port mapping. This is actually done at run time. When we start the Docker container, in the run command, we will tell Docker how to map the ports. In our example we want to map port 8080 of the host machine to port 8080 of the container. This is done with the ‘-p’ parameter, followed with <host port>:<container port>. We also want to use the ‘-d’ parameter. This tells Docker to start the container in the background.

Here is the complete command to run our docker container:
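
The image name here is illustrative; use whatever name you configured in the Fabric8 plugin:

```bash
docker run -d -p 8080:8080 springframeworkguru/springbootwebapp
```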

This command will start the Docker container and echo the id of the started container.

Congratulations, your Spring Boot application is up and running!

You should now be able to access the application on port 8080 of your machine.

Working with Running Docker Containers

Viewing Running Docker Containers

To see all the containers running on your machine, use the following command:
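
```bash
docker ps
```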

View Log Output

Our running Docker containers are far from little black boxes. There’s a lot we can do with them. One common thing we want to do is see the log output. Easy enough. Use this command:
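
Substitute the container id echoed by the run command:

```bash
docker logs <container-id>
```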

Access a Running Docker Container

Need to ssh into a Docker container? Okay, technically this really isn’t SSH, but this command will give you a bash:
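
```bash
docker exec -it <container-id> bash
```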

Stopping the Docker Container

Shutting down our Docker container is easy. Just run this command:
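
```bash
docker stop <container-id>
```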

Ending Source Code

Just in case you’ve run into trouble, like always, I have a branch in GitHub with the complete working example. You can get the ending source code for this tutorial here on GitHub.

Conclusion

The default executable Jar artifact of Spring Boot is ideal for deploying Spring Boot applications in Docker. As I’ve shown here, launching a Spring Boot application in a Docker container is easy to do.

In terms of technology, Docker is still fairly young. At the time of writing, Docker is only about three years old. Yet, it is rapidly catching on. While Docker is widely used by the web giants, it is just starting to trickle down to Fortune 500 enterprises. At the time of writing, Docker is not available natively on OSX or Windows. Yet. Microsoft has committed to releasing a native version of Docker for Windows. Which is interesting. A lot of things are going on around Docker at Red Hat and Pivotal too.

Docker is a fundamental paradigm shift in the way we do things as Spring developers. I assure you, if you’re developing applications in the enterprise using the Spring Framework and have not used Docker, it’s not a matter of if, it is a when.

As a developer, Docker brings up some very cool opportunities. Need a Mongo database to work against? No problem, spool up a local Docker container. Need a virtual environment for your Jenkins CI builds? No problem.

I personally have only been working with Docker a short time. I am honestly excited about it. My thoughts on Docker – Now we’re cooking with gas!


It’s not uncommon for computers to need to communicate with each other. In the early days, this was done with simple string messages. Which was problematic. There was no standard language. XML evolved to address this. XML provides a very structured way of sharing data between systems. XML is so structured, many find it too restrictive. JSON is a popular alternative to XML. JSON offers a lighter and more forgiving syntax than XML.

JSON is a text-based data interchange format that is lightweight, language independent, and easy for humans to read and write. In the current enterprise, JSON is used for enterprise messaging, communicating with RESTful web services, and AJAX-based communications. JSON is also extensively used by NoSQL databases such as MongoDB, Oracle NoSQL Database, and Oracle Berkeley DB to store records as JSON documents. Traditional relational databases, such as PostgreSQL, are also constantly gaining more JSON capabilities. Oracle Database also supports JSON data natively with features such as transactions, indexing, declarative querying, and views.

In Java development, you will often need to read in JSON data, or provide JSON data as an output. You could of course do this on your own, or use an open source implementation. For Java developers, there are several options to choose from. Jackson is a very popular choice for processing JSON data in Java.

Maven Dependencies for Jackson

The Jackson library is composed of three components: Jackson Databind, Core, and Annotation. Jackson Databind has internal dependencies on Jackson Core and Annotation. Therefore, adding Jackson Databind to your Maven POM dependency list will include the other dependencies as well.
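
Something like the following; check Maven Central for the current version:

```xml
<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
    <version>2.9.8</version>
</dependency>
```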

Spring Boot and Jackson

The above dependency declaration will work for other Java projects, but in a Spring Boot application, you may encounter errors such as this.

Jackson Dependency Conflict Error - NoSuchMethod Error

The Spring Boot parent POM includes Jackson dependencies. When you include the version number, thus overriding the Spring Boot curated dependency versions you may encounter version conflicts.

The proper way for Jackson dependency declaration is to use the Spring Boot curated dependency by not including the version tag on the main Jackson library, like this.
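
```xml
<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
</dependency>
```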

NOTE: This problem is highly dependent on the version of Spring Boot you are using.

For more details on this issue, check out my post Jackson Dependency Issue in Spring Boot with Maven Build.

Reading JSON – Data Binding in Jackson

Data binding is a JSON processing model that allows for seamless conversion between JSON data and Java objects. With data binding, you create POJOs following JavaBeans convention with properties corresponding to the JSON data. The Jackson ObjectMapper is responsible for mapping the JSON data to the POJOs. To understand how the mapping happens, let’s create a JSON file representing data of an employee.

employee.json
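
A sketch of the file; the values are illustrative, but the shape matches the Address and Employee POJOs below:

```json
{
  "name": "Henry Smith",
  "age": 28,
  "address": {
    "street": "123 Main Street",
    "city": "Springfield",
    "zip": "12345"
  },
  "phoneNumbers": ["555-555-1234", "555-555-9999"]
}
```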

The preceding JSON is composed of several JSON objects with name-value pairs and a phoneNumbers array. Based on the JSON data, we’ll create two POJOs: Address and Employee. The Employee object will be composed of Address and will contain properties with getter and setter methods corresponding to the JSON constructs.

When Jackson maps JSON to POJOs, it inspects the setter methods. Jackson by default maps a key for the JSON field with the setter method name. For Example, Jackson will map the name JSON field with the setName() setter method in a POJO.

With these rules in mind, let’s write the POJOs.

Address.java
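
A sketch following JavaBeans conventions:

```java
public class Address {

    private String street;
    private String city;
    private String zip;

    public String getStreet() { return street; }
    public void setStreet(String street) { this.street = street; }

    public String getCity() { return city; }
    public void setCity(String city) { this.city = city; }

    public String getZip() { return zip; }
    public void setZip(String zip) { this.zip = zip; }
}
```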

Employee.java
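
A sketch; the Employee is composed of Address, and the setter names match the JSON keys so Jackson can bind them:

```java
import java.util.List;

public class Employee {

    private String name;
    private int age;
    private Address address;
    private List<String> phoneNumbers;

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public int getAge() { return age; }
    public void setAge(int age) { this.age = age; }

    public Address getAddress() { return address; }
    public void setAddress(Address address) { this.address = address; }

    public List<String> getPhoneNumbers() { return phoneNumbers; }
    public void setPhoneNumbers(List<String> phoneNumbers) { this.phoneNumbers = phoneNumbers; }
}
```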