If you are still using System.out to print debugging or diagnostic information in your application, it’s time to look for a more elegant and efficient solution in the form of a logging framework. Although there are lots of logging frameworks around for Java applications, Log4J is one of the most widely adopted because of the simplicity and flexibility it provides.

Note: Log4J version 1 was first released in 1999 and quickly became the most widely used logging framework ever. But due to some inherent architectural flaws, Apache announced the end of life of Log4J version 1 in August 2015 and encouraged users to upgrade to Log4J 2 – a framework that is far more reliable, faster, and much easier to develop and maintain. Log4J 2 is almost a completely new framework, with a different API and support for configuration files in several different formats. Therefore, from here onward, I will refer to the framework as Log4J 2.

Log4J 2 is an open source logging package distributed under the Apache Software License. The advantages it provides over System.out are monumental. Log4J 2 allows you to define different levels of importance, such as ERROR, WARN, INFO, and DEBUG, for log messages. Log4J 2 also allows you to define one or more destinations, such as console, file, database, and SMTP server, to send log messages to. And the great thing is that using Log4J 2, you can perform logging asynchronously.

Additionally, Log4J 2 allows you to control logging on a class-by-class basis. For example, one class of an application can redirect logs to the console while another to a file. By contrast, a programmer can only control System.out at the granularity of the entire application. If a programmer redirects System.out, the redirect happens for the whole application.

Another important feature of Log4J 2 is that it’s easy to enable or disable only certain types of log messages. For example, you can configure Log4J 2 to disable every debug message in production.

So how does Log4J 2 do all this? Through loggers, appenders, and layouts – the components of the Log4J 2 API. These components work together to give developers full control over how messages are logged and formatted, and where they are reported.


Loggers are the key objects in Log4J 2 that are responsible for capturing logging information. Loggers are stored in a namespace hierarchy and a root logger, an implementation of the Logger interface, sits at the top of the hierarchy. Logger names are case-sensitive and they follow the hierarchical naming rule.

You can retrieve the root logger by calling the LogManager.getRootLogger() method. For all other loggers, you can instantiate and retrieve them by calling LogManager.getLogger(String loggerName), passing the name of the desired logger as a parameter. Although you can specify any name for a logger, it’s recommended to name the logger with the fully qualified name of the class. In a large application with thousands of log statements, this makes it easy to identify the origin of a log message, as the log output bears the name of the generating logger. Since it is a common practice to name loggers after their owning class, Log4J 2 provides the overloaded convenience method LogManager.getLogger(). This method, by default, uses the fully qualified class name of the owning class.
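For example, a class can retrieve a logger named after itself like this (a minimal sketch; the class name is hypothetical):

```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class OrderService {
    // The no-argument getLogger() names the logger after the owning class
    private static final Logger logger = LogManager.getLogger();
}
```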

Loggers can be assigned levels in the following order.

Log Levels

As you can see in the figure above, TRACE is the lowest level, and the levels move up through DEBUG, INFO, WARN, and ERROR to FATAL, the highest level. What this means is that if you set the logger level to ERROR, then only the ERROR and FATAL level log messages will be displayed and the rest will be ignored.

In addition to the levels I mentioned, there are two special levels:

  • ALL: Turns on all levels.
  • OFF: Turns off all levels.

While developing on your local machine, it is common to set the log level to DEBUG. This will give you detailed log messages for your development use. On production, it’s typical to set the log level to ERROR. This avoids filling your logs with excessive debug information. And while logging is very efficient, there is still a cost.

In your application, once you have retrieved a logger, you call one of the printing methods debug(), info(), warn(), error(), fatal(), and log() on the logger to log messages. These methods are declared in the Logger interface, which all Log4J 2 loggers implement.


Once you capture logging information through a logger, you need to send it to an output destination. The output destination is called an appender, and it is attached to the logger. Log4J 2 provides appenders for console, files, GUI components, remote socket servers, JMS, NT Event Loggers, and remote UNIX Syslog daemons.

Appenders are inherited additively from the logger hierarchy. This means that if the console appender is attached to the root logger, all child loggers will inherently use the console appender. If you have a child logger named Foo with a file appender attached, then Foo will use both the console and file appenders, unless you explicitly ask it not to by setting the additivity attribute to false.
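In an XML configuration file, this could look something like the following sketch (the appender reference names are placeholders of my own, not from an actual configuration):

```xml
<Loggers>
    <!-- Foo logs to its own file appender only; additivity="false"
         stops events from also reaching the root logger's console appender -->
    <Logger name="Foo" level="debug" additivity="false">
        <AppenderRef ref="MyFile"/>
    </Logger>
    <Root level="error">
        <AppenderRef ref="MyConsole"/>
    </Root>
</Loggers>
```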


In addition to specifying your preferred output destination, you can also specify the format of log messages. You can do so by associating a layout with the appender. Some key layouts that Log4J 2 provides are PatternLayout, HtmlLayout, JsonLayout, and XmlLayout. If you want to format logging data in an application-specific way, you can create your own layout class extending the abstract AbstractStringLayout class – the base class for all Log4J 2 layouts that result in a String.

Using Log4J 2

Let’s jump in and create a trivial application that uses Log4J 2 to start logging. For the application, I have used Spring Boot and started out with a Spring Boot starter POM. If you are new to Spring Boot, you can start with my introductory post on Spring Boot here.

As the Spring Boot starter POM uses Logback for logging, you need to exclude it and add the Log4J 2 dependencies.

Here is the code of the POM file to use Log4J 2 in a Spring Boot application.
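Since the original listing is not shown here, a minimal sketch of the relevant dependency section might look like this (versions are managed by the Spring Boot parent POM):

```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <exclusions>
        <!-- Exclude the default Logback-based logging starter -->
        <exclusion>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-logging</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<!-- Bring in Log4J 2 instead -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-log4j2</artifactId>
</dependency>
```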

We can now start logging messages in the classes of the program. Let’s write a class for that.
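A minimal sketch of such a class might look like this (the class name and message texts are my own):

```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class Log4J2Demo {
    // getLogger() without arguments names the logger after this class
    private static final Logger logger = LogManager.getLogger();

    public void performSomeTask() {
        logger.debug("This is a debug message");
        logger.info("This is an info message");
        logger.warn("This is a warn message");
        logger.error("This is an error message");
        logger.fatal("This is a fatal message");
    }
}
```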


In the class we wrote above, we retrieved a logger through a call to getLogger(). We then called the various printing methods on the logger.

Now let’s write a test class.
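Something along these lines, assuming JUnit 4 on the classpath and a class (here called Log4J2Demo, a name of my choosing) with a method that emits the log messages:

```java
import org.junit.Test;

public class Log4J2DemoTest {

    @Test
    public void testPerformSomeTask() {
        new Log4J2Demo().performSomeTask();
    }
}
```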


On running the test class, the output in the IntelliJ console is this.

Log4J 2 Output in IntelliJ

You may have noticed that I did not specify an appender or layout. I did not configure either one, and Log4J 2 rightfully pointed that out with an error message, as shown in the figure above. Rather, I relied upon defaults inherited from the Log4J 2 root logger. The Log4J 2 root logger is associated with the console appender (ConsoleAppender class) by default, and our logger inherited it. Therefore, the log messages were sent to the IntelliJ console. If you notice, only the error and fatal messages got logged. This happened because, by default, Log4J 2 assigns the root logger the ERROR level, and without external configuration, messages of the lower levels (WARN, INFO, and DEBUG) were not sent to the destination. Also, the root logger by default uses PatternLayout, which our logger inherited and used.


In this post, I have only scratched the surface of Log4J 2. You will realize the power of Log4J 2 when you start working with external configuration files. These configuration files can be .properties, XML, YAML, or JSON files containing Log4J 2 configuration options. This means you can set and change the configuration options without having to modify and recompile the application. In upcoming posts, I will discuss using external configuration files to help you explore what a powerful logging tool Log4J 2 is.

There are a number of logging solutions available for Java. Each has its own merits and drawbacks. Yet all of them are far better options than using System.out.println()! Printing to the console simply is not an enterprise-class solution. In the enterprise, your log files often need to be secured, and they are often indexed by monitoring tools such as Splunk. Professional Java developers will use a logging framework such as Log4J 2.



The latest TIOBE index has the Java language moving strongly into the #1 spot among programming languages for January 2016. If you’re not familiar with the TIOBE Index, it’s an index that looks at searches on the major search engines, blogs, forums, and YouTube. (Did you know YouTube is now the second biggest search engine?) The “Popularity of Programming Language” index, which uses a slightly different approach, also has Java remaining at the #1 position for January 2016. Both indexes give Java over 20% of the market.

The Java Language Into the Future

I’ve read a lot of articles predicting the demise of the Java language. I don’t see that happening anytime soon. The Java language continues to evolve with the times. Java 7 was a fairly boring release. Java 8, however, has a number of exciting features. Java 8 lambdas are a really neat new feature. It’s a feature that is long overdue. But I have to give kudos to the Java team. They did a really nice job of implementing lambdas.

It’s these new features that allow Java to evolve and remain relevant among modern programming languages. Functional programming has been a big buzz in the last few years. Guess what: with Java 8 and lambdas, you can do functional programming in Java now.

The JVM is the crown jewel of the Java community. With each release, the JVM becomes more stable and faster. Early releases of Java were dreadfully slow. Today, Java often approaches the performance of native code.

Another fun trend in the Java community is the rise of alternative JVM languages. That same JVM runs more than just Java. My personal favorite alternative JVM languages are Groovy and Scala. Both are trending nicely in the programming indexes. And you’re seeing greater support for Groovy and Scala in Spring too. (Expect to see more posts on both in 2016!) If you account for these two alternative JVM languages, you can see how Java is truly dominating the Microsoft languages in the marketplace.

It’s going to be interesting to see what the future holds. I’m personally interested in the Swift programming language. Could Swift someday dethrone Java from the #1 spot? I think that’s going to depend on how the Swift open source community develops. I thought about building an enterprise class application in Swift. Is there a DI / IoC framework like Spring for Swift? No, not yet. An ORM like Hibernate for Swift? No, not yet. An Enterprise Integration framework like Spring Integration or Apache Camel for Swift? No, not yet. I find languages like Swift and Go very interesting, but they just don’t have the open source ecosystem which Java has. If a language is going to dethrone Java from the top, it’s going to need a thriving open source community behind it.

Like Java, all the popular open source projects continue to evolve with the language. So any calls for the demise of Java are premature. The future for the Java language is bright. The future for the Java community is bright!

java duke in shades


I woke this morning to an email linking to a blog post warning me that if I’m using Java, this is the next heartbleed bug I need to be worried about. That’s a pretty serious headline, so I checked out the post.

Heartbleed Bug

For those not familiar with the heartbleed bug, it was a pretty significant vulnerability in OpenSSL that allowed attackers to read sensitive data from a server’s memory. OpenSSL is a popular open source library, used in many flavors of Linux, and used by web servers to support https communications. Thousands of applications were affected. Hundreds of thousands (perhaps millions) of web sites were left vulnerable. The heartbleed bug was serious business: internet facing machines could be compromised, since the vulnerability could be exploited directly from the internet.


Java Serialization Exploits

Serialization in Java has long been known as a dance with the devil. In a nutshell, you’re taking a Java object, saving that object and its current state in binary format, transmitting the binary content to another server (and JVM), then restoring the object. The Java “heartbleed” exploit works by manipulating the binary content of the serialized object, then sending it to the receiving JVM. The unsuspecting JVM can then be tricked into executing code injected by the attacker.
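As a concrete illustration of that round trip, here is a minimal sketch that serializes an object to bytes and restores it within the same JVM (the Message class is a made-up example of my own):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SerializationDemo {

    // A made-up serializable value object for illustration
    public static class Message implements Serializable {
        private static final long serialVersionUID = 1L;
        public final String text;
        public Message(String text) { this.text = text; }
    }

    // Serialize the object to its binary form, then restore it
    public static Message roundTrip(Message original) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                out.writeObject(original);            // object and its state -> bytes
            }
            try (ObjectInputStream in = new ObjectInputStream(
                    new ByteArrayInputStream(bytes.toByteArray()))) {
                return (Message) in.readObject();     // bytes -> restored object
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        Message restored = roundTrip(new Message("hello"));
        System.out.println(restored.text);
    }
}
```

In the exploit scenario, the attacker tampers with those bytes in transit; the readObject() call on the receiving JVM is what can be tricked into executing injected code.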

A popular library from Apache is vulnerable to being compromised. The Apache Commons Collections library is used in a number of popular Java application servers and applications. This library is of particular concern because code is executed when the library deserializes objects. But this is just one feature of the library. A large part of its usage is around Java collections. You’ll find Apache Commons Collections in the dependencies of many Spring projects, but the vulnerable InvokerTransformer class is not used by Spring.

The only internet facing threat I know of is the practice of storing Serialized Java objects as cookies on the client. Let me be blunt, if you’re not encrypting the binary cookie data before you send it to the client, you’re an idiot and deserve to get hacked.

The rest of the exploits I looked at required access to the network behind the internet firewall. This is a significant reduction in the threat. If an attacker is inside your internet firewall, you’ve already been compromised.

Okay, let’s say the attacker is inside, now the server needs to be listening for, and accepting serialized objects on a network port. Fairly common for RMI. And yes, this can be exploited.

Java Serialization Exploits in the Modern Enterprise

But now let’s look at the modern enterprise. Compliance with regulations such as PCI, SOX, and SAS-70 has driven a number of security measures into the enterprise.

For example, PCI compliance favors encrypting data in flight. Simply encrypting the communication channel neuters this exploit.

But compliance doesn’t stop there. In the old days, I could get on the corporate network and access production systems. As long as I was on the network, I could get to the servers. Even ‘sniff’ for traffic going to said production servers.

Not anymore. Modern production systems live in private networks. Often the networks are layered. They are firewalled to minimal access. If you have a memcache server, only the associated application servers will be allowed to communicate with it. Disgruntled Sally hacker on her PC in the call center will not have network access to the production servers.

Keep in mind, I’m referring to large enterprises. You still see a lot more security hardening in the financial services sector than you do in other industries, such as retail. But regulatory compliance is driving even retailers to adopt better IT security practices. (Target hack anyone?) A small business is not going to have the depth on the IT bench as a large business, but it’s still no excuse to not follow best practices.

Java Serialization Risks and the Spring Framework

As I mentioned above, Spring does not use Apache Commons Collections, and does not use the Apache InvokerTransformer class. But this vulnerability is specific to Java serialization, and not limited to the Apache InvokerTransformer class. The Spring team did identify a similar risk in the Spring Framework code. This has been fixed in Spring versions 4.1.9 and 4.2.3. You can read details about the fix here.

Java Heartbleed Bug?

But is this the Java “heartbleed” bug? No, not even close. The risk is nowhere close to the original heartbleed bug. Let me be even more blunt. If you’re not following best practices in your coding, and you’re not adopting IT security best practices popularized by PCI, SOX, and SAS-70 compliance – you’re a lazy moron and deserve to get hacked.

People referring to this as Java’s heartbleed bug are overly alarmist. It is a vulnerability we need to be aware of. Good coding practices and IT security best practices significantly mitigate the risks in Java serialization.

In Java development, you’re also seeing a move away from using RMI and serialization in general. For RMI, the trend is to use web services, in which the object is converted to XML or JSON. For JMS, you often see XML or JSON favored over serialized objects. And JMX traffic should be secured.

This is not Java’s heartbleed bug. Millions of websites are not at risk. The risk from internet-exposed machines is small. Risks around Java serialization are very narrow. There are very specific use cases you need to be concerned with. Relax, the sky isn’t falling. Whew.

keep calm and use java


ArrayList is one of the most commonly used collection classes of the Java Collection Framework because of the functionality and flexibility it provides. ArrayList is a List implementation that internally uses a dynamic array to store elements. Therefore, an ArrayList can dynamically grow and shrink as you add and remove elements to and from it. It is likely that you have used ArrayList, therefore I will skip the basics. If you are not familiar with ArrayList yet, you can go through its API documentation here, which is very descriptive and makes it easy to understand the basic operations on ArrayList.

In this post, I will discuss one of the most important operations on ArrayList that you will most likely need to implement during enterprise application development: sorting the elements of an ArrayList.

Sorting an ArrayList of String Objects

Consider an ArrayList that stores country names as String objects. To sort the ArrayList, you simply call the Collections.sort() method, passing the ArrayList object populated with country names. This method will sort the elements (country names) of the ArrayList using natural ordering (alphabetically, in ascending order). Let’s write some code for it.
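A minimal sketch of such a class might look like this (the class name is my own; the method names follow the description in this post):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class CountrySorter {
    private final List<String> countryList = new ArrayList<>();

    public CountrySorter(List<String> countries) {
        countryList.addAll(countries);
    }

    public List<String> getArrayList() {
        return countryList;
    }

    // Natural ordering: alphabetical, ascending
    public List<String> sortAscending() {
        Collections.sort(countryList);
        return countryList;
    }

    // Reverse of natural ordering, via the Comparator from Collections.reverseOrder()
    public List<String> sortDescending() {
        Collections.sort(countryList, Collections.reverseOrder());
        return countryList;
    }
}
```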


In the class above, we initialized an ArrayList object in the constructor. In the sortAscending() method, we called the Collections.sort() method passing the initialized ArrayList object and returned the sorted ArrayList. In the sortDescending() method we called the overloaded Collections.sort() method to sort the elements in descending order. This version of Collections.sort() accepts the ArrayList object as the first parameter and a Comparator object that the Collections.reverseOrder() method returns as the second parameter. We will come to Comparator a bit later. To test the sorting functionality, we will write some test code.
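An equivalent standalone sketch of such test code might look like this (it calls Collections.sort() directly so it runs on its own; the five country names are placeholders of my choosing):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class CountrySortTest {
    // Sort a copy of the list in natural (ascending) order
    static List<String> ascending(List<String> in) {
        List<String> copy = new ArrayList<>(in);
        Collections.sort(copy);
        return copy;
    }

    // Sort a copy of the list in reverse (descending) order
    static List<String> descending(List<String> in) {
        List<String> copy = new ArrayList<>(in);
        Collections.sort(copy, Collections.reverseOrder());
        return copy;
    }

    public static void main(String[] args) {
        List<String> countries =
                Arrays.asList("India", "China", "Japan", "France", "Brazil");
        System.out.println("Original:   " + countries);
        System.out.println("Ascending:  " + ascending(countries));
        System.out.println("Descending: " + descending(countries));
    }
}
```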


In the test code above, we created an ArrayList object and added five String objects that represent the names of five countries to it. We then called the getArrayList(), sortAscending(), and sortDescending() methods and printed out the ArrayList objects that the methods return.

The output is this.

At this point, it might appear that sorting elements of an ArrayList is very simple. We only need to call the Collections.sort() method passing the ArrayList object whose elements needs to be sorted. But, there is more to sorting ArrayLists as you encounter additional scenarios.

The Collections.sort() method sorts the elements of an ArrayList, or of any other List implementation, provided the elements are comparable. What this means programmatically is that the classes of the elements need to implement the Comparable interface of the java.lang package. As the String class implements the Comparable interface, we were able to sort the ArrayList of country names. Some other standard Java classes that implement Comparable include the primitive wrapper classes, such as Integer, Short, Double, Float, and Boolean. BigInteger, BigDecimal, File, and Date are also examples of classes that implement Comparable.

Sorting an ArrayList using Comparable

Comparable is an interface with a single compareTo() method. An object of a class implementing Comparable is capable of comparing itself with another object of the same type. The class implementing Comparable needs to override the compareTo() method. This method accepts an object of the same type and implements the logic for comparing this object with the one passed to compareTo(). The compareTo() method returns the comparison result as an integer that has the following meanings:

  • A positive value indicates that this object is greater than the object passed to compareTo().
  • A negative value indicates that this object is less than the object passed to compareTo().
  • The value zero indicates that both the objects are equal.

Let’s take an example of a JobCandidate class whose objects we want to store in an ArrayList and later sort. The JobCandidate class has three fields: name and gender of type String, and age, which is an integer. We want to sort the JobCandidate objects stored in the ArrayList based on the age field. To do so, we will write the JobCandidate class to implement Comparable and override the compareTo() method.

The code of the JobCandidate class is this.
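A sketch of the class might look like this (using Integer.compare() for the age comparison to avoid the overflow-prone subtraction shortcut):

```java
public class JobCandidate implements Comparable<JobCandidate> {
    private final String name;
    private final String gender;
    private final int age;

    public JobCandidate(String name, String gender, int age) {
        this.name = name;
        this.gender = gender;
        this.age = age;
    }

    public String getName() { return name; }
    public String getGender() { return gender; }
    public int getAge() { return age; }

    // Compare by age; Integer.compare() never overflows
    @Override
    public int compareTo(JobCandidate candidate) {
        return Integer.compare(this.age, candidate.getAge());
    }

    @Override
    public String toString() {
        return name + " (" + gender + ", " + age + ")";
    }
}
```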


In the overridden compareTo() method of the JobCandidate class above, we implemented the comparison logic based on the age field. I have seen many programmers resorting to the shortcut version of returning the comparison result as return (this.getAge() - candidate.getAge());. Although using this return statement might appear tempting and will not affect our example, my advice is to stay away from it. Imagine the result of comparing integer values where the subtraction overflows, such as a large positive value compared with a large negative one. Such bugs will make your application behave erratically and, being subtle, are extremely difficult to detect, especially in large enterprise applications. Next, we’ll write a helper class that will sort ArrayList objects containing JobCandidate elements for clients.
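A sketch of the helper class might look like this (a trimmed copy of JobCandidate is included so the listing compiles on its own):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Trimmed copy of the JobCandidate class so this sketch is self-contained
class JobCandidate implements Comparable<JobCandidate> {
    private final String name;
    private final String gender;
    private final int age;

    JobCandidate(String name, String gender, int age) {
        this.name = name;
        this.gender = gender;
        this.age = age;
    }

    public String getName() { return name; }
    public int getAge() { return age; }

    @Override
    public int compareTo(JobCandidate candidate) {
        return Integer.compare(this.age, candidate.getAge());
    }
}

public class JobCandidateSorter {
    private final List<JobCandidate> jobCandidateList = new ArrayList<>();

    public JobCandidateSorter(List<JobCandidate> candidates) {
        jobCandidateList.addAll(candidates);
    }

    // Sorts by the natural ordering defined in JobCandidate.compareTo() (age, ascending)
    public List<JobCandidate> getSortedJobCandidateByAge() {
        Collections.sort(jobCandidateList);
        return jobCandidateList;
    }
}
```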


In the JobCandidateSorter class, we initialized an ArrayList object that the client will pass through the constructor while instantiating JobCandidateSorter. We then wrote the getSortedJobCandidateByAge() method. In this method, we called Collections.sort(), passing the initialized ArrayList. Finally, we returned the sorted ArrayList.

Next, we will write a test class to test our code.


In the test class above, we created four JobCandidate objects and added them to an ArrayList. We then instantiated the JobCandidateSorter class passing our ArrayList to its constructor. Finally, we called the getSortedJobCandidateByAge() method of JobCandidateSorter and printed out the sorted ArrayList that the method returns. The output on running the test is this.

Sorting an ArrayList using Comparable is a common approach. But you need to be aware of certain constraints. The class whose objects you want to sort must implement Comparable and override the compareTo() method. This essentially means you can only compare the objects based on one field (which was age in our example). What if the requirements state you need to be able to sort JobCandidate objects by name and also by age? Comparable is not the solution. In addition, the comparison logic is part of the class whose objects need to be compared, which eliminates any chance of reusing the comparison logic. Java addresses such comparison requirements used in sorting by providing the Comparator interface in the java.util package.

Sorting an ArrayList using Comparator

The Comparator interface, similar to the Comparable interface, provides a single comparison method named compare(). However, unlike the compareTo() method of Comparable, the compare() method takes two different objects of the same type for comparison.
We will use Comparator to sort objects of the same JobCandidate class we used earlier, but with a few differences. We will allow sorting JobCandidate objects both by name and by age by implementing Comparator as anonymous inner classes.

Here is the code of the JobCandidate class using Comparator.
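A sketch of the reworked class might look like this (the comparator field names are my own, and line numbers cited in the surrounding text refer to the original listing):

```java
import java.util.Comparator;

public class JobCandidate {
    private final String name;
    private final String gender;
    private final int age;

    public JobCandidate(String name, String gender, int age) {
        this.name = name;
        this.gender = gender;
        this.age = age;
    }

    public String getName() { return name; }
    public String getGender() { return gender; }
    public int getAge() { return age; }

    // Anonymous inner class: sorts by age, descending
    public static final Comparator<JobCandidate> descendingAgeComparator =
            new Comparator<JobCandidate>() {
                @Override
                public int compare(JobCandidate c1, JobCandidate c2) {
                    return Integer.compare(c2.getAge(), c1.getAge());
                }
            };

    // Anonymous inner class: sorts by name, ascending
    public static final Comparator<JobCandidate> ascendingNameComparator =
            new Comparator<JobCandidate>() {
                @Override
                public int compare(JobCandidate c1, JobCandidate c2) {
                    return c1.getName().compareTo(c2.getName());
                }
            };

    @Override
    public String toString() {
        return name + " (" + gender + ", " + age + ")";
    }
}
```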


In the class above, from Line 29 – Line 35, we wrote an anonymous class and implemented the compare() method that will allow sorting JobCandidate objects by age in descending order. From Line 37 – Line 42, we again wrote an anonymous class and implemented the compare() method that will allow sorting JobCandidate objects by name in ascending order. We will now write a class that will sort the elements of the ArrayList for clients.


In the class above, we wrote the getSortedJobCandidateByAge() method. In this method we called the overloaded version of Collections.sort() passing the ArrayList object to be sorted and the Comparator object that compares age. In the getSortedJobCandidateByName() method, we again called the overloaded version of Collections.sort() passing the ArrayList object to be sorted and the Comparator object to compare names.

Let’s write a test class to test our code.


In the test class, we populated JobCandidate objects in an ArrayList and created a JobCandidateSorter object in the JUnit setup() method annotated with @Before. If you are new to JUnit, you can refer to my post covering JUnit annotations (part of a series on unit testing with JUnit) here. In the testGetSortedJobCandidateByAge() test method, we called the getSortedJobCandidateByAge() method and printed out the sorted ArrayList that the method returns. In the testGetSortedJobCandidateByName() test method, we called the getSortedJobCandidateByName() method and printed out the sorted ArrayList that the method returns. The output of the test is this.


In this post, we looked at two approaches to sorting the elements of an ArrayList: one using Comparable and the other using Comparator. Which approach to choose has always been a cause of confusion for programmers. What you should essentially remember is that a Comparable object can say “I can compare myself with another object”, while a Comparator object can say “I can compare two different objects”. You cannot say that one interface is better than the other. The interface you choose depends upon the functionality you need to achieve.


As a Java programmer, you will often need to work with date and time. In this post, we will learn how to get the current time of the day in Java applications. In Java, there are several ways to do this, and the primary classes that programmers typically use are the Date and Calendar classes. You can use both classes to calculate the current time of the day and then use the SimpleDateFormat class to format the time according to your requirements.

Using the Date and Calendar Classes

To calculate the current time, you create a Date object, specify a format to present the current time of the day, and finally use the SimpleDateFormat class to format the time as you specified.

Here is the code.
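A sketch of the code might look like this (the class name is my own; Locale.US is pinned so the am/pm marker renders in English regardless of the machine’s default locale):

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;

public class DateTimeDemo {
    // hh = 12-hour clock, mm = minutes, ss = seconds, a = am/pm marker
    static String currentTime() {
        Date now = new Date();
        SimpleDateFormat formatter = new SimpleDateFormat("hh:mm:ss a", Locale.US);
        return formatter.format(now);
    }

    public static void main(String[] args) {
        System.out.println("Current time: " + currentTime());
    }
}
```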

In the code above, we created a Date object and specified the format for the time. In the format string, the lower case hh represents a 12-hour format, while mm and ss represent minutes in the hour and seconds in the minute, respectively. Also, a in the format string is the am/pm marker. We then created a SimpleDateFormat object initialized with the format string and called the format() method on it, passing the Date object. The format() method returns the current time in the format we specified.

Using the Calendar class is equally simple. Here is the code.
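A sketch of the Calendar version (again with a class name of my own):

```java
import java.text.SimpleDateFormat;
import java.util.Calendar;
import java.util.Date;
import java.util.Locale;

public class CalendarTimeDemo {
    // HH = 24-hour clock (00-23)
    static String currentTime() {
        Date now = Calendar.getInstance().getTime();
        SimpleDateFormat formatter = new SimpleDateFormat("HH:mm:ss", Locale.US);
        return formatter.format(now);
    }

    public static void main(String[] args) {
        System.out.println("Current time (24h): " + currentTime());
    }
}
```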

In the code above, we called the getInstance() factory method of the Calendar class to obtain a Calendar object and then called the getTime() method on it. This method returns a Date object. The remaining code is the same as I explained in the preceding Date example. The only difference you should observe is the format string. Here we specified the hour as HH (upper case), which will give us the time in 24-hour format.

Here is the complete code.


We will next write a test class.



The output is this.

By now, you might be thinking that calculating the current time in Java couldn’t be easier. But the Date and Calendar classes have some inherent disadvantages. When you start developing enterprise applications, the disadvantages become more obvious. Primarily, both classes are not thread-safe, which might lead to concurrency issues. Also, both classes can represent time only to millisecond precision. In mission-critical enterprise applications, it is often required to represent time to nanosecond precision.
In addition, even though the Date class is not itself deprecated, due to poor API design most of its constructors and methods are now deprecated. Because of such issues, enterprise application developers turned toward third-party specialized date and time APIs, particularly Joda-Time.

Using java.time

Oracle’s Java specification team realized the limitations of the built-in date and time handling API and, to provide a more efficient API, introduced the java.time package in Java SE 8. This API was developed jointly by Oracle and Stephen Colebourne, the author of Joda-Time, and the new java.time API “is inspired by Joda-Time”.

Calculating Time of the Default Time Zone

The java.time package has several classes to work with date and time. Let’s start with some code to calculate the current time of the default time zone.
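A minimal sketch (class name mine):

```java
import java.time.LocalTime;

public class LocalTimeDemo {
    public static void main(String[] args) {
        // Current time in the default time zone, to nanosecond precision
        LocalTime currentTime = LocalTime.now();
        System.out.println("The current time is: " + currentTime);
    }
}
```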

As you can observe, the code is pretty straightforward. We called the now() method of LocalTime, which returns a LocalTime object. This is an immutable object that represents a time, without a date or time zone, in the ISO-8601 calendar system.

The output of the code is.

In the output above, observe that the time is represented to nanosecond precision.

Calculating Time of a Different Time Zone

The java.time package provides classes that you can use to get the current time of a different time zone. As an example, if you are located in your default time zone and you want the current time of the day of a different time zone, say America/Los_Angeles, you can use the following classes:

  • ZoneId: Represents a particular time zone.
  • DateTimeFormatter: Similar to the SimpleDateFormat class we discussed earlier, this class is a formatter for parsing and printing date-time objects.

The code to calculate the current time of a different time zone is this.
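A sketch of that code might look like this (class and method names are my own):

```java
import java.time.LocalTime;
import java.time.ZoneId;
import java.time.format.DateTimeFormatter;

public class ZoneTimeDemo {
    static String currentTimeIn(String zoneName) {
        ZoneId zone = ZoneId.of(zoneName);               // e.g. "America/Los_Angeles"
        LocalTime now = LocalTime.now(zone);             // current time in that zone
        DateTimeFormatter formatter = DateTimeFormatter.ofPattern("HH:mm:ss");
        return formatter.format(now);                    // 24-hour format
    }

    public static void main(String[] args) {
        System.out.println("Los Angeles time: " + currentTimeIn("America/Los_Angeles"));
    }
}
```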

In Line 4 – Line 5, we created a ZoneId object to represent the America/Los_Angeles time zone and then obtained the current time of the time zone by calling the now() method passing the ZoneId. The LocalTime object that the now() method returns holds the current time of the day of America/Los_Angeles. From Line 6 – Line 8, we formatted the time into 24 hour format using DateTimeFormatter and printed out the formatted time.

The output of the code is.

Calculating Time of a Different Time Offset

Time offset is the amount of time in hours and minutes added to or subtracted from Coordinated Universal Time (UTC). The java.time package provides the ZoneOffset class to calculate the current time of the day of a different time offset.

The code to use the ZoneOffset class is this.
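A sketch of such code (class and method names are my own; Locale.US is pinned for the am/pm marker):

```java
import java.time.LocalTime;
import java.time.ZoneId;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;
import java.util.Locale;

public class OffsetTimeDemo {
    static String currentTimeAtOffset(String offset) {
        ZoneOffset zoneOffset = ZoneOffset.of(offset);        // e.g. "-08:00"
        ZoneId zone = ZoneId.ofOffset("UTC", zoneOffset);     // ZoneId from a fixed offset
        LocalTime now = LocalTime.now(zone);
        DateTimeFormatter formatter =
                DateTimeFormatter.ofPattern("hh:mm:ss a", Locale.US);
        return formatter.format(now);                         // 12-hour format with am/pm
    }

    public static void main(String[] args) {
        System.out.println("Time at UTC-08:00: " + currentTimeAtOffset("-08:00"));
    }
}
```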

In the code above, we created a ZoneOffset object in Line 4 to represent the -8:00 time offset. In Line 5, we created a ZoneId object initialized with the ZoneOffset object. The remaining code from Line 6 to Line 9 uses a LocalTime object initialized with the ZoneId and a DateTimeFormatter object to format the time in the 12-hour format with the am/pm marker.

The output of the preceding code is.

The complete code of the snippets we discussed is this.

Let’s write a test class to see the code in action.

The output I got on running the test using Maven is shown below, which obviously will differ based on your current time and time zone.


In terms of working with time in Java, this post has only scratched the surface. The java.time package has an extensive set of classes that simplify working with date and time. However, you will also encounter the Date and Calendar classes and the Joda-Time API when you work on legacy applications. But, with the powerful features built into java.time, you can easily convert your legacy date and time calculation code to java.time. Go ahead and explore the java.time API, which is very well documented and easy to use.

Keep in mind that when you're using the local time options, Java goes through the JVM to talk to whatever machine it is running on. This is how Java determines the time and the locale of time operations. Thus, your code can produce different results when running on different machines. As a word of caution, when you're working on small applications, often you won't need to worry about the time zone. But in larger applications, you will need to make your code aware of the time zone. Many developers overlook the time zone component, then run into problems later as their application grows.



I remember in the late 90s and early 2000s, all the buzz in programming was about XML. All the ‘experts’ said you had to be using XML for data exchange. XML does have many benefits, but in programming today, XML is often viewed as ‘old school’ or too restrictive. XML sure can be finicky to work with. Many developers, when given a choice (including myself), will use JSON over XML. JSON is much more forgiving than XML, which makes it simpler to work with.

While XML may be old school, it can still be good to use, even when you’re building JSON-based RESTful web services.

Restful Web Services in Spring

I’m not going to cover building Restful Web Services in Spring in this post. That’s a future topic. But when you’re using Spring MVC to develop Restful Web Services, you can set up a lot of flexibility. I typically design my web services so the client can use JSON or XML. The standards support this, and it’s fairly easy to do.

Many of the examples you will see on the internet start with JAXB annotated Java POJOs. This works, but I also feel it is a mistake. The ‘contract’ for your web services is now a Java class. That may work for you, but it’s not portable. What if one of your clients is writing in PHP? A JAXB annotated Java class is useless to them.

XML Schema

XML Schema is a specification for describing XML documents. Think of it as strong typing for an XML document. You can specify the properties: whether they can be null, their data types, whether they can be a list, and so on. The capabilities of XML Schema are very robust.

A really cool feature of JAXB is that you can generate JAXB annotated Java POJOs from an XML Schema document. When you do this, you have a portable contract for the data types of your web services. Your ‘contract’ is no longer tied to the Java programming language. You could give the XML Schema to someone building a client for your web service in another programming language, like C#, and they could use the XML Schema to generate code for the client.

XML Schema is a widely accepted standard that is not tied to a specific programming language. By using XML schema and providing it to your clients you make your Restful Web Service easier to consume. Your clients have a standardised specification of exactly what they need to send.

JAXB Generated Classes for Restful Web Services

In this post, I’m going to show you how to setup a Maven project to create a Jar file of Java classes generated by JAXB from an XML Schema.

Generating a Maven Project in IntelliJ

For the purposes of this example, I’m going to use IntelliJ to create a Maven project. In real use, you’ll want to set this up as either an independent Maven project or a Maven module. This will allow the JAXB generated classes to be bundled in a JAR file and reused in other projects or modules.

1. Create a new project in IntelliJ.

Create Maven Project in IntelliJ

2. Set the GroupId and ArtifactId in the New Project dialog of IntelliJ.

New Project Dialog in IntelliJ

3. Select where to store the project on your drive.

path location in IntelliJ

4. IntelliJ will ask you to verify the creation of a new directory. Click OK.

Directory Does Not Exist dialog in IntelliJ

5. Depending on your settings in IntelliJ, you may be asked to import changes. I often keep this feature off when dealing with large, complex Maven projects for performance reasons. If you see this dialog, click on ‘Import Changes’.

import Maven changes into IntelliJ

6. At this point you’ve created a new Maven project in IntelliJ. You can see the Maven standard directory structure has been created.

Maven project in IntelliJ


Create the XML Schema

The first file we will create is the XML Schema we will be using. Let’s say we’re creating a Web Service to add a product, and need a create product command object.

In our XML Schema, we are creating a Product type and a CreateProductRequest type, which extends Product and adds a field for the API key. This file is placed in the folder /src/main/resources, the default location for XML Schema files.
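The schema itself is not reproduced above; a sketch consistent with the description (the element names, types, and namespace are assumptions) might look like this:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           xmlns="http://example.com/products"
           targetNamespace="http://example.com/products"
           elementFormDefault="qualified">

    <!-- The Product type -->
    <xs:complexType name="Product">
        <xs:sequence>
            <xs:element name="description" type="xs:string"/>
            <xs:element name="price" type="xs:decimal"/>
        </xs:sequence>
    </xs:complexType>

    <!-- CreateProductRequest extends Product and adds the API key -->
    <xs:complexType name="CreateProductRequest">
        <xs:complexContent>
            <xs:extension base="Product">
                <xs:sequence>
                    <xs:element name="apiKey" type="xs:string"/>
                </xs:sequence>
            </xs:extension>
        </xs:complexContent>
    </xs:complexType>

    <xs:element name="createProductRequest" type="CreateProductRequest"/>
</xs:schema>
```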


Configure Maven

Creating the project in IntelliJ gave us a very basic Maven POM file. Here is the Maven POM created for us.

Maven Dependencies

We need to add three dependencies to our Maven POM. The first is the JAXB API, the second is the JAXB implementation, and the third is the Maven JAXB plugin.
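A sketch of the first two dependency entries (the version numbers shown are assumptions; use the versions current for your project):

```xml
<dependencies>
    <!-- The JAXB API -->
    <dependency>
        <groupId>javax.xml.bind</groupId>
        <artifactId>jaxb-api</artifactId>
        <version>2.3.1</version>
    </dependency>
    <!-- The JAXB reference implementation -->
    <dependency>
        <groupId>org.glassfish.jaxb</groupId>
        <artifactId>jaxb-runtime</artifactId>
        <version>2.3.1</version>
    </dependency>
</dependencies>
```

The JAXB plugin itself is configured in the build section, shown next.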

Maven JAXB Plugin

Next, we need to add the configuration for the JAXB plugin to the Maven POM.
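One plugin whose defaults match the behavior described (XML Schemas picked up from src/main/resources) is org.jvnet.jaxb2.maven2:maven-jaxb2-plugin. A sketch of the configuration, with the version as an assumption:

```xml
<build>
    <plugins>
        <plugin>
            <groupId>org.jvnet.jaxb2.maven2</groupId>
            <artifactId>maven-jaxb2-plugin</artifactId>
            <version>0.13.1</version>
            <executions>
                <execution>
                    <goals>
                        <!-- Runs the XJC compiler against the schemas -->
                        <goal>generate</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>
```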

Complete Maven POM

Here is the final Maven POM.

Building Our JAXB Maven Project

Running the Maven Package Goal

IntelliJ makes working with Maven very easy. On the right side of the IDE, you’ll see a button for ‘Maven Projects’, clicking that will open the ‘Maven Projects’ dialog. To build our project, under Lifecycle, double click on the ‘package’ goal.

Maven Projects in IntelliJ

You should see Maven run and build successfully.

Maven Build Artifacts

Maven will build into the ‘target’ directory. You can see the generated Java classes in IntelliJ.

JAXB Maven Artifacts

JAXB Generated Classes

We expect two classes to be generated from the XML Schema we defined.



Notice how this class actually extends the Product class.


This is just a very simple example of generating classes from an XML Schema using JAXB and Maven. In this post, I’ve shown you step by step how to use an XML Schema and JAXB to generate Java classes. The generated classes are bundled into a JAR file, which is portable and can be shared with other Java projects.

As a SpringSource consultant, I was at a large company which built a number of RESTful APIs. The team building the APIs did not use JAXB to generate classes from an XML Schema. Rather, they built the JAXB classes by hand and could not offer their clients an XML Schema. As a consumer of their APIs, it was a time-consuming process to configure and troubleshoot my data types.

I’ve also done consulting in organizations where the team did use XML Schemas for building their APIs. Having the XML Schema made writing the client code a snap.



You can find the documentation for the JAXB project here.

JAXB Maven Plugin

I used the default settings for the JAXB Maven plugin. Additional options are available. Documentation for the JAXB Maven plugin is here.

Project Source Code

The source code used in this tutorial is available on Github here.



I was recently asked on my Facebook page, “How do I become a Java Web Developer?” There is really no simple answer to this question. There are many facets to becoming a Java web developer. I’ve encountered Java developers who were good front end developers, or good backend developers. By ‘front end’, I mean more of the browser-side technologies: HTML, CSS, Javascript, and then Java templating technologies such as Thymeleaf, Sitemesh, or just good old JSPs. Backend developers are going to have stronger skills with Java, databases (SQL and NoSQL), messaging (JMS/AQMP), and web services (SOAP / REST).

You also have what is known as a “full stack” Java developer. This is my personal skill set. A full stack developer is equally skilled as a front end developer and as a back end developer. This is probably the most difficult track to follow, just because of the diversity of technologies involved. One day you might be debugging something in jQuery, and the next you’re performance tuning an Oracle database query. It takes time and experience to become a full stack Java developer.

Where to Start?

For aspiring developers, the technology landscape can be overwhelming. The technology landscape is always evolving too. Do you risk learning something that will soon be obsolete?

Client Side Technologies

My advice to new developers is to start with the basics. HTML, CSS, and Javascript. These technologies are core to web development. These technologies are also generic in the sense that it does not matter if you’re a Java web developer, or a Ruby web developer.


HTML – Hypertext Markup Language. This is what makes a web page. You need to have a solid understanding of HTML. Back in the beginning of the World Wide Web, HTML was traditionally a file that was served by a web server to the browser. This worked great for static content: stuff that never changed. But that is becoming rare. People want dynamic content, so the HTML is no longer a static file; it is generated on demand. As a Java web developer, you’re going to be writing code that generates the HTML document for the web browser, so you will need a solid understanding of the structure of an HTML document.


CSS – Cascading Style Sheets. This is what styles a page. It controls the fonts, the colors, the layout. While HTML defines the content of a web page, CSS defines how it looks when presented in a browser. For example, you may use one set of CSS rules for a desktop web application, and a different set of CSS rules for a mobile application. Same HTML, but two completely different looks when rendered by the browser.


Javascript – Do stuff on the web page. Do not confuse Javascript with Java. While there are some syntax similarities, these are two completely different programming languages. Javascript is what really drives Web 2.0 applications. Through the use of Javascript, you can dynamically change the HTML/CSS based on user actions, giving the web page a more application like feel for the user.


Hypertext Transfer Protocol – The communication between the client and the web server. I see too many web developers who do not understand HTTP. This is absolutely critical for you to understand. Especially as you get into working with AJAX. You need to know the difference between a POST and a GET. You should have memorized the meanings of HTTP status codes 200, 301, and 404 – and more. As a Java web developer you will work with HTTP everyday.

Server Side Technologies


Java – The question is how to become a Java web developer, so of course you are going to need to know the Java programming language. In addition to Java itself, you should be familiar with the Java Servlet API. There are a number of Java web frameworks which obscure the usage of the Java Servlet API, but when things go wrong, you’re going to need to know what’s happening under the covers.


JPA – Java Persistence API – Using the database. JPA is the standard for working with traditional relational databases in Java. Hibernate is the most popular JPA implementation in use today. As a Java web developer, you’re going to be working with databases. You’ll be getting content from the database to display on a web page, or receiving content from the user to store in the database. Java web developers need to know how to use JPA.

Java Application Servers

Java Application Servers – The runtime container for Java web applications. Tomcat is, by far, the most popular Java application server. There is a Java standard for a Web Application Archive file – aka WAR file. These are deployed to Application servers such as Tomcat to provide the runtime environment for your web application. A decade ago, the trend was to use a more complex coupling between your application and the application server. However, the current trend is in favor of a looser coupling between your application and the application server.

Java Frameworks

Notice so far, I have not mentioned anything about the plethora of Java frameworks available for you to use? So far, I’ve described the different technologies you will use as a Java web developer. The client side technologies are completely independent of the server side technologies. Firefox doesn’t care if the server is running Java, Python or .NET. New developers often seem to forget this.

It is possible to do Java web development without using one of the Java frameworks. If you do, you will be writing a lot of code to handle things which a framework would take care of for you. That is why, when developing Java web applications, you typically will want to use one of the frameworks.

Spring Framework

The Spring Framework is an outstanding collection of tools for building large scale web applications. Exact metrics are hard to determine, but I’ve seen some estimates which say Spring is used in over 60% of Java based web applications. Which really isn’t too surprising. You have the IoC container and dependency injection from Spring Core. Spring MVC, a mature and flexible MVC based web framework. Spring Security, best in class tool for securing your website. Spring Data to help with persistence. Spring has other projects which will help you in building large scale applications.

There really is no alternative to Spring when it comes to a holistic framework. There are competing technologies against the various Spring Projects. But there is no single solution that has the depth and breadth of the Spring Framework family of projects. In my Introduction to Spring online tutorial, I give you a good overview of the major Spring Framework projects and how they are used to build enterprise class applications.


Grails

Grails is a rapid application development framework built on top of Spring. You get everything Spring, plus the productivity benefits of Groovy. I like to describe Grails as Spring with a Groovy wrapper. An oversimplification for sure, but important to remember: Spring is still under the covers.

Grails is seeing more and more use in the enterprise. One of the strengths of Grails is its outstanding community support.

Spring Roo

Spring Roo is a pure Java framework, which seems to try to do what Grails does, without Groovy. I’m not a fan of Spring Roo, and last time I checked neither was the marketplace. Spring Roo has not been widely adopted.


Play

Play is a Scala-based framework. I have not had a chance to try out Play yet, but I hear a lot of good things about it in the marketplace. I feel Play is an interesting alternative, but it’s just not widely used in the enterprise; not yet, at least. Play is getting some encouraging traction.

JBoss Seam

JBoss Seam is probably the closest thing to an alternative to the Spring Framework. JBoss Seam follows the JEE standard. JBoss Seam is a good alternative, with good support, and adoption in the enterprise. JBoss Seam is often criticised as being slower than Spring in terms of development and performance. Some are much more critical of JBoss Seam.


Becoming a Java web developer is not something that happens overnight. There is no book called “Teach Yourself Java Web Development in 21 Days.” There are no shortcuts to becoming a Java web developer. There are a lot of different technologies you need to learn and master, and each of these takes time to learn.

Being a Java web developer can be a very rewarding career. You can get started just focusing on the front end, or just the backend technologies. Larger Java development shops will allow you to specialize in one area of the technologies over another.

If I were starting out as a Java web developer today, I would probably focus first on the client side technologies. The client side technologies are agnostic to the server side technologies, so as you’re starting out, you will have greater employment options. Compensation on the client side is generally lower than on the server side, but it’s a good place to start. You can quickly gain the skills to be employable, then shift focus, broaden your skill set, and later seek better employment opportunities.


In this series on unit testing with JUnit, we learned several unit testing aspects and how to implement them with JUnit. We can summarize the series till now as:

  • Part 1: Creating a basic unit test both using Maven and IntelliJ
  • Part 2: Using assertions and annotations
  • Part 3: Using assertThat with Hamcrest matchers

In this post, we will learn about parameterized tests and theories.

JUnit Parameterized Tests

While testing, it’s common to execute a series of tests which differ only by input values and expected results. As an example, if you are testing a method that validates email IDs, you should test it with different email ID formats to check whether the validations are done correctly. But testing each email ID format separately results in duplicate, boilerplate code. It is better to abstract the email ID test into a single test method and provide it with a list of all input values and expected results. JUnit supports this functionality through parameterized tests.

To see how parameterized tests work, we’ll start with a class with two methods which we will put under test.
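Here is a sketch of such a class; the email domain and the validation regex are assumptions, as the original values are not shown above:

```java
public class EmailIdUtility {

    // Generates an email ID in the format firstName.lastName@domain
    // (the domain used here is an assumption)
    public static String createEmailID(String firstName, String lastName) {
        return firstName + "." + lastName + "@testdomain.com";
    }

    // Validates the format of an email ID using a regular expression
    public static boolean isValid(String emailId) {
        String regex = "^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\\.[A-Za-z]{2,}$";
        return emailId != null && emailId.matches(regex);
    }
}
```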


The EmailIdUtility class above has two utility methods. The createEmailID() method accepts two String parameters and generates an email ID in a specific format. The format is simple: if you pass mark and doe as parameters to this method, it returns [email protected]. The second method, isValid(), accepts an email ID as a String, uses a regular expression to validate its format, and returns the validation result.

We will first test the isValid() method with a parameterized test. JUnit runs a parameterized test with a special runner, Parameterized, which we declare with the @RunWith annotation. In a parameterized test class, we declare instance variables corresponding to the inputs and the output of the test. As the isValid() method under test takes a single String parameter and returns a boolean, we declare two corresponding variables. For a parameterized test, we also need to provide a constructor which initializes the variables.


We also need to provide a public static method annotated with @Parameters annotation. This method will be used by the test runner to feed data into our tests.

The @Parameters annotated method above returns a collection of test data elements (which in turn are stored in an array). Test data elements are the different variations of the data, including the input as well as the expected output needed by the test. The number of test data elements in each array must match the number of parameters we declared in the constructor.

When the test runs, the runner instantiates the test class once for each set of parameters, passing the parameters to the constructor that we wrote. The constructor then initializes the instance variables we declared.

Notice the optional name attribute we wrote in the @Parameters annotation to identify the parameters being used in the test run. This attribute contains placeholders that are replaced at run time.

  • {index}: The current parameter index, starting from 0.
  • {0}, {1}, …: The first, second, and so on, parameter value. As an example, for the parameter set {“[email protected]”, true}, {0} = [email protected] and {1} = true.

Finally, we write the test method annotated with @Test. The complete code of the parameterized test is this.
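The complete listing is not shown above; a sketch of what the parameterized test class looks like, assuming the EmailIdUtility class named earlier and illustrative sample data:

```java
import static org.junit.Assert.assertEquals;

import java.util.Arrays;
import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class EmailIdValidatorTest {

    private String emailId;
    private boolean expected;

    // The runner calls this constructor once for each set of parameters
    public EmailIdValidatorTest(String emailId, boolean expected) {
        this.emailId = emailId;
        this.expected = expected;
    }

    // Feeds data into the test; the name attribute labels each run
    @Parameters(name = "Run {index}: emailId={0}, expected={1}")
    public static Collection<Object[]> testData() {
        return Arrays.asList(new Object[][]{
                {"mark.doe@testdomain.com", true},
                {"mark doe@testdomain.com", false},
                {"markdoe", false}
        });
    }

    @Test
    public void testIsValid() {
        assertEquals(expected, EmailIdUtility.isValid(emailId));
    }
}
```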


The output on running the parameterized test in IntelliJ is this.

Parameterized Test Output

JUnit Theories

In a parameterized test, the test data elements are statically defined, and you as the programmer are responsible for figuring out what data is needed for a particular range of tests. At times, you will likely want to make tests more generalized. Say, instead of testing for specific values, you might need to test for some wider range of acceptable input values. For this scenario, JUnit provides theories.

A theory is a special test method that a special JUnit runner (Theories) executes. To use the runner, annotate your test class with the @RunWith(Theories.class) annotation. The Theories runner executes a theory against several data inputs called data points. A theory is annotated with @Theory, but unlike normal @Test methods, a @Theory method has parameters. In order to fill these parameters with values, the Theories runner uses values of the data points having the same type.

There are two types of data points. You use them through the following two annotations:

  • @DataPoint: Annotates a field or method as a single data point. The value of the field, or the value the method returns, will be used as a potential parameter for theories having the same type.
  • @DataPoints: Annotates an array or iterable-type field or method as a full array of data points. The values in the array or iterable will be used as potential parameters for theories having the same type. Use this annotation to avoid single data point fields cluttering your code.

Note: All data point fields and methods must be declared as public and static.
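A sketch of how the two annotations are used (the field and method names, and the values, are assumptions consistent with the walkthrough below):

```java
import org.junit.experimental.theories.DataPoint;
import org.junit.experimental.theories.DataPoints;

public class TheoryDataPoints {

    // A single data point of type String
    @DataPoint
    public static String name = "mary";

    // A full array of String data points
    @DataPoints
    public static String[] names() {
        return new String[]{"first", "second"};
    }
}
```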

In the code example above, we annotated a String field with the @DataPoint annotation and a names() method that returns a String[] with the @DataPoints annotation.

Creating a JUnit Theory

Recall the createEmailID() method that we wrote earlier in this post: “The createEmailID() method accepts two String parameters and generates an email ID in a specific format.” A test theory that we can establish is: “Provided stringA and stringB passed to createEmailID() are non-null, it will return an email ID containing both stringA and stringB”. This is how we can represent the theory.
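A sketch of the theory class, assuming the EmailIdUtility class and the data point values described earlier:

```java
import static org.hamcrest.CoreMatchers.allOf;
import static org.hamcrest.CoreMatchers.containsString;
import static org.junit.Assert.assertThat;

import org.junit.experimental.theories.DataPoint;
import org.junit.experimental.theories.DataPoints;
import org.junit.experimental.theories.Theories;
import org.junit.experimental.theories.Theory;
import org.junit.runner.RunWith;

@RunWith(Theories.class)
public class EmailIdTheoryTest {

    @DataPoint
    public static String name = "mary";

    @DataPoints
    public static String[] names() {
        return new String[]{"first", "second"};
    }

    // Called with every combination of the String data points
    @Theory
    public void testCreateEmailID(String stringA, String stringB) {
        String emailId = EmailIdUtility.createEmailID(stringA, stringB);
        assertThat(emailId, allOf(containsString(stringA), containsString(stringB)));
    }
}
```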

The testCreateEmailID() theory we wrote accepts two String parameters. At run time, the Theories runner will call testCreateEmailID() passing every possible combination of the data points we defined of type String. For example (mary,mary), (mary,first), (mary,second), and so on.


It’s very common for theories NOT to be valid for certain cases. You can exclude these from a test using assumptions, which basically means “don’t run this test if these conditions don’t apply“. In our theory, an assumption is that the parameters passed to the createEmailID() method under test are non-null values.

If an assumption fails, the data point is silently ignored. Programmatically, we add assumptions to theories through one of the many methods of the Assume class.
Here is our modified theory with assumptions.
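A sketch of the modified theory method (it assumes static imports of assumeNotNull and assumeThat from org.junit.Assume, and of the Hamcrest matchers):

```java
@Theory
public void testCreateEmailID(String stringA, String stringB) {
    // Silently ignore any data point combination containing null
    assumeNotNull(stringA, stringB);
    assumeThat(stringA, notNullValue());
    assumeThat(stringB, notNullValue());

    String emailId = EmailIdUtility.createEmailID(stringA, stringB);
    assertThat(emailId, allOf(containsString(stringA), containsString(stringB)));
}
```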

In the code above, we used assumeNotNull because we assume that the parameters passed to createEmailID() are non-null values. Therefore, even if a null data point exists and the test runner passes it to our theory, the assumption will fail and the data point will be ignored.
The two assumeThat calls we wrote together perform exactly the same function as assumeNotNull. I have included them only to demonstrate the usage of assumeThat, which, as you can see, is very similar to the assertThat we covered in the earlier post.

The following is the complete code using a theory to test the createEmailID() method.


In the test class above, I have included null as a data point in the return statement of Line 23 to exercise our assumptions, and a couple of System.out.println() statements to trace how parameters are passed to theories at run time.

Here is the output of the test in IntelliJ:
JUnit Theories Output

Also, here is the output I got while running the test with Maven for your review:

In the output above, notice that whenever a null value is being passed to the theory, the remaining part of the theory after assumeNotNull does not execute.


Parameterized tests in JUnit help remove boilerplate test code, which saves time while writing tests. This is particularly useful during enterprise application development with the Spring Framework. However, a common complaint is that when a parameterized test fails, it’s very hard to see which parameters caused the failure. By properly naming the test runs through the name attribute of @Parameters, and with the strong unit testing support that modern IDEs provide, such complaints quickly lose ground. Although theories are less commonly used, they are powerful instruments in any programmer’s testing toolkit. Theories not only make your tests more expressive, but also make your test data more independent of the code you’re testing. This will improve the quality of your code, since you’re more likely to hit edge cases which you may have previously overlooked.


In the first part of the series on unit testing with JUnit, we looked at creating unit tests both using Maven and IntelliJ. In this post, we will look at some core unit testing concepts and apply those using JUnit constructs. We will learn about assertions, JUnit 4 annotations, and test suites.

JUnit Assertions

Assertions, or simply asserts, provide programmers a way to validate the intended behavior of code. For example, through an assertion you can check whether a method returns the expected value for a given set of parameters, or whether a method correctly sets up some instance or class variables. When you run the test, the assertion executes. If the method under test behaves exactly as you specified in the assertion, your test passes. Otherwise, an AssertionError is thrown.

JUnit provides support for assertions through a set of assert methods in the org.junit.Assert class. Before we start using them, let’s have a quick overview of the Arrange, Act, Assert (AAA) pattern. This pattern is the recommended way to write unit test methods where you divide a method into three sections, each with a specific purpose:

  • Arrange: Initialize objects and set up input data for the method under test.
  • Act: Invoke the method under test passing the arranged parameters.
  • Assert: Verify that the method under test behaves as expected. This is where you write an assertion method.

Here is a Java class that we will write some JUnit unit tests for.
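A sketch of the class consistent with the description below (the validation regex and the key type of the Map are assumptions):

```java
import java.util.HashMap;
import java.util.Map;

public class EmployeeEmail {

    private Map<String, String> emailIds = new HashMap<>();

    // Adds an email ID to the map only if it is in a valid format
    public void addEmployeeEmailId(String key, String emailId) {
        if (isValidEmailId(emailId)) {
            emailIds.put(key, emailId);
        }
    }

    // Validates the email ID format using a regular expression
    public boolean isValidEmailId(String emailId) {
        String regex = "^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\\.[A-Za-z]{2,}$";
        return emailId != null && emailId.matches(regex);
    }

    // Returns the email ID stored against the given key, or null
    public String getEmployeeEmailId(String key) {
        return emailIds.get(key);
    }

    public Map<String, String> getEmailIds() {
        return emailIds;
    }
}
```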


In the EmployeeEmail class above, we wrote an addEmployeeEmailId() method that first checks whether an email ID is in valid format, and then adds it to a Map implementation. The isValidEmailId() method performs the email validation using a regular expression. We also wrote a getEmployeeEmailId() method to return an email ID from the Map, given a key.

To test the EmployeeEmail class, we will create a test class, EmployeeEmailTest, and add test methods to it. Here, remember that the number of test methods to add and what they should do depend on the behavior of the EmployeeEmail class under test, not on the number of methods in it.

To start with, we will test that the isValidEmailId() method returns true for a valid email ID and false for an invalid one, with two test methods.
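A sketch of the two test methods, with the sample email values as assumptions:

```java
@Test
public void testValidEmailId() {
    // Arrange
    EmployeeEmail employeeEmail = new EmployeeEmail();
    String emailId = "mark.doe@testdomain.com";

    // Act
    boolean result = employeeEmail.isValidEmailId(emailId);

    // Assert
    assertTrue("Email ID was reported as invalid", result);
}

@Test
public void testInvalidEmailId() {
    // Arrange
    EmployeeEmail employeeEmail = new EmployeeEmail();
    String emailId = "mark.doe"; // missing the domain part

    // Act
    boolean result = employeeEmail.isValidEmailId(emailId);

    // Assert
    assertFalse("Invalid email ID was reported as valid", result);
}
```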

In both the test methods above, we separated the test code into the AAA sections. In the first test method, we used the assertTrue() method as we expect isValidEmailId() to return true for the email ID, [email protected]. We also want to test that isValidEmailId() returns false for an invalid email ID. For that, we wrote the second test method and used assertFalse().

A couple of things to observe here. In both the assertion methods, we passed a String parameter as the identifying message for an assertion error. It’s common for programmers to set this message to describe the condition that should be met. Instead, to be meaningful, this message should describe what’s wrong if the condition isn’t met.

Also, you might be thinking “Why two separate test methods instead of a single method with both the assert methods?” Having multiple assert methods in a single test method will not cause any errors in tests, and you will frequently encounter such test methods. But a good rule to follow is: “Proper unit tests should fail for exactly one reason”, which sounds similar to the Single Responsibility Principle.  In a failed test method having multiple assertions, more effort is required to determine which assertion failed. Also, it is not guaranteed that all of the assertions took place. For an unchecked exception, the assertions after the exception will not execute and JUnit proceeds to the next test method. Therefore, it is generally a best practice to use one assertion per test method.

With the basics in place, let’s write the complete test class and use the following assertions:

  • assertEquals() and assertNotEquals(): Tests whether two primitives/objects are equal or not. In addition to the string message passed as the first parameter, these methods accept the expected value as the second parameter and the actual value as the third parameter: an important ordering that is commonly confused.
  • assertNull() and assertNotNull(): Tests whether an object is null or not null.
  • assertSame() and assertNotSame(): Tests whether two object references point to the same object or don’t.
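The full test class is longer than what can be shown here (the line numbers in the walkthrough below refer to the original listing); a condensed sketch of the assertions it performs, with the sample data as assumptions:

```java
EmployeeEmail employeeEmail = new EmployeeEmail();
employeeEmail.addEmployeeEmailId("mark", "mark.doe@testdomain.com");
employeeEmail.addEmployeeEmailId("jane", "jane.doe@testdomain.com");
assertEquals("Map size does not match the elements added", 2, employeeEmail.getEmailIds().size());

// A duplicate key overwrites the existing entry instead of adding one
employeeEmail.addEmployeeEmailId("mark", "mark.smith@testdomain.com");
assertNotEquals("Map size changed for a duplicate key", 3, employeeEmail.getEmailIds().size());

assertNotNull("Email ID missing for a key present in the map", employeeEmail.getEmployeeEmailId("mark"));
assertNull("Email ID found for a key not in the map", employeeEmail.getEmployeeEmailId("tom"));

Map<String, String> first = employeeEmail.getEmailIds();
Map<String, String> second = first;
assertSame("References point to different objects", first, second);
assertNotSame("References point to the same object", first, new EmployeeEmail().getEmailIds());
```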


In the EmployeeEmailTest class above:

  • Line 38: We used assertEquals() to test the collection size after adding two elements to it through addEmployeeEmailId().
  • Line 50: We used assertNotEquals() to test that the collection does not allow duplicate keys added through addEmployeeEmailId().
  • Line 62: We used assertNotNull() to test that getEmployeeEmailId() does not return null for an email ID present in the collection.
  • Line 74: We used assertNull() to test that getEmployeeEmailId() returns null for an email ID not present in the collection.
  • Line 89: We used assertSame() to test that two collection references point to the same collection object after assigning one to the other through the = operator.
  • Line 103: We used assertNotSame() to test that two collection references are not pointing to the same object.

When we run the test in IntelliJ, the output is:
Output in IntelliJ

As you can see from the output, all the tests passed as expected.

Note: The order in which JUnit executes test methods is not guaranteed, so don’t count on it.

If you go back and look into the test class, you will notice several lines of code in the Arrange part being repeated across the test methods. Ideally, they should be in a single place and get executed before each test. We can achieve this through the use of JUnit annotations, which we will look into next.

JUnit Annotations

You can use JUnit annotations, introduced in JUnit 4, to mark and configure test methods. We have already used the @Test annotation to mark public void methods as test methods. When JUnit encounters a method annotated with @Test, it constructs a new instance of the class, and then invokes the method. We can optionally provide a timeout parameter to @Test to specify a time measured in milliseconds. If the test method takes longer to execute than the specified time, the test fails. This is particularly useful when you are testing performance. This code marks a method as a test method and sets the timeout to 100 milliseconds.
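A sketch of such a test method (the method body is illustrative):

```java
@Test(timeout = 100)
public void testEmailIdValidationPerformance() {
    // Fails if validation takes longer than 100 milliseconds
    assertTrue(new EmployeeEmail().isValidEmailId("mark.doe@testdomain.com"));
}
```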

Another important use of the @Test annotation is to test for exceptions. Suppose that under a certain condition, a piece of code throws an exception. We can use the @Test annotation to test whether the code indeed throws the exception when the condition is met. This code checks whether the getEmployeeEmailId() method throws an exception of type IllegalArgumentException when a non-String value is passed to it.
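A sketch of the test; it assumes a variant of getEmployeeEmailId() that accepts an Object key and throws IllegalArgumentException for non-String values:

```java
// Assumes getEmployeeEmailId(Object) rejects non-String keys
@Test(expected = IllegalArgumentException.class)
public void testGetEmployeeEmailIdForException() {
    EmployeeEmail employeeEmail = new EmployeeEmail();
    employeeEmail.getEmployeeEmailId(Integer.valueOf(1));
}
```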

In addition to the @Test annotation, the other annotations are:

  • @Before: Causes a method to run before each test method of the class. You typically use this annotation to allocate resources, set up common initialization code, and load configuration files that the test methods require.
  • @After: Causes a method to run after each test method of the class. This method is guaranteed to run even if a @Before or @Test method throws an exception. Use this annotation to clean up initialization code and release any resource allocations done in @Before.
  • @BeforeClass: Causes a static method to run once and only once before any of the test methods in the class. This is useful in situations where you need to set up computationally expensive resources, say a server connection, a database, or even managing an embedded server for testing. As an example, instead of starting a server for each @Test method, start it once in a @BeforeClass method for all the tests in the class.
  • @AfterClass: Causes a static method to run once after all the test methods in the class completes. This method is guaranteed to run even if a @BeforeClass or @Test method throws an exception. Use this method to free one time resource initialization done in @BeforeClass.
  • @Ignore: Causes a test method to be ignored by JUnit. This can be useful when you have a complicated piece of code that is in transition, and you might want to temporarily disable some tests till that code is ready. Test runners of most IDEs report @Ignore tests as reminders during each test runs. This is essentially to mark tests as “there are things to be done”, which otherwise you might forget if you comment out the test method or remove the @Test annotation.

Here is an example of using all the JUnit annotations.
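A compact sketch combining the annotations above (the println calls merely make the lifecycle visible; note that @Before and @After do not run for the ignored test):

```java
import org.junit.*;

public class LifecycleTest {

    @BeforeClass
    public static void setUpClass() {
        System.out.println("@BeforeClass: runs once before all tests");
    }

    @Before
    public void setUp() {
        System.out.println("@Before: runs before each test");
    }

    @Test
    public void testOne() {
        System.out.println("@Test: testOne");
    }

    @Ignore("Temporarily disabled while this code is in transition")
    @Test
    public void testTwo() {
        System.out.println("@Test: testTwo (never runs)");
    }

    @After
    public void tearDown() {
        System.out.println("@After: runs after each test");
    }

    @AfterClass
    public static void tearDownClass() {
        System.out.println("@AfterClass: runs once after all tests");
    }
}
```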


The output on running the test in IntelliJ is:

JUnit Output in IntelliJ

JUnit Test Suites

If you have a large number of test classes for different functional areas or modules, you can structure them into test suites. JUnit test suites are containers of test classes and give you finer control over the order in which your test classes are executed. JUnit provides org.junit.runners.Suite, a runner class that runs a group of test classes.
The code to create a test suite is:
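A self-contained sketch of a suite follows. In a real project the suite would reference top-level test classes; the nested classes here only keep the example compilable on its own:

```java
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Suite;

// The Suite runner executes the listed classes in the given order
@RunWith(Suite.class)
@Suite.SuiteClasses({
        AppTestSuite.StringTests.class,
        AppTestSuite.MathTests.class
})
public class AppTestSuite {
    // The suite class itself stays empty; it only holds the annotations above.

    public static class StringTests {
        @Test
        public void testConcat() {
            org.junit.Assert.assertEquals("ab", "a" + "b");
        }
    }

    public static class MathTests {
        @Test
        public void testSum() {
            org.junit.Assert.assertEquals(4, 2 + 2);
        }
    }
}
```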


In the test suite class above, we used two annotations: @RunWith and @SuiteClasses. The @RunWith annotation instructs JUnit to use the Suite runner class, and @SuiteClasses specifies the classes, and the order in which the Suite runner should run them. The test suite class itself is empty and acts only as a placeholder for the annotations.

The output on executing the test suite in IntelliJ is:

JUnit test suite Output in IntelliJ


JUnit assertions not only make your code stable but also force you to think through different scenarios, which ultimately helps you become a better programmer. By understanding the purpose of the different assertions and using them properly, your testing becomes effective. But how many asserts should a test method have? It comes down to the complexity of the method under test. For a method with multiple conditional statements, you should assert the outcome of each condition, while for a method performing a simple string manipulation, a single assertion should do. When developing unit tests with JUnit, it is considered a best practice for each test method to test one specific condition, which often leads to one assert per test method. It’s not uncommon for a method under test to be associated with multiple test methods.
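The one-condition-per-test guideline can be sketched as follows (the discount() method and its threshold are hypothetical, invented purely for illustration):

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class DiscountTest {

    // Hypothetical method under test with two branches
    private double discount(double amount) {
        return amount > 100 ? amount * 0.9 : amount;
    }

    // One test method per condition, one assertion each
    @Test
    public void testDiscountAppliedAboveThreshold() {
        assertEquals(180.0, discount(200.0), 0.001);
    }

    @Test
    public void testNoDiscountAtOrBelowThreshold() {
        assertEquals(100.0, discount(100.0), 0.001);
    }
}
```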
One assertion I have not covered in this post is assertThat(). It’s an important JUnit assertion that I will cover in my next post on JUnit.

Unit Testing with the Spring Framework

While doing Enterprise Application Development with the Spring Framework and unit testing your code, you will use lots of assertions. In addition to asserting regular method behavior, you will assert whether Spring beans are injected as expected by the Spring application context, whether dependencies between Spring beans are correctly maintained, and so on. While creating those tests, ensure that they run fast, especially when testing is integrated into the build cycle. You will keep building your application as you code, and you obviously won’t want your build to wait for a long-running test to complete. If you do have such long-running tests, put them in a separate test suite.


Unit testing is the first level of software testing, in which you write test code that executes specific functionality in the code to be tested. In most cases, you as a programmer are responsible for delivering unit tested code. The objective is to check whether a unit of the software, for example a public method of a class under test, behaves as expected and/or returns the expected data. Unit tests are not run on the production system but on isolated units. If the unit under test has external dependencies, such as an external data source or a web service, the dependencies are replaced with a test implementation or a mock object created using a test framework. Unit testing is not the only type of testing, and it alone cannot handle all testing aspects. Other types, such as integration and functional testing, have their own roles in testing software.

In this series of posts we will focus on unit testing with JUnit – one of the most popular frameworks for testing Java code. In this post, we will start by creating and executing a basic unit test, and then in further posts move on to specific unit testing aspects.

The core JUnit framework comes in a single JAR file, which you can download, add to your classpath, and then create and run tests. But in this post, we will learn how to perform unit testing the real programmer’s way: we will start with Maven, and then move to IntelliJ.

Unit Testing with Maven

You’ve likely heard Maven referred to as a build tool. But in addition to its ability to build deployable artifacts from source code, Maven provides a number of features for managing the software development life cycle. Unit testing is one such feature, incorporated as the test phase of the Maven build lifecycle.

Without going into Maven in depth, let’s start our first JUnit test with Maven.

    1. Download and install Maven if you haven’t done so yet.
    2. Open up a command prompt (Windows) or a terminal (*nix or Mac), browse to a working directory to set up the project, and execute the following command.
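A typical invocation looks like the following; the groupId and artifactId values are placeholders, so substitute your own:

```shell
mvn archetype:generate -DgroupId=com.example.app -DartifactId=my-app \
    -DarchetypeArtifactId=maven-archetype-quickstart -DinteractiveMode=false
```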

The preceding archetype:generate command uses the maven-archetype-quickstart template to create a basic Maven project containing a pom.xml file, an App.java class, and an AppTest.java test class in the following directory structure.
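Assuming the placeholder groupId com.example.app, the generated layout looks roughly like this:

```
my-app
├── pom.xml
└── src
    ├── main
    │   └── java
    │       └── com
    │           └── example
    │               └── app
    │                   └── App.java
    └── test
        └── java
            └── com
                └── example
                    └── app
                        └── AppTest.java
```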

In the directory structure above, the pom.xml file, also known as the Maven configuration file, is the heart of a Maven project. It is where you define your project configuration – specifically the dependencies of your project. For example, as our project depends on JUnit, we need to declare it as a dependency in the pom.xml file. Although a JUnit dependency will already be present by default, we will update it to point to the latest JUnit version. This is how our final pom.xml file will look.
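A sketch of the updated pom.xml (the group and artifact IDs are placeholders, and 4.12 was the last JUnit 4 release, so adjust the version as needed):

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
                             http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.example.app</groupId>
    <artifactId>my-app</artifactId>
    <packaging>jar</packaging>
    <version>1.0-SNAPSHOT</version>

    <dependencies>
        <!-- Updated from the default to the latest JUnit 4 version -->
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.12</version>
            <scope>test</scope>
        </dependency>
    </dependencies>
</project>
```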


Now that we have set up a basic Java class, a test class, and the pom.xml configuration, we can run a unit test.

    1. Execute the mvn test command from the working directory.

This command will run the default AppTest class that Maven generated for us with the following output.
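Abridged, the output will look roughly like the following (the exact lines vary with the Maven version, and the package name reflects the placeholder groupId used earlier):

```
-------------------------------------------------------
 T E S T S
-------------------------------------------------------
Running com.example.app.AppTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0

Results :

Tests run: 1, Failures: 0, Errors: 0, Skipped: 0

[INFO] BUILD SUCCESS
```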

We have executed a JUnit test using Maven. The test passed, but it hardly provides any value yet. We will next move on to using the IntelliJ IDE to write and execute a more comprehensive test.

Unit Testing in IntelliJ

Using IntelliJ, you can easily create, run, and debug unit tests. Among several other unit testing frameworks, IntelliJ provides built-in support for JUnit. In IntelliJ, you can create a JUnit test class with a click and navigate quickly between test classes and their corresponding target classes to debug test errors. A very useful unit testing feature in IntelliJ is code coverage. With this feature, you can view the exact percentage of methods and even lines of code covered by unit tests in your project.

Let’s import our existing Maven project to IntelliJ and do some unit testing.

Import Maven Project to IntelliJ

If you don’t have IntelliJ installed, download and install the free Community Edition or the 30-day trial of Ultimate Edition from the official website. Once you are done, perform the following steps:

    1. Open IntelliJ.
    2. On the Welcome to IntelliJ IDEA window, click Import Project.

Click Import project

    3. In the Select File or Directory to Import dialog box, browse to the working directory of the Maven project and select the pom.xml file.

Select pom.xml

    4. Click the OK button.
    5. In the Import Project from Maven dialog box that appears, select the Import Maven projects automatically checkbox to synchronize changes between the Maven and IntelliJ projects each time the pom.xml file changes.

Select the Import Maven projects automatically checkbox

    6. Click the Next button through a few more dialog boxes, accepting the default settings, and finally click Finish. The Project window of IntelliJ displays the project structure.

The Project Structure in the IntelliJ Project Window

  7. Double click App in the Project window to open it in the code editor.
  8. Replace the default code of the App class with this code.
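A minimal version of that code, matching the description that follows:

```java
public class App {

    // Concatenates the two strings and returns the result in uppercase
    public String concatAndConvertString(String str1, String str2) {
        String concatenated = str1 + str2;
        return concatenated.toUpperCase();
    }
}
```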


In the code above we wrote a concatAndConvertString() method in the App class that accepts two String parameters. The method first concatenates the strings and converts the result to uppercase before returning it.

We will next add a test class to test the concatAndConvertString() method.

Add a Test Class

Let’s go through the steps to add a test class in IntelliJ from scratch.

    1. Delete the default AppTest class from the Project window.
    2. In the Project window, create a directory with the name test under main. We will use the test directory to keep the test code separated from the application code.
    3. Right-click test and select Mark Directory As→Test Sources Root.

select Mark Directory As, and then Test Sources Root.

    4. In the code editor where the App class is open, press Shift+F10 and select Create New Test.

Select Create New Test

    5. In the Create Test dialog box that appears, select the JUnit4 radio button and the check box corresponding to the concatAndConvertString() method that we will test.

Select jUnit4 radio button and the method that we will test

  6. Click the OK button. JUnit creates the AppTest class with a testConcatAndConvertString() method annotated with @Test. This annotation tells JUnit to run the method as a test case. In the test method, we will write the code to test the concatAndConvertString() method of App.
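A sketch of the resulting test class (the App class is repeated here as a nested class only so the example compiles on its own; in the project it lives in its own file):

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class AppTest {

    // Repeated here so the example is self-contained
    static class App {
        public String concatAndConvertString(String str1, String str2) {
            return (str1 + str2).toUpperCase();
        }
    }

    @Test
    public void testConcatAndConvertString() {
        String expectedValue = "HELLOWORLD";
        App app = new App();
        String actualValue = app.concatAndConvertString("Hello", "World");
        // Throws an AssertionError if expected and actual differ
        assertEquals(expectedValue, actualValue);
    }
}
```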


In the test method above, we called the assertEquals() method, which is one of several JUnit assertion methods. This overloaded method checks whether two String objects are equal. If they are not, the method throws an AssertionError. In our example, we called the assertEquals() method by passing the expected string value (HELLOWORLD) as the first parameter and the actual value returned by the concatAndConvertString() method as the second parameter.

Run the Unit Test

To run the test, select Run AppTest from the Run menu of IntelliJ or press Shift+F10. The Run window displays the test result. A green highlight box indicates that the test completed without any failure.

Green highlight box indicating a passed test

To see how test failures are reported, change the value of the expectedValue variable to HelloWorld and press Shift+F10. The Run window displays a red progress bar to indicate the test failure, along with a comparison failure message.

Red Progress Bar Indicating a Failed Test

Revert the expectedValue variable to its original value before you close IntelliJ.


At this point, you might be thinking: why not just use System.out.println() for unit testing? If so, you are thinking wrong. Inserting System.out.println() calls for debugging is undesirable because it requires manually scanning the output every time the program runs to ensure the code is doing what’s expected. Imagine doing this in an enterprise application with hundreds of thousands of lines of code. Unit tests, on the other hand, examine the code’s behavior at runtime from the client’s point of view. This provides better insight into what might happen when the software is released.

Whenever you write code, you should write unit tests too. Writing unit tests will catch programming errors and improve the quality of your code. Many professional developers advocate Test Driven Development (TDD), where you write your unit tests before you write the application code.

Whether you write your unit tests before or after the application code, they become a valuable asset for instrumenting your code. As the code base grows, you can refactor as needed with more confidence that your changes will not have unintended consequences (i.e., where you change one thing and accidentally break something else).

In part 2 of my tutorial series on unit testing with JUnit I’ll take a deeper look at JUnit Assertions, JUnit Annotations, and JUnit Test Suites.

Unit Testing with the Spring Framework

Testing is an integral part of the Enterprise Application Development process with the Spring Framework. The architecture of the Spring Framework lends itself to modular code and easier unit testing. Spring provides testing support through the TestContext Framework, which abstracts the underlying testing framework, such as JUnit or TestNG. You can use it by setting SpringJUnit4ClassRunner.class as the value of the @RunWith annotation. This tells Spring to use the TestContext test runner instead of JUnit’s built-in test runner. I’ve written a more in-depth post about testing Spring applications with JUnit here.