
When developing Spring Boot applications, you need to tell the Spring Framework where to look for Spring components. Using component scan is one method of asking Spring to detect Spring managed components. Spring needs the information to locate and register all the Spring components with the application context when the application starts.

Spring can auto scan, detect, and instantiate components from pre-defined project packages. It auto scans all classes annotated with the stereotype annotations @Component, @Controller, @Service, and @Repository.

In this post, I will discuss how Spring component scanning works.

Sample Application

Let us create a simple Spring Boot application to understand how component scanning works in Spring.

We will start by writing a few components.

DemoBeanA.java

DemoBeanB1.java

DemoBeanB2.java

DemoBeanB3.java

DemoBeanC.java

DemoBeanD.java
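The source for these classes is not inlined in this post. A minimal sketch of what one of them might look like, assuming nothing more than a plain @Component that logs its own creation (the actual classes may differ), is this.

package guru.springframework.blog.componentscan.example.demopackageA;

import org.springframework.stereotype.Component;

@Component
public class DemoBeanA {

    public DemoBeanA() {
        // Simple marker output so we can see when Spring instantiates the bean
        System.out.println("DemoBeanA instantiated by Spring");
    }
}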


The @SpringBootApplication Annotation

Spring needs to know which packages to scan for annotated components in order to add them to the IoC container. In a Spring Boot project, we typically annotate the main application class with @SpringBootApplication. Under the hood, @SpringBootApplication is a composition of the @Configuration, @ComponentScan, and @EnableAutoConfiguration annotations. With this default setting, Spring Boot will auto scan for components in the current package (the package containing the @SpringBootApplication class) and its sub packages.

To know more about these annotations go through my Spring Framework Annotations post.

Note: It is recommended that you locate your main application class in a root package above the component classes of the application.

The code to create the main class and access components is this.

BlogPostsApplication.java
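The original listing is not reproduced here. A minimal sketch of such a main class, assuming it lives in a root package such as guru.springframework.blog and simply checks that a scanned component was registered, is this.

package guru.springframework.blog;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.ApplicationContext;

@SpringBootApplication
public class BlogPostsApplication {

    public static void main(String[] args) {
        // Start Spring Boot; the returned context holds every auto-scanned component
        ApplicationContext ctx = SpringApplication.run(BlogPostsApplication.class, args);

        // Look up one of the scanned components by its default bean name
        System.out.println("Is DemoBeanA registered? " + ctx.containsBean("demoBeanA"));
    }
}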

The output of running the main class is this.
Output of BlogPostsApplication.java class

As you can see, all the classes in the sub packages of the main class BlogPostsApplication are auto scanned by Spring.

@ComponentScan – Identifying Base Packages

The @ComponentScan annotation is used with the @Configuration annotation to tell Spring the packages to scan for annotated components. @ComponentScan is also used to specify base packages and base package classes using the basePackageClasses or basePackages attributes of @ComponentScan.

The basePackageClasses attribute is a type-safe alternative to basePackages. When you specify basePackageClasses, Spring will scan the package (and subpackages) of the classes you specify. 

A Java class annotated with @ComponentScan with the basePackageClasses attribute is this.

BlogPostsApplicationWithComponentScan.java
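The listing itself is not included here. A hedged sketch consistent with the description that follows, with the second and third base package names and the package of DemoBeanB1 assumed, is this.

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.ComponentScan;

import guru.springframework.blog.componentscan.example.demopackageB.DemoBeanB1;

@SpringBootApplication
@ComponentScan(basePackages = {"guru.springframework.blog.componentscan.example.demopackageA",
        "guru.springframework.blog.componentscan.example.demopackageB",
        "guru.springframework.blog.componentscan.example.demopackageD"},
        basePackageClasses = DemoBeanB1.class)
public class BlogPostsApplicationWithComponentScan {

    public static void main(String[] args) {
        SpringApplication.run(BlogPostsApplicationWithComponentScan.class, args);
    }
}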

The output on running the main class is this.
Output of BlogPostsApplicationWithComponentScan.java Class

The @ComponentScan annotation uses the basePackages attribute to specify three packages (and subpackages) that will be scanned by Spring. The annotation also uses the basePackageClasses attribute to declare the DemoBeanB1 class whose package Spring Boot should scan.

As demoBeanC is in a different package, Spring did not find it during component scanning.

Component Scanning Filters

You can configure component scanning by using different type filters that Spring provides.

By using filters, you can further narrow the set of candidate components from everything in basePackages to everything in the base packages that matches the given filter or filters.

Filters can be of two types: include and exclude filters. As their names suggest, include filters specify which types are eligible for component scanning, while exclude filters specify which types are not.

You can use the include and/or exclude filters with or without the default filter. To disable the default filter, set the useDefaultFilters element of the @ComponentScan annotation to false.

The code to disable the default filter is this.

BlogPostsApplicationDisablingDefaultFilters.java
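A hedged sketch of such a configuration, consistent with the description that follows, is this.

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.ComponentScan;

@SpringBootApplication
@ComponentScan(value = "guru.springframework.blog.componentscan.example.demopackageA",
        useDefaultFilters = false)
public class BlogPostsApplicationDisablingDefaultFilters {

    public static void main(String[] args) {
        // With the default filter disabled, even @Component classes in demopackageA
        // are no longer picked up by the scan
        SpringApplication.run(BlogPostsApplicationDisablingDefaultFilters.class, args);
    }
}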

In the preceding code, the value member defines the specific guru.springframework.blog.componentscan.example.demopackageA package to scan, while the useDefaultFilters member disables the default filter.

The output on running the main class is this.
Output of BlogPostsApplicationDisablingDefaultFilters.java Class

As you can see, the class DemoBeanA in the package demopackageA is unavailable when the useDefaultFilters element of the @ComponentScan annotation is set to false.

Component Scanning Filter Types

Spring provides the FilterType enumeration for the type filters that may be used in conjunction with @ComponentScan.

The available FilterType values are:

  • FilterType.ANNOTATION: Include or exclude those classes with a stereotype annotation
  • FilterType.ASPECTJ: Include or exclude classes using an AspectJ type pattern expression
  • FilterType.ASSIGNABLE_TYPE: Include or exclude classes that extend or implement this class or interface
  • FilterType.REGEX: Include or exclude classes using a regular expression
  • FilterType.CUSTOM: Include or exclude classes using a custom implementation of the org.springframework.core.type.TypeFilter interface

Include Filters

With include filters, you can include certain classes to be scanned by Spring. To include an assignable type, use the includeFilters element of the @ComponentScan annotation with FilterType.ASSIGNABLE_TYPE. Using this filter, you can instruct Spring to scan for classes that extend or implement the class or interface you specify.

The code to use the includeFilters element of @ComponentScan is this.

BlogPostsApplicationIncludeFilter.java
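The original listing is not inlined here. A hedged sketch of an ASSIGNABLE_TYPE include filter, with the scanned base package, the useDefaultFilters setting, and the package of DemoBeanB2 all assumed, is this.

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.FilterType;

import guru.springframework.blog.componentscan.example.demopackageB.DemoBeanB2;

@SpringBootApplication
@ComponentScan(basePackages = "guru.springframework.blog.componentscan.example",
        useDefaultFilters = false,
        includeFilters = @ComponentScan.Filter(type = FilterType.ASSIGNABLE_TYPE,
                classes = DemoBeanB2.class))
public class BlogPostsApplicationIncludeFilter {

    public static void main(String[] args) {
        // Only classes assignable to DemoBeanB2 (DemoBeanB2 itself and DemoBeanB3,
        // which extends it) become scan candidates
        SpringApplication.run(BlogPostsApplicationIncludeFilter.class, args);
    }
}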

The output on running the main class is this.
Output of BlogPostsApplicationIncludeFilter.java Class

As shown in the preceding figure, Spring detected and used the DemoBeanB3 component, which extends DemoBeanB2.


Include Filters using Regex

You can use regular expressions to filter the components to be scanned by Spring. Use the includeFilters element with the nested annotation @ComponentScan.Filter of type FilterType.REGEX to set a pattern.

The code to use an include filter based on a regular expression is this.

BlogPostsApplicationFilterTypeRegex.java
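A hedged sketch of a REGEX include filter consistent with the output described below is this; the exact pattern and package settings in the original listing may differ.

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.FilterType;

@SpringBootApplication
@ComponentScan(basePackages = "guru.springframework.blog.componentscan.example",
        useDefaultFilters = false,
        includeFilters = @ComponentScan.Filter(type = FilterType.REGEX,
                pattern = ".*[A2]"))   // matches class names ending in "A" or "2"
public class BlogPostsApplicationFilterTypeRegex {

    public static void main(String[] args) {
        SpringApplication.run(BlogPostsApplicationFilterTypeRegex.class, args);
    }
}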

The output of the following code snippet is this.
Output of BlogPostsApplicationFilterTypeRegex.java Class
As shown in the preceding figure, the classes whose names end with A or 2 are detected by Spring.

Exclude Filters

The @ComponentScan annotation enables you to exclude those classes that you do not want to scan.

The code to use an exclude filter is this.

BlogPostsApplicationExcludeFilter.java
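A hedged sketch of such an exclude filter, with the base package and the package of DemoBeanB2 assumed, is this.

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.FilterType;

import guru.springframework.blog.componentscan.example.demopackageB.DemoBeanB2;

@SpringBootApplication
@ComponentScan(basePackages = "guru.springframework.blog.componentscan.example",
        excludeFilters = @ComponentScan.Filter(type = FilterType.ASSIGNABLE_TYPE,
                classes = DemoBeanB2.class))
public class BlogPostsApplicationExcludeFilter {

    public static void main(String[] args) {
        // Everything in the base package is scanned except classes assignable to DemoBeanB2
        SpringApplication.run(BlogPostsApplicationExcludeFilter.class, args);
    }
}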

In this code, the nested annotation @ComponentScan.Filter is used to specify the filter type as FilterType.ASSIGNABLE_TYPE and the base class that should be excluded from scanning.

The output is this.
Output of using the FilterType.ASSIGNABLE_TYPE type

As you can see, the class DemoBeanB2 has been excluded from being scanned.

Summary

When using Spring Boot, most of the time the default auto scanning will work for your project. You only need to ensure that your Spring Boot main class is at the base package of your package hierarchy. Spring Boot will automatically perform a component scan in the package of the Spring Boot main class and below.

One related annotation that I didn’t mention in this post is @EntityScan, which is about JPA entity scanning rather than component scanning. Unlike @ComponentScan, the @EntityScan annotation does not create beans. It only identifies which classes should be used by a specific persistence context.


@RequestMapping is one of the most commonly used annotations in Spring Web applications. This annotation maps HTTP requests to handler methods of MVC and REST controllers.

In this post, you’ll see how versatile the @RequestMapping annotation is when used to map Spring MVC controller methods.

Request Mapping Basics

In Spring MVC applications, the DispatcherServlet (the Front Controller shown below) is responsible for routing incoming HTTP requests to handler methods of controllers.

When configuring Spring MVC, you need to specify the mappings between the requests and handler methods.

Spring MVC Dispatcher Servlet and @RequestMapping

To configure the mapping of web requests, you use the @RequestMapping annotation.

The @RequestMapping annotation can be applied to class-level and/or method-level in a controller.

The class-level annotation maps a specific request path or pattern onto a controller. You can then apply additional method-level annotations to make mappings more specific to handler methods.

Here is an example of the @RequestMapping annotation applied to both class and methods.
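The original listing is not included here; a hedged sketch consistent with the description that follows, with return values assumed, is this.

import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.ResponseBody;

@Controller
@RequestMapping("/home")
public class HomeController {

    // Handles requests to /home
    @RequestMapping
    @ResponseBody
    public String get() {
        return "Hello from get";
    }

    // Handles requests to /home/index
    @RequestMapping("/index")
    @ResponseBody
    public String index() {
        return "Hello from index";
    }
}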

With the preceding code, requests to /home will be handled by get(), while requests to /home/index will be handled by index().


@RequestMapping with Multiple URIs

You can have multiple request mappings for a method. For that, add one @RequestMapping annotation with a list of values.
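A hedged sketch of such a mapping, with the controller class name assumed and the path patterns chosen to match the URL list below, is this.

import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.ResponseBody;

@Controller
@RequestMapping("/home")
public class IndexController {

    // "" matches /home and /home/, "/page*" matches /home/page and /home/pageabc,
    // and "/view/**" matches anything under /home/view/
    @RequestMapping(value = {"", "/page*", "/view/**"})
    @ResponseBody
    public String indexMultipleMapping() {
        return "Hello from indexMultipleMapping";
    }
}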

As you can see in this code, @RequestMapping supports wildcards and Ant-style paths. For the preceding code, all of these URLs will be handled by indexMultipleMapping().

  • localhost:8080/home
  • localhost:8080/home/
  • localhost:8080/home/page
  • localhost:8080/home/pageabc
  • localhost:8080/home/view/
  • localhost:8080/home/view/view

@RequestMapping with @RequestParam

The @RequestParam annotation is used with @RequestMapping to bind a web request parameter to the parameter of the handler method.

The @RequestParam annotation can be used with or without a value. The value specifies the request param that needs to be mapped to the handler method parameter, as shown in this code snippet.
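A hedged sketch of the two mappings described below, with the paths taken from the example URLs and the second handler name assumed, is this.

import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.ResponseBody;

@Controller
@RequestMapping("/home")
public class RequestParamController {

    // The "id" request param is bound to the personId method parameter
    @RequestMapping("/id")
    @ResponseBody
    public String getIdByValue(@RequestParam("id") String personId) {
        return "ID is " + personId;
    }

    // value omitted: the request param and the method parameter share the name personId
    @RequestMapping("/personId")
    @ResponseBody
    public String getIdByName(@RequestParam String personId) {
        return "ID is " + personId;
    }
}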

In this code, the request param id will be mapped to the personId parameter of the getIdByValue() handler method.

An example URL is this:
localhost:8090/home/id?id=5

The value element of @RequestParam can be omitted if the request param and handler method parameter names are the same, as shown in the second mapping.

An example URL is this:
localhost:8090/home/personId?personId=5

The required element of @RequestParam defines whether the parameter value is required or not.

In this code snippet, as the required element is specified as false, the getName() handler method will be called for both of these URLs:

  • /home/name?person=xyz
  • /home/name

The defaultValue element of @RequestParam is used to provide a default value when the request param is not provided or is empty.

In this code, if the person request param is empty in a request, the getName() handler method will receive the default value John as its parameter.
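A hedged sketch of the two elements described above is this; the two methods are alternative variants of the same getName() handler, meant to sit inside the /home controller, rather than two mappings that would coexist in one class.

// Variant 1: required = false, so this handler is called for both
// /home/name?person=xyz and /home/name
@RequestMapping("/name")
@ResponseBody
public String getName(@RequestParam(value = "person", required = false) String personName) {
    return "Name is " + personName;
}

// Variant 2: defaultValue kicks in when the person param is missing or empty
@RequestMapping("/name")
@ResponseBody
public String getName(@RequestParam(value = "person", defaultValue = "John") String personName) {
    return "Name is " + personName;
}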

Using @RequestMapping with HTTP Method

The Spring MVC  @RequestMapping annotation is capable of handling HTTP request methods, such as GET, PUT, POST, DELETE, and PATCH.

By default all requests are assumed to be of HTTP GET type.

In order to define a request mapping with a specific HTTP method, you need to declare the HTTP method in @RequestMapping using the method element as follows.
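A hedged sketch consistent with the description below, showing a few of the verbs with placeholder bodies, is this.

import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.ResponseBody;

@Controller
@RequestMapping("/home")
public class HttpMethodController {

    @RequestMapping(method = RequestMethod.GET)
    @ResponseBody
    public String get() {
        return "Handled GET /home";
    }

    @RequestMapping(method = RequestMethod.POST)
    @ResponseBody
    public String post() {
        return "Handled POST /home";
    }

    @RequestMapping(method = RequestMethod.DELETE)
    @ResponseBody
    public String delete() {
        return "Handled DELETE /home";
    }
}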

In the code snippet above, the method element of the @RequestMapping annotations indicates the HTTP method type of the HTTP request.

All the handler methods will handle requests coming to the same URL (/home), but which one executes depends on the HTTP method being used.

For example, a POST request to /home will be handled by the post() method, while a DELETE request to /home will be handled by the delete() method.

You can see how Spring MVC will map the other methods using this same logic.

Using @RequestMapping with Producible and Consumable

The request mapping types can be narrowed down using the produces and consumes elements of the @RequestMapping annotation.

In order to produce the object in the requested media type, you use the produces element of @RequestMapping in combination with the @ResponseBody annotation.

You can also consume the object with the requested media type using the consumes element of @RequestMapping in combination with the @RequestBody annotation.

The code to use producible and consumable with @RequestMapping is this.
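A hedged sketch of the two handlers described below, with paths and return values assumed, is this.

import org.springframework.http.MediaType;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.ResponseBody;

@Controller
@RequestMapping("/home")
public class MediaTypeController {

    // Serializes the return value as JSON in the response body
    @RequestMapping(value = "/prod", produces = MediaType.APPLICATION_JSON_VALUE)
    @ResponseBody
    public String getProduces() {
        return "{\"message\": \"Produces attribute\"}";
    }

    // Accepts request bodies sent as JSON or XML
    @RequestMapping(value = "/cons",
            consumes = {MediaType.APPLICATION_JSON_VALUE, MediaType.APPLICATION_XML_VALUE})
    @ResponseBody
    public String getConsumes(@RequestBody String payload) {
        return "Consumes attribute";
    }
}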

In this code, the getProduces() handler method produces a JSON response. The getConsumes() handler method consumes JSON as well as XML present in requests.

@RequestMapping with Headers

The @RequestMapping annotation provides a headers element to narrow down the request mapping based on the headers present in the request.

You can specify the header element as myHeader = myValue.
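A hedged sketch of the mapping described below, meant to sit inside the /home controller, is this.

// Only requests to /home/head whose Content-Type header is text/plain reach this method
@RequestMapping(value = "/head", headers = "content-type=text/plain")
@ResponseBody
public String post() {
    return "Mapping applied along with headers";
}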

In the above code snippet, the headers attribute of the @RequestMapping annotation narrows down the mapping to the post() method. With this, the post() method will handle requests to /home/head whose content-type header specifies plain text as the value.

You can also indicate multiple header values like this:

This implies that both text/plain and text/html are accepted by the post() handler method.

@RequestMapping with Request Parameters

The params element of the @RequestMapping annotation further helps to narrow down request mapping. Using the params element, you can have multiple handler methods handling requests to the same URL, but with different parameters.

You can define params as myParams = myValue. You can also use the negation operator to specify that a particular parameter value is not supported in the request.
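A hedged sketch of the two handlers described below, meant to sit inside the /home controller, with the parameter values taken from the example URLs, is this.

// Matches /home/fetch?id=10
@RequestMapping(value = "/fetch", params = {"id=10"})
@ResponseBody
public String getParams(@RequestParam("id") String id) {
    return "Fetched parameter using params attribute = " + id;
}

// Matches /home/fetch?personId=20
@RequestMapping(value = "/fetch", params = {"personId=20"})
@ResponseBody
public String getParamsDifferent(@RequestParam("personId") String id) {
    return "Fetched parameter using params attribute = " + id;
}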

In this code snippet, both the getParams() and getParamsDifferent() methods will handle requests coming to the same URL ( /home/fetch) but will execute depending on the params element.

For example, when the URL is /home/fetch?id=10, the getParams() handler method will be executed with the id value 10. For the URL localhost:8080/home/fetch?personId=20, the getParamsDifferent() handler method gets executed with the id value 20.

Using @RequestMapping with Dynamic URIs

The @RequestMapping annotation is used in combination with the @PathVariable annotation to handle dynamic URIs. In this use case, the URI values can act as parameters of the handler methods in the controller. You can also use regular expressions to accept only the dynamic URI values that match the regular expression.
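A hedged sketch consistent with the behavior described below, meant to sit inside the /home controller, is this; the exact regular expression in the original listing may differ.

// /home/fetch/10 binds id to 10
@RequestMapping(value = "/fetch/{id}", method = RequestMethod.GET)
@ResponseBody
public String getDynamicUriValue(@PathVariable int id) {
    return "ID is " + id;
}

// /home/fetch/category/shirt matches; /home/fetch/10/shirt does not,
// because "10" fails the [a-z]+ regular expression
@RequestMapping(value = "/fetch/{id:[a-z]+}/{name}", method = RequestMethod.GET)
@ResponseBody
public String getDynamicUriValueRegex(@PathVariable("name") String name) {
    return "Name is " + name;
}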

In this code, the method getDynamicUriValue() will execute for a request to localhost:8080/home/fetch/10. Also, the id parameter of the getDynamicUriValue() handler method will be populated with the value 10 dynamically.

The method getDynamicUriValueRegex() will execute for a request to localhost:8080/home/fetch/category/shirt. However, an exception will be thrown for a request to /home/fetch/10/shirt as it does not match the regular expression.

@PathVariable works differently from @RequestParam. You use @RequestParam to obtain the values of the query parameters from the URI. On the other hand, you use @PathVariable to obtain the parameter values from the URI template.

The @RequestMapping Default Handler Method

In the controller class, you can have a default handler method that gets executed when there is a request for a default URI.

Here is an example of a default handler method.

In this code, a request to /home will be handled by the default() method, as the annotation does not specify any value.


@RequestMapping Shortcuts

Spring 4.3 introduced method-level variants, also known as composed annotations, of @RequestMapping. The composed annotations better express the semantics of the annotated methods. They act as wrappers to @RequestMapping and have become the standard way of defining endpoints.

For example, @GetMapping is a composed annotation that acts as a shortcut for @RequestMapping(method = RequestMethod.GET).
The method level variants are:

  • @GetMapping
  • @PostMapping
  • @PutMapping
  • @DeleteMapping
  • @PatchMapping

The following code shows using the composed annotations.
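A hedged sketch using the composed annotations, with the paths, parameter types, and bodies assumed, is this.

import org.springframework.web.bind.annotation.DeleteMapping;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PatchMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.PutMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/persons")
public class PersonController {

    @GetMapping("/{id}")
    public String getPerson(@PathVariable Long id) {
        return "Fetched person " + id;
    }

    @PostMapping
    public String addPerson() {
        return "Added person";
    }

    @PutMapping("/{id}")
    public String updatePerson(@PathVariable Long id) {
        return "Updated person " + id;
    }

    @DeleteMapping("/{id}")
    public String deletePerson(@PathVariable Long id) {
        return "Deleted person " + id;
    }

    @PatchMapping("/{id}")
    public String patchPerson(@PathVariable Long id) {
        return "Patched person " + id;
    }
}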

In this code, each of the handler methods is annotated with a composed variant of @RequestMapping. Although each variant can be used interchangeably with @RequestMapping and its method attribute, it’s considered a best practice to use the composed variants, primarily because they reduce the configuration metadata on the application side and make the code more readable.

@RequestMapping Conclusion

As you can see in this post, the @RequestMapping annotation is very versatile. You can use this annotation to configure Spring MVC to handle a variety of use cases. It can be used to configure traditional web page requests as well as RESTful web services in Spring MVC.

 


I have met many developers who refer to tests as “Unit Tests” when they are actually integration tests. In service layers, I’ve seen tests referred to as unit tests, but written with dependencies on the actual service, such as a database, web service, or some message server. Those are part of integration testing. Even if you’re just using the Spring Context to auto-wire dependencies, your test is an integration test. Rather than using the real services, you can use Mockito mocks and spies to keep your tests unit tests and avoid the overhead of running integration tests.

This is not to say Integration tests are bad. There is certainly a role for integration tests. They are a necessity.

But compared to unit tests, integration tests are sloooowwwww. Very slow. Your typical unit test will execute in a fraction of a second. Even complex unit tests on obsolete hardware will still complete sub-second.

Integration tests, on the other hand, take several seconds to execute. It takes time to start the Spring Context. It takes time to start an H2 in-memory database. It takes time to establish a database connection.

While this may not seem like much, it adds up quickly on a large project. As you add more and more tests, the length of your build becomes longer and longer.

No developer wants to break the build. So we run all the tests to be sure. As we code, we’ll be running the full suite of tests multiple times a day. For your own productivity, the suite of tests needs to run quickly.

If you’re writing Integration tests where a unit test would suffice, you’re not only impacting your own personal productivity. You’re impacting the productivity of the whole team.

On a recent client engagement, the development team was very diligent about writing tests. Which is good. But, the team favored writing Integration tests. Frequently, integration tests were used where a Unit test could have been used. The build was getting slower and slower. Because of this, the team started refactoring their tests to use Mockito mocks and spies to avoid the need for integration tests.

They were still testing the same objectives. But Mockito was being used to fill in for the dependency driving the need for the integration test.

For example, Spring Boot makes it easy to test using an H2 in-memory database with JPA and repositories supplied by Spring Data JPA.

But why not use Mockito to provide a mock for your Spring Data JPA repository?

Unit tests should be atomic, lightweight, and fast, and should run as isolated units. Additionally, unit tests in Spring should not bring up a Spring Context. I have written about the different types of tests in my earlier Testing Software post.

I have already written a series of posts on JUnit and a post on Testing Spring MVC With Spring Boot 1.4: Part 1. In the latter, I discussed unit testing controllers in a Spring MVC application.

I feel the majority of your tests should be unit tests, not integration tests. If you’re writing your code following the SOLID Principles of OOP, your code is already well structured to accept Mockito mocks.

In this post, I’ll explain how to use Mockito to test the service layer of a Spring Boot MVC application. If Mockito is new for you, I suggest reading my Mocking in Unit Tests With Mockito post first.

Mockito Mocks vs Spies

In unit testing, a test double is a replacement for a dependent component (collaborator) of the object under test. The test double does not have to behave exactly like the collaborator. The purpose is to mimic the collaborator to make the object under test think that it is actually using the collaborator.

Based on the role played during testing, there can be different types of test doubles. In this post we’re going to look at mocks and spies.

There are some other types of test doubles, such as dummy objects, fake objects, and stubs. If you’re using Spock, one of my favorite tricks was to cast a map of closures in as a test double. (One of the many fun things you can do with Groovy!)

What makes a mock object different from the others is that it has behavior verification. Which means the mock object verifies that it (the mock object) is being used correctly by the object under test. If the verification succeeds, you can conclude the object under test will correctly use the real collaborator.

Spies, on the other hand, provide a way to spy on a real object. With a spy, you can call all the real underlying methods of the object while still tracking every interaction, just as you would with a mock.

Things get a bit different for Mockito mocks vs. spies. A Mockito mock allows us to stub a method call, which means we can stub a method to return a specific object. For example, we can mock a Spring Data JPA repository in a service class to stub a getProduct() method of the repository to return a Product object. To run the test, we don’t need the database to be up and running: a pure unit test.

A Mockito spy is a partial mock. We can mock a part of the object by stubbing a few methods, while real method invocations are used for the others. In other words, calling a method on a spy will invoke the actual method unless we explicitly stub the method, hence the term partial mock.

Let’s look at mocks vs. spies in action, with a Spring Boot MVC application.

The Application Under Test

Our application contains a single Product JPA entity. CRUD operations are performed on the entity by ProductRepository using a CrudRepository supplied by Spring Data JPA. If you look at the code, you will see all we did was extend the Spring Data JPA CrudRepository to create our ProductRepository. Under the hood, Spring Data JPA provides implementations to manage entities for most common operations, such as saving an entity, updating it, deleting it, or finding it by id.

The service layer is developed following the SOLID design principles. We used the “Code to an Interface” technique, while leveraging the benefits of dependency injection. We have a ProductService interface and a ProductServiceImpl implementation. It is this ProductServiceImpl class that we will unit test.

Here is the code of ProductServiceImpl .

ProductServiceImpl.java

In the ProductServiceImpl class, you can see that ProductRepository is @Autowired in. The repository is used to perform CRUD operations, which makes it a mock candidate for testing ProductServiceImpl.

Testing with Mockito Mocks

Coming to the testing part, let’s take up the getProductById() method of ProductServiceImpl. To unit test the functionality of this method, we need to mock the external Product and ProductRepository objects. We can do it either by using Mockito’s mock() method or through the @Mock annotation. We will use the latter option, since it is convenient when you have a lot of mocks to inject.

Once we declare a mock with the @Mock annotation, we also need to initialize it. Mock initialization happens before each test method. We have two options: using the JUnit test runner MockitoJUnitRunner, or MockitoAnnotations.initMocks(). Both are equivalent solutions.

Finally, you need to provide the mocks to the object under test. You can do it by calling the setProductRepository() method of ProductServiceImpl or by using the @InjectMocks annotation.

The following code creates the Mockito mocks, and sets them on the object under test.
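The original listing is not inlined here; a minimal sketch of that setup, assuming JUnit 4 with the Mockito runner and the @Mock/@InjectMocks annotations (the runner’s import path depends on your Mockito version), is this.

import org.junit.runner.RunWith;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.runners.MockitoJUnitRunner;

@RunWith(MockitoJUnitRunner.class)
public class ProductServiceImplMockTest {

    // Mockito creates a mock ProductRepository before each test...
    @Mock
    private ProductRepository productRepository;

    // ...and injects it into the object under test
    @InjectMocks
    private ProductServiceImpl productService;
}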

Note: Since we are using the Spring Boot Test starter dependency, Mockito core is automatically pulled into our project. Therefore, no extra dependency declaration is required in our Maven POM.

Once our mocks are ready, we can start stubbing methods on the mock. Stubbing means simulating the behavior of a mock object’s method. We can stub a method on the ProductRepository mock object by setting up an expectation on the method invocation.

For example, we can stub the findOne() method of the ProductRepository mock to return a Product when called. We then call the method whose functionality we want to test, followed by an assertion, like this.
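A hedged sketch of such a test, continuing the test class above and assuming a Long ID and static imports of org.mockito.Mockito.when and org.junit.Assert.assertEquals, is this.

@Test
public void shouldReturnProduct_whenGetProductByIdIsCalled() {
    Product product = new Product();

    // Stub the repository: a call to findOne(1L) returns our product
    when(productRepository.findOne(1L)).thenReturn(product);

    // Exercise the service method and check that it hands back the stubbed product
    Product retrievedProduct = productService.getProductById(1L);
    assertEquals(product, retrievedProduct);
}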

This approach can be used to test the other methods of ProductServiceImpl, leaving aside deleteProduct(), which has void as the return type.

To test deleteProduct(), we will stub it to do nothing, then call deleteProduct(), and finally assert that the delete() method has indeed been called.
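A hedged sketch of that test, continuing the same test class and assuming static imports of doNothing, verify, and times from org.mockito.Mockito, is this.

@Test
public void shouldCallRepositoryDelete_whenDeleteProductIsCalled() {
    Long productId = 1L;

    // Stub the void delete() method to do nothing when invoked
    doNothing().when(productRepository).delete(productId);

    // Exercise the service, then verify the repository method was called exactly once
    productService.deleteProduct(productId);
    verify(productRepository, times(1)).delete(productId);
}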

Here is the complete test code for using Mockito mocks:

ProductServiceImplMockTest.java

Note: doNothing() is the default behavior of a Mockito mock for void methods, so the explicit stub mainly documents intent. Other options for stubbing a void method include doThrow() and doAnswer().

Testing with Mockito Spies

We have tested our ProductServiceImpl with mocks. So why do we need spies at all? Actually, we don’t need one in this use case.

Outside Mockito, partial mocks were present for a long time to allow mocking only part (a few methods) of an object. But partial mocks were considered code smells, primarily because if you need to partially mock a class while ignoring the rest of its behavior, then this class is likely violating the Single Responsibility Principle by doing more than one thing.

Until Mockito 1.8, Mockito spies were not producing real partial mocks. However, after many debates and discussions, and after finding a valid use case for partial mocks, support for them was added in Mockito 1.8.

One way to partially mock an object is thenCallRealMethod(). It lets a call on a mock invoke the underlying real method, like this.

Be careful that the real implementation is ‘safe’ when using thenCallRealMethod(). The actual implementation needs to be able to run in the context of your test.

Another approach for partial mocking is to use a spy. As I mentioned earlier, all method calls on a spy are real calls to the underlying method, unless stubbed. So, you can also use a Mockito spy to partially mock an object by stubbing just a few of its methods.

Here is the code to provide a Mockito spy for our ProductServiceImpl.

ProductServiceImplSpyTest.java

In this test class, notice we used MockitoJUnitRunner instead of MockitoAnnotations.initMocks() for our annotations.

For the first test, we expected NullPointerException because the getProductById() call on the spy will invoke the actual getProductById() method of ProductServiceImpl, and our repository implementations are not created yet.

In the second test, we are not expecting any exception, as we are stubbing the save() method of ProductRepository.

The second and third methods are the relevant use cases of a spy in the context of our application: verifying method invocations.

Conclusion

In Spring Boot applications, by using Mockito, you replace the @Autowired components in the class you want to test with mock objects. In addition to unit testing the service layer, you will be unit testing controllers by injecting mock services. To unit test the DAO layer, you will mock the database APIs. The list is endless; it depends on the type of application you are working on and the object under test. If you’re following the Dependency Inversion Principle and using Dependency Injection, mocking becomes easy.

Use partial mocking to test third-party APIs and legacy code. You won’t require partial mocks for new, test-driven, and well-designed code that follows the Single Responsibility Principle. Also note that when()-style stubbing on a spy invokes the real method while the stub is being set up, so prefer the doReturn() family of methods when stubbing spies. Given a choice between thenCallRealMethod() on a mock and a spy, use the former, as it is lightweight. Using thenCallRealMethod() on a mock does not create an actual object instance, but a bare-bones shell instance of the class to track interactions. However, if you use a spy, you create an actual object instance. As regards spies, use one only if you want to modify the behavior of a small chunk of the API and then rely mostly on actual method calls.

The code for this post is available for download here.


This is part 6 of the tutorial series for building a web application using Spring Boot. In this post we look at adding a DAO Authentication provider for Spring Security.

We started off with the first part by creating our Spring project using the Spring Initializr. In part 2, we rendered a web page using Thymeleaf and Spring MVC. This was followed by part 3 where we looked at setting up Spring Data JPA for database persistence. Part 4 was all about consolidating everything to provide a working Spring Boot MVC Web Application capable of performing CRUD operations.

In the previous part 5 of this series, we configured a basic in-memory authentication provider. It’s a good starting point to learn Spring Security, but as I mentioned there, it’s not for enterprise applications. A production-quality implementation would likely use the DAO authentication provider.

In this part of the series, I will discuss Spring Security with the DAO authentication provider to secure our Spring Boot web application. We will implement both authentication and role-based authorization with credentials stored in the H2 database. For persistence, we will use the Spring Data JPA implementation of the repository pattern, which I covered in part 3. Although there are several JPA implementations, Hibernate is by far the most popular.

As the Spring Data JPA dependency is included in our Maven POM, Hibernate gets pulled in and configured with sensible default properties via Spring Boot.

This post builds upon the 5 previous posts. If you’re not familiar with all the content around Spring, I suggest you go through this series from the start.

JPA Entities

Our application already has a Product JPA entity. We’ll add two more entities, User and Role. Following the SOLID “program to an interface” principle, we will start by writing an interface followed by an abstract class for our entities.

DomainObject.java

AbstractDomainClass.java

The entity classes are as follows.

User.java

Role.java

The User and Role JPA entities are part of a many-to-many relationship. Also, notice that the password field of the User class is marked as @Transient.

That’s because we don’t want to store the password in text form.

Instead, we will store the encrypted form of the password.
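The entity listings are not inlined here. A hedged sketch of the User side of the relationship, with field names assumed, is this.

import java.util.ArrayList;
import java.util.List;
import javax.persistence.Entity;
import javax.persistence.FetchType;
import javax.persistence.ManyToMany;
import javax.persistence.Transient;

@Entity
public class User extends AbstractDomainClass {

    private String username;

    // Never persisted; only the encrypted form below is stored
    @Transient
    private String password;

    private String encryptedPassword;

    // Many users can have many roles
    @ManyToMany(fetch = FetchType.EAGER)
    private List<Role> roles = new ArrayList<>();

    // getters and setters omitted for brevity
}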

JPA Repositories

Spring Data JPA provides the CRUD Repository feature. Using it, we just define the repository interfaces for our User and Role entities to extend CrudRepository.

The Spring Data JPA repositories for the User and Role entities are as follows.

UserRepository.java

RoleRepository.java

By extending CrudRepository, both the repositories inherit several methods for working with entity persistence, including methods for saving, deleting, and finding entities. Spring Data JPA uses generics and reflection to generate the concrete implementations of both the interfaces.

Spring Data JPA Services

We can now create the services that will use Spring Data JPA to perform CRUD operations on the User and Role entities.

Of course, we will follow the Interface Segregation principle to maintain loose coupling. It’s always best to “program to interface”, especially when leveraging the benefits of Spring’s dependency injection.

So, let’s start with the service interfaces.

CRUDService.java

UserService.java

RoleService.java

Both RoleService and UserService extend CRUDService, which defines the basic CRUD operations on entities. UserService, with the additional findByUsername() method, is a more specialized service interface for CRUD operations on User.

We have made the service interfaces generic to mask our service implementations using the Façade design pattern. The implementations can be Spring Data JPA with repository, DAO, or Map patterns, plain JDBC, or even some external web service. The client code does not need to be aware of the implementation. By using interfaces, we are able to leverage multiple concrete implementations of the services.

We’ll write the service implementation classes using the Spring Data JPA repository pattern.

UserServiceImpl.java

In this class, we auto-wired in UserRepository and EncryptionService. Going ahead, we will create EncryptionService using the Jasypt library to add encryption capabilities for storing user passwords. The overridden methods of this class use the UserRepository we created to perform CRUD operations on User.

The RoleServiceImpl provides a similar implementation for RoleService.

RoleServiceImpl.java

Password Encryption Service

The Jasypt library provides an implementation for unidirectional encryption. We will use Jasypt to encrypt a password before storing it to the database. For authentication, we will provide Jasypt the received password. Under the hood, Jasypt will encrypt the received password and compare it to the stored one.

Let’s add the Jasypt dependency to our Maven POM.

Note: The latest available Jasypt 1.9.2 targets Spring Security 3. But even with Spring Security 4, which we are using, Jasypt doesn’t have compatibility issues.

With Jasypt pulled in, we will write a bean for Jasypt’s StrongPasswordEncryptor, a utility class for easily performing high-strength password encryption and checking. The configuration class, CommonBeanConfig, is this.

CommonBeanConfig.java
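A minimal sketch of that configuration class is this.

import org.jasypt.util.password.StrongPasswordEncryptor;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class CommonBeanConfig {

    // Expose Jasypt's StrongPasswordEncryptor as a Spring-managed bean
    @Bean
    public StrongPasswordEncryptor strongPasswordEncryptor() {
        return new StrongPasswordEncryptor();
    }
}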

Our generic EncryptionService interface will define two methods to encrypt and compare passwords.

EncryptionService.java

The implementation class is this.

EncryptionServiceImpl.java

In this implementation class, we autowired the StrongPasswordEncryptor bean. The encryptPassword() method encrypts the password passed to it, and the checkPassword() method returns the boolean result of the password comparison.
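A hedged sketch of the implementation, assuming the method names used in the surrounding text and setter injection, is this.

import org.jasypt.util.password.StrongPasswordEncryptor;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

@Service
public class EncryptionServiceImpl implements EncryptionService {

    private StrongPasswordEncryptor strongPasswordEncryptor;

    @Autowired
    public void setStrongPasswordEncryptor(StrongPasswordEncryptor strongPasswordEncryptor) {
        this.strongPasswordEncryptor = strongPasswordEncryptor;
    }

    // Encrypt (digest) the plain-text password for storage
    @Override
    public String encryptPassword(String password) {
        return strongPasswordEncryptor.encryptPassword(password);
    }

    // Compare a plain-text password against the stored, encrypted one
    @Override
    public boolean checkPassword(String plainPassword, String encryptedPassword) {
        return strongPasswordEncryptor.checkPassword(plainPassword, encryptedPassword);
    }
}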

User Details Service Implementation

Spring Security provides a UserDetailsService interface to lookup the username, password and GrantedAuthorities for any given user. This interface provides only one method, loadUserByUsername(). This method returns an implementation of Spring Security’s UserDetails interface that provides core user information.

The UserDetails implementation of our application is this.

UserDetailsImpl.java

In this class, we have defined the fields of our data model and their corresponding setter methods. The SimpleGrantedAuthority we set on Line 16 is a Spring Security implementation of an authority that we will convert from our role. Think of an authority as being a “permission” or a “right” typically expressed as strings.

We need to provide an implementation of the loadUserByUsername() method of UserDetailsService. But the challenge is that the findByUsername() method of our UserService returns a User entity, while Spring Security expects a UserDetails object from the loadUserByUsername() method.

We will create a converter for this to convert User to UserDetails implementation.

UserToUserDetails.java

This class implements the Spring Core Converter interface and overrides the convert() method, which accepts a User object to convert. In Line 16, the code instantiates a UserDetailsImpl object, and from Line 19 to Line 26, the code initializes the UserDetailsImpl object with data from User.

With the converter ready, it’s now easy to implement the UserDetailsService interface. Here is our implementation.

UserDetailsServiceImpl.java

In the UserDetailsServiceImpl class, we auto wired in UserService and Converter. The lone overridden method, loadUserByUsername(), converts a User to UserDetails by calling the convert() method of the Converter.
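A hedged sketch of that implementation, with the bean name and setter injection assumed, is this.

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.core.convert.converter.Converter;
import org.springframework.security.core.userdetails.UserDetails;
import org.springframework.security.core.userdetails.UserDetailsService;
import org.springframework.security.core.userdetails.UsernameNotFoundException;
import org.springframework.stereotype.Service;

@Service("userDetailsService")
public class UserDetailsServiceImpl implements UserDetailsService {

    private UserService userService;
    private Converter<User, UserDetails> userToUserDetails;

    @Autowired
    public void setUserService(UserService userService) {
        this.userService = userService;
    }

    @Autowired
    public void setUserToUserDetails(Converter<User, UserDetails> userToUserDetails) {
        this.userToUserDetails = userToUserDetails;
    }

    // Load the User entity and convert it into the UserDetails object Spring Security expects
    @Override
    public UserDetails loadUserByUsername(String username) throws UsernameNotFoundException {
        return userToUserDetails.convert(userService.findByUsername(username));
    }
}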

Security Configuration

The current security configuration class, SpringSecConfig, extends WebSecurityConfigurerAdapter to configure two things: an authentication provider and the application routes to protect. Our route configuration will remain the same. However, we need to register the DAO authentication provider for use with Spring Security.

We will start by setting up a password encoder to encode passwords present in the UserDetails object returned by the configured UserDetailsService. We will define a new bean for Spring Security’s PasswordEncoder that takes in the StrongPasswordEncryptor bean.

Remember that we created the StrongPasswordEncryptor bean earlier in the CommonBeanConfig Spring configuration class?

Next, we will set up the DAO authentication provider, like this.
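The listing is not inlined here; a hedged sketch of the bean method described below, meant to sit inside the security configuration class, is this.

@Bean
public DaoAuthenticationProvider daoAuthenticationProvider(PasswordEncoder passwordEncoder,
                                                           UserDetailsService userDetailsService) {
    DaoAuthenticationProvider daoAuthenticationProvider = new DaoAuthenticationProvider();

    // Use the Jasypt-backed encoder to verify submitted passwords against stored ones
    daoAuthenticationProvider.setPasswordEncoder(passwordEncoder);

    // Fetch users from the database and expose them as UserDetails
    daoAuthenticationProvider.setUserDetailsService(userDetailsService);
    return daoAuthenticationProvider;
}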

In this code, we passed the previously configured PasswordEncoder and UserDetailsService to daoAuthenticationProvider(). The PasswordEncoder uses the Jasypt library to encode the password and verify that passwords match. The UserDetailsService fetches the User object from the database and hands it over to Spring Security as a UserDetails object. In the method, we instantiated the DaoAuthenticationProvider and initialized it with the PasswordEncoder and UserDetailsService implementations.

Next, we need to auto wire in the AuthenticationProvider as we want the Spring Context to manage it.

We will also auto wire in the AuthenticationManagerBuilder. Spring Security will use this to set up the AuthenticationProvider.
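A hedged sketch of that wiring inside the security configuration class, with the method name assumed, is this.

@Autowired
private AuthenticationProvider authenticationProvider;

@Autowired
public void configureAuthManager(AuthenticationManagerBuilder authenticationManagerBuilder) {
    // Register our DAO authentication provider with Spring Security
    authenticationManagerBuilder.authenticationProvider(authenticationProvider);
}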

The complete SpringSecConfig class is this.

SpringSecConfig.java

Application Bootstrapping with Seed Data

For the seed data of the application, we have an ApplicationListener implementation class that gets called upon the ContextRefreshedEvent on startup. In this class, we will use Spring to inject the UserRepository and RoleRepository Spring Data JPA repositories for our use. We will create two User and two Role entities and save them to the database when the application starts. The code of this class is this.

SpringJpaBootstrap.java

This class in addition to loading product data, invokes the following methods to load users and roles at startup:

  • loadUsers(): Stores two User entities. One with “user” and the other with “admin” as both the user name and password.
  • loadRoles(): Stores two Role entities for the “USER” and “ADMIN” roles.
  • assignUsersToUserRole(): Assigns the User with username “user” to the “USER” role.
  • assignUsersToAdminRole(): Assigns the User with username “admin” to the “ADMIN” role.

Thymeleaf Extras Module

In the previous part 5 of this series, I discussed the Thymeleaf “extras” integration module to integrate Spring Security with our Thymeleaf templates. Things will largely remain unchanged in this presentation layer, except for two instances.

Currently, both the USER and ADMIN roles are referred to from the presentation layer code as ROLE_USER and ROLE_ADMIN. This was required because we relied on Spring Security’s in-memory authentication provider to manage our users and roles, and Spring Security internally maps a configured role to the role name prefixed with ROLE_. With the DAO authentication provider, our roles are mapped to authorities as-is (we did this in the UserToUserDetails converter), and we can refer to them directly from code as USER and ADMIN.

The second change is brought in by GrantedAuthority used by the Spring Security UserDetails interface. If you recall, we mapped our Role implementation to SimpleGrantedAuthority in the UserToUserDetails converter.

Therefore, in the Thymeleaf templates, we need to change the hasRole() and hasAnyRole() authorization expressions to hasAuthority() and hasAnyAuthority().

The affected templates are header.html and products.html.

header.html

products.html