An exciting feature in Spring Framework 5 is the new Web Reactive framework, which allows for reactive web applications. Reactive programming is about developing systems that are fully reactive and non-blocking. Such systems are suitable for event-loop style processing that can scale with a small number of threads.

Spring Framework 5 embraces Reactive Streams to enable developing systems based on the Reactive Manifesto published in 2014.

The Spring Web Reactive framework stands separately from Spring MVC. This is because Spring MVC is developed around the Java Servlet API, which is blocking. While popular Java application servers such as Tomcat and Jetty have evolved to offer non-blocking operations, the Java Servlet API has not.

From a programming perspective, reactive programming involves a major shift from imperative style logic to a declarative composition of asynchronous logic.

In this post, I’ll explain how to develop a Web Reactive application with the Spring Framework 5.0.

Spring Web Reactive Types

Under the covers, Spring Web Reactive uses Reactor, which is a Reactive Streams implementation. The Spring Framework extends the Reactive Streams Publisher interface with the Flux and Mono reactive types.

The Flux data type represents zero to many objects (0..N), while the Mono data type represents zero to one (0..1).

If you’d like a deeper dive on reactive types, check out Understanding Reactive Types by Sebastien Deleuze.

The Web Reactive Application

The application that we will create is a web reactive application that performs operations on domain objects. To keep it simple, we will use an in-memory repository implementation to simulate CRUD operations in this post. In later posts, we will go reactive with Spring Data.

Spring 5 added the new spring-webflux module for reactive programming that we will use in our application. The application is composed of these components:

  • Domain object: Product in our application.
  • Repository: A repository interface with an implementation class to mimic CRUD operations in a Map.
  • Handler: A handler class to interact with the repository layer.
  • Server: A non-blocking Web server with a single-threaded event loop. For this application, we will look at how to use both Netty and Tomcat to serve requests.

The Maven POM

For web reactive programming, you need the new spring-webflux and reactive-streams modules as dependencies in your Maven POM.

To host the application in a supported runtime, you need to add its dependency. The supported runtimes are:

  • Tomcat: org.apache.tomcat.embed:tomcat-embed-core
  • Jetty: org.eclipse.jetty:jetty-server and org.eclipse.jetty:jetty-servlet
  • Reactor Netty: io.projectreactor.ipc:reactor-netty
  • Undertow: io.undertow:undertow-core

The code to add dependencies for both embedded Tomcat and Netty is this.
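The original listing is not reproduced here, but a representative dependency section would look like this (the versions shown are illustrative of the Spring 5.0 milestone era and are assumptions, not the post’s exact versions):

```xml
<dependencies>
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-webflux</artifactId>
        <version>5.0.0.RC2</version>
    </dependency>
    <dependency>
        <groupId>org.reactivestreams</groupId>
        <artifactId>reactive-streams</artifactId>
        <version>1.0.0</version>
    </dependency>
    <!-- Embedded Tomcat -->
    <dependency>
        <groupId>org.apache.tomcat.embed</groupId>
        <artifactId>tomcat-embed-core</artifactId>
        <version>8.5.15</version>
    </dependency>
    <!-- Reactor Netty -->
    <dependency>
        <groupId>io.projectreactor.ipc</groupId>
        <artifactId>reactor-netty</artifactId>
        <version>0.6.3.RELEASE</version>
    </dependency>
    <!-- Jackson, for reactive JSON serialization and deserialization -->
    <dependency>
        <groupId>com.fasterxml.jackson.core</groupId>
        <artifactId>jackson-databind</artifactId>
        <version>2.9.0.pr3</version>
    </dependency>
</dependencies>
```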

The final dependency is for reactive serialization and deserialization to and from JSON with Jackson.

Note – This is a pre-release of Jackson, which includes non-blocking serialization and deserialization. (Version 2.9.0 had not been released at the time of writing.)

As we are using the latest milestone release of Spring Boot, remember to add the Spring milestones repository:
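A typical milestones repository entry looks like this:

```xml
<repositories>
    <repository>
        <id>spring-milestones</id>
        <name>Spring Milestones</name>
        <url>https://repo.spring.io/milestone</url>
    </repository>
</repositories>
```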

The complete Maven POM combines the dependencies above with the milestones repository.


The Domain Object

Our application has a Product domain object on which operations will be performed. The code for the Product object is this.
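The original listing is not reproduced here, but a sketch of such a Product POJO might look like this (the field names are assumptions based on the description below; the Jackson annotations dependency is required on the classpath):

```java
import com.fasterxml.jackson.annotation.JsonProperty;

public class Product {

    @JsonProperty("id")
    private int id;

    @JsonProperty("name")
    private String name;

    @JsonProperty("price")
    private double price;

    public Product() {
    }

    public Product(int id, String name, double price) {
        this.id = id;
        this.name = name;
        this.price = price;
    }

    public int getId() { return id; }
    public void setId(int id) { this.id = id; }

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public double getPrice() { return price; }
    public void setPrice(double price) { this.price = price; }
}
```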


Product is a POJO with fields representing product information. Each field has its corresponding getter and setter methods. @JsonProperty is a Jackson annotation to map external JSON properties to the Product fields.

The Repository

The repository layer of the application is built on the ProductRepository interface with methods to save a product, retrieve a product by ID, and retrieve all products.

In this example, we are mimicking the functionality of a reactive data store with a simple ConcurrentHashMap implementation.
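Such an interface might be sketched as follows (the method signatures are assumptions based on the operations described in this post; Reactor must be on the classpath):

```java
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

public interface ProductRepository {

    // Retrieve a product by ID (0..1 result)
    Mono<Product> getProduct(int id);

    // Retrieve all products (0..N results)
    Flux<Product> getAllProducts();

    // Save a product; the returned Mono completes when the save is done
    Mono<Void> saveProduct(Mono<Product> productMono);
}
```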


The important things in this interface are the new Mono and Flux reactive types of Project Reactor. Both of these reactive types, along with the other types of the Reactive API, are capable of serving a huge number of requests concurrently and of handling operations with latency. These types make operations, such as requesting data from a remote server, more efficient. Unlike traditional processing that blocks the current thread while waiting for a result, Reactive APIs are non-blocking as they deal with streams of data.

To understand Mono and Flux, let’s look at the two main interfaces of the Reactive API: Publisher, which is the source of events T in the stream, and Subscriber, which is the destination for those events.

Both Mono and Flux implement Publisher. The difference lies in cardinality, which is critical in reactive streams.


  • A Flux observes 0 to N items and completes either successfully or with an error.
  • A Mono observes 0 or 1 item, with Mono<Void> hinting at most 0 items.

Note: Reactive APIs were initially designed to deal with N elements, or streams of data. So Reactor initially came only with Flux. But, while working on Spring Framework 5, the team found a need to distinguish between streams of 1 or N elements, so the Mono reactive type was introduced.

Here is the repository implementation class.
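A sketch of the implementation, matching the description of getProduct(), getAllProducts(), and saveProduct() below (the seed data and exact structure are assumptions, not the post’s original listing):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

public class ProductRepositoryInMemoryImpl implements ProductRepository {

    private final Map<Integer, Product> productMap = new ConcurrentHashMap<>();

    public ProductRepositoryInMemoryImpl() {
        // Illustrative seed data
        productMap.put(1, new Product(1, "Spring Guru Mug", 11.99));
        productMap.put(2, new Product(2, "Spring Guru T-Shirt", 15.99));
    }

    @Override
    public Mono<Product> getProduct(int id) {
        // Emits the product if present; completes empty for a null value
        return Mono.justOrEmpty(productMap.get(id));
    }

    @Override
    public Flux<Product> getAllProducts() {
        // Emits each Product in the Map as a stream element
        return Flux.fromIterable(productMap.values());
    }

    @Override
    public Mono<Void> saveProduct(Mono<Product> productMono) {
        // Non-blocking: doOnNext registers a callback that stores the product
        return productMono.doOnNext(
                product -> productMap.put(product.getId(), product)).then();
    }
}
```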


This ProductRepositoryInMemoryImpl class uses a Map implementation to store Product objects.

In the overridden getProduct() method, the call to Mono.justOrEmpty() creates a new Mono that emits the specified item (the Product object in this case), provided the Product object is not null. For a null value, the Mono.justOrEmpty() method completes by emitting onComplete.

In the overridden getAllProducts() method, the call to Flux.fromIterable() creates a new Flux that emits the items (Product objects) present in the Iterable passed as a parameter.

In the overridden saveProduct() method, the call to doOnNext() accepts a callback that stores the provided Product into the Map. What we have here is a classic example of non-blocking programming: execution control does not block and wait for the product storing operation.

The Product Handler

The Product handler is similar to a typical service layer in Spring MVC. It interacts with the repository layer. Following the SOLID principles, we would want client code to interact with this layer through an interface. So, we start with a ProductHandler interface.

The code of the ProductHandler interface is this.
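A sketch of the interface, assuming the Spring 5.0 WebFlux functional API (the method names follow the descriptions later in this post):

```java
import org.springframework.web.reactive.function.server.ServerRequest;
import org.springframework.web.reactive.function.server.ServerResponse;

import reactor.core.publisher.Mono;

public interface ProductHandler {

    Mono<ServerResponse> getProductFromRepository(ServerRequest request);

    Mono<ServerResponse> saveProductToRepository(ServerRequest request);

    Mono<ServerResponse> getAllProductsFromRepository(ServerRequest request);
}
```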


The implementation class, ProductHandlerImpl, is this.
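The original listing is not reproduced here; the following is a sketch assuming the Spring 5.0 WebFlux functional API, so its line numbers will not match the original:

```java
import org.springframework.http.MediaType;
import org.springframework.web.reactive.function.server.ServerRequest;
import org.springframework.web.reactive.function.server.ServerResponse;

import reactor.core.publisher.Mono;

import static org.springframework.web.reactive.function.BodyInserters.fromObject;

public class ProductHandlerImpl implements ProductHandler {

    private final ProductRepository repository;

    public ProductHandlerImpl(ProductRepository repository) {
        this.repository = repository;
    }

    @Override
    public Mono<ServerResponse> getProductFromRepository(ServerRequest request) {
        // Obtain the product ID sent with the request
        int id = Integer.parseInt(request.pathVariable("id"));
        // Build a NOT_FOUND response to fall back on
        Mono<ServerResponse> notFound = ServerResponse.notFound().build();
        // Ask the repository for the product as a Mono, then map it to a response
        return repository.getProduct(id)
                .flatMap(product -> ServerResponse.ok()
                        .contentType(MediaType.APPLICATION_JSON)
                        .body(fromObject(product)))
                .switchIfEmpty(notFound);
    }

    @Override
    public Mono<ServerResponse> saveProductToRepository(ServerRequest request) {
        // Convert the request body to a Mono, save it, and signal success
        Mono<Product> productMono = request.bodyToMono(Product.class);
        return ServerResponse.ok().build(repository.saveProduct(productMono));
    }

    @Override
    public Mono<ServerResponse> getAllProductsFromRepository(ServerRequest request) {
        // Return the Flux of all products as JSON
        return ServerResponse.ok()
                .contentType(MediaType.APPLICATION_JSON)
                .body(repository.getAllProducts(), Product.class);
    }
}
```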


In the getProductFromRepository(ServerRequest request) method of the ProductHandlerImpl class:

  • The product ID sent with the request is obtained.
  • An HTTP response is built as a ServerResponse for the NOT_FOUND HTTP status.
  • The repository is called to obtain the Product as a Mono.
  • A Mono is returned that represents either the Product or the NOT_FOUND HTTP status if the product is not found.

The saveProductToRepository(ServerRequest request) method converts the request body to a Mono. It then calls the saveProduct() method of the repository to save the product, and finally returns a success status code as an HTTP response.

In the getAllProductsFromRepository() method, the getAllProducts() method of the repository returns a Flux<Product>, which is then returned as JSON containing all the products.

Running the Application

The example web reactive application has two components. One is the Reactive Web Server. The second is our client.

The Reactive Web Server

Now it is time to wire up all the components together for a web reactive application.

We will use embedded Tomcat as the server for the application, but will also look at how to do the same with the lightweight Reactor Netty.

We will implement these in a Server class.
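A sketch of such a Server class follows; the exact adapter APIs varied between Spring 5.0 milestones and Tomcat versions, so treat this as illustrative rather than the post’s original listing:

```java
import org.apache.catalina.Context;
import org.apache.catalina.startup.Tomcat;
import org.springframework.http.server.reactive.HttpHandler;
import org.springframework.http.server.reactive.ServletHttpHandlerAdapter;
import org.springframework.web.reactive.function.server.RouterFunction;
import org.springframework.web.reactive.function.server.RouterFunctions;
import org.springframework.web.reactive.function.server.ServerResponse;

import static org.springframework.web.reactive.function.server.RequestPredicates.GET;
import static org.springframework.web.reactive.function.server.RequestPredicates.POST;
import static org.springframework.web.reactive.function.server.RouterFunctions.route;

public class Server {

    public static final String HOST = "localhost";
    public static final int PORT = 8080;

    public static void main(String[] args) throws Exception {
        new Server().startTomcatServer();
    }

    public RouterFunction<ServerResponse> routingFunction() {
        ProductRepository repository = new ProductRepositoryInMemoryImpl();
        ProductHandler handler = new ProductHandlerImpl(repository);
        // Route GET /{id}, GET /, and POST / to the handler functions
        return route(GET("/{id}"), handler::getProductFromRepository)
                .andRoute(GET("/"), handler::getAllProductsFromRepository)
                .andRoute(POST("/"), handler::saveProductToRepository);
    }

    public void startTomcatServer() throws Exception {
        // Adapt the RouterFunction to a generic HttpHandler
        HttpHandler httpHandler = RouterFunctions.toHttpHandler(routingFunction());
        // Initialize Tomcat with host, port, context path, and servlet mapping
        Tomcat tomcat = new Tomcat();
        tomcat.setHostname(HOST);
        tomcat.setPort(PORT);
        Context context = tomcat.addContext("", System.getProperty("java.io.tmpdir"));
        ServletHttpHandlerAdapter servlet = new ServletHttpHandlerAdapter(httpHandler);
        Tomcat.addServlet(context, "httpHandlerServlet", servlet);
        context.addServletMapping("/", "httpHandlerServlet");
        tomcat.start();
    }
}
```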


In this Server class:

  • A ProductHandler is created, initialized with a ProductRepository.
  • A RouterFunction is constructed and returned. In Spring Reactive Web, you can relate a RouterFunction to the @RequestMapping annotation. A RouterFunction is used for routing incoming requests to handler functions. In the Server class, incoming GET requests to /{id} and / are routed to the getProductFromRepository and getAllProductsFromRepository handler functions respectively. Incoming POST requests to / are routed to the saveProductToRepository handler function.
  • The startTomcatServer() method integrates the RouterFunction into Tomcat as a generic HttpHandler.
  • Tomcat is initialized with a host name, port number, context path, and a servlet mapping.
  • Finally, Tomcat is started by calling the start() method.

The output on executing the Server class is this.
Output of Tomcat
To use Netty instead of Tomcat, use this code:
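A sketch of a method that could be added to the Server class, assuming the reactor-netty 0.6.x API of the time:

```java
import org.springframework.http.server.reactive.HttpHandler;
import org.springframework.http.server.reactive.ReactorHttpHandlerAdapter;
import org.springframework.web.reactive.function.server.RouterFunctions;

import reactor.ipc.netty.http.server.HttpServer;

public void startReactorNettyServer() {
    // Adapt the RouterFunction to a generic HttpHandler, as for Tomcat
    HttpHandler httpHandler = RouterFunctions.toHttpHandler(routingFunction());
    ReactorHttpHandlerAdapter adapter = new ReactorHttpHandlerAdapter(httpHandler);
    // Create a Reactor Netty server on the same host and port
    HttpServer server = HttpServer.create(HOST, PORT);
    server.newHandler(adapter).block();
}
```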

The Client

Spring Framework 5 adds a new reactive WebClient in addition to the existing RestTemplate. The new WebClient deserves a post on its own.

To keep this post simple and limited to only accessing our reactive Web application, I will use ExchangeFunction – a simple alternative to WebClient. ExchangeFunction represents a function that exchanges a client request for a (delayed) client response.

The code of the client class, named ReactiveClient is this.
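A sketch of the client, assuming the Spring 5.0 WebFlux client API (the product data and base URI are illustrative):

```java
import java.net.URI;

import org.springframework.http.HttpMethod;
import org.springframework.http.client.reactive.ReactorClientHttpConnector;
import org.springframework.web.reactive.function.BodyInserters;
import org.springframework.web.reactive.function.client.ClientRequest;
import org.springframework.web.reactive.function.client.ClientResponse;
import org.springframework.web.reactive.function.client.ExchangeFunction;
import org.springframework.web.reactive.function.client.ExchangeFunctions;

import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

public class ReactiveClient {

    private static final URI BASE_URI = URI.create("http://localhost:8080/");

    // An ExchangeFunction backed by the Reactor Netty HTTP client
    private final ExchangeFunction exchange =
            ExchangeFunctions.create(new ReactorClientHttpConnector());

    public static void main(String[] args) {
        ReactiveClient client = new ReactiveClient();
        client.createProduct(new Product(3, "Spring Guru Sticker", 2.49));
        client.getAllProducts();
    }

    public void createProduct(Product product) {
        // Build a ClientRequest that POSTs a Product to the server
        ClientRequest request = ClientRequest.method(HttpMethod.POST, BASE_URI)
                .body(BodyInserters.fromObject(product))
                .build();
        // Exchange the given request for a (delayed) response
        Mono<ClientResponse> response = exchange.exchange(request);
        System.out.println(response.block().statusCode());
    }

    public void getAllProducts() {
        ClientRequest request = ClientRequest.method(HttpMethod.GET, BASE_URI).build();
        // Convert the response body to a Flux of products and print each one
        Flux<Product> products = exchange.exchange(request)
                .flatMapMany(response -> response.bodyToFlux(Product.class));
        products.toIterable().forEach(System.out::println);
    }
}
```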


In the ReactiveClient class, the ExchangeFunctions.create() method is called passing a ReactorClientHttpConnector, which is an abstraction over HTTP clients used to connect the client to the server. The create() method returns an ExchangeFunction.

In the createProduct() method of the ReactiveClient class, a ClientRequest is built that posts a Product object to a URL represented by the URI object. The exchange(request) method is then called to exchange the given request for a response Mono.

In the getAllProducts() method, an exchange is started to send a GET request to get all products.

The response body is converted into a Flux and printed to the console.

With Tomcat running, the output on running the ReactiveClient class is:
Output of the Reactive Web Client


In this post, I showed you a very simple example of the new web reactive features inside of Spring Framework 5.

While the reactive programming features inside of Spring Framework 5 are certainly fun to use, what I’m finding even more fun is the functional programming style of the new Spring Framework 5 APIs.

Consider the configuration of the web reactive server:
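As a sketch (the handler method names follow this post’s example, and static imports from RouterFunctions and RequestPredicates are assumed):

```java
RouterFunction<ServerResponse> route =
        route(GET("/{id}"), handler::getProductFromRepository)
                .andRoute(GET("/"), handler::getAllProductsFromRepository)
                .andRoute(POST("/"), handler::saveProductToRepository);
```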

This functional style is a significant change from what we’ve become accustomed to in Spring MVC.

Don’t worry, Spring MVC is still alive and well. And even when using the Reactive features in Spring Framework 5, you can still define ‘controllers’ in the traditional declarative sense.

And maybe traditional monolithic applications will continue to declare controllers using traditional approaches?

Where I expect the functional style to really shine is in the realm of microservices. This new functional style makes it crazy easy to define small, targeted services.

I’m looking forward to seeing how the Spring community adopts the functional API, and seeing how it evolves.


If you’re following the Java community, you may be hearing about Reactive Streams in Java. It seems like at all the major tech conferences, you’re seeing presentations on Reactive Programming.

In 2016 the buzz was all about Functional programming. In 2017 the buzz is about Reactive Programming.

So, is the attention span of the Java community really that short?

Have we Java developers forgotten about Functional programming and moved on to Reactive programming?

Not exactly. Actually, the Functional Programming paradigm complements the Reactive Programming paradigm very nicely.

You don’t need to use the Functional Programming paradigm to follow Reactive Programming. You could use the good old imperative programming paradigm Java developers have traditionally used. But you’d be creating yourself a lot of headaches if you did. (Just because you can do something, does not mean you should!)

Functional programming is important to Reactive Programming. But I’m not diving into Functional Programming in this post.

In this post, I want to look at the overall Reactive landscape in Java.

What is the Difference Between Reactive Programming and Reactive Streams?

With these new buzz words, it’s very easy to get confused about their meaning.

Reactive Programming is a programming paradigm. I wouldn’t call reactive programming new. It’s actually been around for a while.

Just like object oriented programming, functional programming, or procedural programming, reactive programming is just another programming paradigm.

Reactive Streams, on the other hand, is a specification. For Java programmers, Reactive Streams is an API. Reactive Streams gives us a common API for Reactive Programming in Java.

The Reactive Streams API is the product of a collaboration between engineers from Kaazing, Netflix, Pivotal, Red Hat, Twitter, Typesafe and many others.

Reactive Streams is much like JPA or JDBC. Both are API specifications, and for both you need to use implementations of the API specification.

For example, from the JDBC specification, you have the Java DataSource interface. The Oracle JDBC implementation will provide you an implementation of the DataSource interface. Just as Microsoft’s SQL Server JDBC implementation will also provide an implementation of the DataSource interface.

Now your higher level programs can accept the DataSource object and should be able to work with the data source, and not need to worry if it was provided by Oracle or provided by Microsoft.

Just like JPA or JDBC, Reactive Streams gives us an API interface we can code to, without needing to worry about the underlying implementation.

Reactive Programming

There are plenty of opinions around what Reactive programming is. There is plenty of hype around Reactive programming too!

The best place to start learning about the Reactive Programming paradigm is to read the Reactive Manifesto. The Reactive Manifesto is a prescription for building modern, cloud scale architectures.


Reactive Manifesto

The Reactive Manifesto describes four key attributes of reactive systems:


Responsive

The system responds in a timely manner if at all possible. Responsiveness is the cornerstone of usability and utility, but more than that, responsiveness means that problems may be detected quickly and dealt with effectively. Responsive systems focus on providing rapid and consistent response times, establishing reliable upper bounds so they deliver a consistent quality of service. This consistent behaviour in turn simplifies error handling, builds end user confidence, and encourages further interaction.


Resilient

The system stays responsive in the face of failure. This applies not only to highly-available, mission critical systems — any system that is not resilient will be unresponsive after a failure. Resilience is achieved by replication, containment, isolation and delegation. Failures are contained within each component, isolating components from each other and thereby ensuring that parts of the system can fail and recover without compromising the system as a whole. Recovery of each component is delegated to another (external) component and high-availability is ensured by replication where necessary. The client of a component is not burdened with handling its failures.


Elastic

The system stays responsive under varying workload. Reactive Systems can react to changes in the input rate by increasing or decreasing the resources allocated to service these inputs. This implies designs that have no contention points or central bottlenecks, resulting in the ability to shard or replicate components and distribute inputs among them. Reactive Systems support predictive, as well as Reactive, scaling algorithms by providing relevant live performance measures. They achieve elasticity in a cost-effective way on commodity hardware and software platforms.

Message Driven

Reactive Systems rely on asynchronous message-passing to establish a boundary between components that ensures loose coupling, isolation and location transparency. This boundary also provides the means to delegate failures as messages. Employing explicit message-passing enables load management, elasticity, and flow control by shaping and monitoring the message queues in the system and applying back-pressure when necessary. Location transparent messaging as a means of communication makes it possible for the management of failure to work with the same constructs and semantics across a cluster or within a single host. Non-blocking communication allows recipients to only consume resources while active, leading to less system overhead.

The first three attributes (Responsive, Resilient, Elastic) are more related to your architecture choices. It’s easy to see why technologies such as microservices, Docker and Kubernetes are important aspects of reactive systems. Running a LAMP stack on a single server clearly does not meet the objectives of the Reactive Manifesto.

Message Driven and Reactive Programming

As Java developers, it’s the last attribute, the Message Driven attribute, that interests us most.

Message-driven architectures are certainly nothing revolutionary. If you need a primer on message driven systems, I’d like to suggest reading Enterprise Integration Patterns. A truly iconic computer science book. The concepts in this book laid the foundations for Spring Integration and Apache Camel.

A few aspects of the Reactive Manifesto that do interest us Java developers are failures as messages, back-pressure, and non-blocking. These are subtle, but important, aspects of Reactive Programming in Java.

Failures as Messages

Often in Reactive programming, you will be processing a stream of messages. What is undesirable is to throw an exception and end the processing of the stream of messages.

The preferred approach is to gracefully handle the failure.

Maybe you needed to execute a web service and it was down. Maybe there is a backup service you can use? Or maybe retry in 10ms?

I’m not going to solve every edge case here. The key takeaway is you do not want to loudly fail with a runtime exception. Ideally, you want to note the failure, and have some type of re-try or recovery logic in place.

Often failures are handled with callbacks. Javascript developers are well accustomed to using callbacks.

But callbacks can get ugly to use. Javascript developers refer to this as callback hell.

In Reactive Streams, exceptions are first-class citizens. Exceptions are not rudely thrown. Error handling is built right into the Reactive Streams API specification.

Back Pressure

Have you ever heard of the phrase “Drinking from the Firehose”?


Back Pressure is a very important concept in Reactive programming. It gives downstream clients a way to say, “I’d like some more, please.”

Imagine if you’re making a query of a database, and the result set returns 10 million rows. Traditionally, the database will vomit out all 10 million rows as fast as the client will accept them.

When the client can’t accept any more, it blocks. And the database anxiously awaits. Blocked. The threads in the chain patiently wait to be unblocked.

In a Reactive world, we want our clients empowered to say give me the first 1,000. Then we can give them 1,000 and continue about our business – until the client comes back and asks for another set of records.

This is a sharp contrast to traditional systems where the client has no say. Throttling is done by blocking threads, not programmatically.
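The idea can be sketched in plain Java, without any reactive library. The RowSource class below is a toy stand-in for a data source that only hands out as many rows as the consumer has requested (the class and method names are made up for illustration):

```java
import java.util.ArrayList;
import java.util.List;

// A toy "publisher" of database rows that honors back-pressure: it never
// emits more rows than the consumer has asked for via request(n).
public class RowSource {

    private final List<Integer> rows = new ArrayList<>();
    private int cursor = 0;

    public RowSource(int totalRows) {
        for (int i = 0; i < totalRows; i++) {
            rows.add(i);
        }
    }

    // The consumer says "give me n more"; the source emits at most n rows
    // and then waits until it is asked again.
    public List<Integer> request(int n) {
        List<Integer> batch = new ArrayList<>();
        while (batch.size() < n && cursor < rows.size()) {
            batch.add(rows.get(cursor++));
        }
        return batch;
    }

    public static void main(String[] args) {
        // 10 million rows in the story above; a smaller number here
        RowSource source = new RowSource(10_000);
        // "Give me the first 1,000" - the remaining rows stay put
        List<Integer> first = source.request(1_000);
        System.out.println(first.size());   // 1000
        // Later, the consumer comes back for the next batch
        List<Integer> next = source.request(1_000);
        System.out.println(next.get(0));    // 1000
    }
}
```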


Non-Blocking

The final, and perhaps most important, aspect of reactive architectures for us Java developers is non-blocking.

Until Reactive came along, being non-blocking didn’t seem like that big of a deal.

As Java developers, we’ve been taught to take advantage of powerful modern hardware by using threads. More and more cores meant we could use more and more threads. Thus, if we needed to wait on the database or a web service to return, a different thread could utilize the CPU. This seemed to make sense to us. While our blocked thread waited on some type of I/O, a different thread could use the CPU.

So, blocking is no big deal. Right?

Well, not so much. Each thread in the system will consume resources. Each time a thread is blocked, resources are consumed. While the CPU is very efficient at servicing different threads, there is still a cost involved.

We Java developers can be an arrogant bunch. We’ve always looked down upon Javascript. Kind of a nasty little language, preferred by script kiddies. Just the fact that Javascript shared the word ‘java’ always made us Java programmers feel a bit dirty.

If you’re a Java developer, how many times have you felt annoyed when you have to point out that Java and Javascript are two different languages?

Then Node.js came along.

And Node.js put up crazy benchmarks in throughput.

And then the Java community took notice.

Yep, the script kiddies had grown up and were encroaching on our turf.

It wasn’t that Javascript running in Google’s V8 Javascript engine was some blazing fast godsend to programming. Java used to have its warts in terms of performance, but now it’s pretty efficient, even compared to modern native languages.


The secret sauce of Node.js’s performance was non-blocking.

Node.js uses an event loop with a limited number of threads. While blocking in the Java world is often viewed as no big deal, in the Node.js world it would be the kiss of death to performance.

These graphics can help you visualize the difference.

In Node.js there is a non-blocking event loop. Requests are processed in a non-blocking manner. Threads do not get stuck waiting on other processes.

[Diagram: Node.js single-threaded event loop processing]

Contrast the Node.js model with the typical multithreaded server used in Java. Concurrency is achieved through the use of multiple threads, which is generally accepted practice given the growth of multi-core processors.

[Diagram: multithreaded server with blocking]

I personally envision the difference between the two approaches as the difference between a super highway and lots of city streets with lights.

With a single thread event loop, your process is cruising quickly along on a super highway. In a Multi-threaded server, your process is stuck on city streets in stop and go traffic.

Both can move a lot of traffic. But, I’d rather be cruising at highway speeds!

[Diagram: your code on Reactive Streams]

What happens when you move to a non-blocking paradigm is that your code stays on the CPU longer. There is less switching of threads. You’re removing the overhead not only of managing many threads, but also of the context switching between threads.

You will see more headroom in system capacity for your program to utilize.

Non-blocking is not a performance holy grail. You’re not going to see things run a ton faster.

Yes, there is a cost to managing blocking. But all things considered, it is relatively efficient.

In fact, on a moderately utilized system, I’m not sure how measurable the difference would be.

But what you can expect to see is that, as your system load increases, you will have additional capacity to service more requests. You will achieve greater concurrency.

How much?

Good question. Use cases are very specific. As with all benchmarks, your mileage will vary.

The Reactive Streams API

Let’s take a look at the Reactive Streams API for Java. The Reactive Streams API consists of just 4 interfaces.


Publisher

A Publisher is a provider of a potentially unbounded number of sequenced elements, publishing them according to the demand received from its Subscribers.



Subscriber

A Subscriber will receive a call to Subscriber.onSubscribe(Subscription) once, after passing an instance of the Subscriber to Publisher.subscribe(Subscriber).



Subscription

A Subscription represents a one-to-one lifecycle of a Subscriber subscribing to a Publisher.



Processor

A Processor represents a processing stage, which is both a Subscriber and a Publisher and obeys the contracts of both.
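In code, the four interfaces are small. The declarations below mirror the org.reactivestreams 1.0 signatures; the ListPublisher is our own toy synchronous implementation, added only to show how the pieces fit together (it is not part of the specification):

```java
import java.util.ArrayList;
import java.util.List;

// Signatures mirror the Reactive Streams 1.0 specification
interface Publisher<T> {
    void subscribe(Subscriber<? super T> s);
}

interface Subscriber<T> {
    void onSubscribe(Subscription s);
    void onNext(T t);
    void onError(Throwable t);
    void onComplete();
}

interface Subscription {
    void request(long n);
    void cancel();
}

interface Processor<T, R> extends Subscriber<T>, Publisher<R> {
}

// A toy synchronous Publisher: emits a fixed list, honoring request(n)
class ListPublisher<T> implements Publisher<T> {

    private final List<T> items;

    ListPublisher(List<T> items) {
        this.items = items;
    }

    @Override
    public void subscribe(Subscriber<? super T> subscriber) {
        subscriber.onSubscribe(new Subscription() {
            private int cursor = 0;
            private boolean done = false;

            @Override
            public void request(long n) {
                // Emit at most n items - the subscriber controls the pace
                for (long i = 0; i < n && cursor < items.size() && !done; i++) {
                    subscriber.onNext(items.get(cursor++));
                }
                if (cursor == items.size() && !done) {
                    done = true;
                    subscriber.onComplete();
                }
            }

            @Override
            public void cancel() {
                done = true;
            }
        });
    }
}

public class ReactiveStreamsDemo {
    public static void main(String[] args) {
        List<String> received = new ArrayList<>();
        new ListPublisher<>(List.of("a", "b")).subscribe(new Subscriber<String>() {
            public void onSubscribe(Subscription s) { s.request(Long.MAX_VALUE); }
            public void onNext(String item) { received.add(item); }
            public void onError(Throwable t) { t.printStackTrace(); }
            public void onComplete() { received.add("done"); }
        });
        System.out.println(received);   // [a, b, done]
    }
}
```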


Reactive Streams Implementations for Java

The reactive landscape in Java is evolving and maturing. David Karnok has a great blog post, Advanced Reactive Java, in which he breaks down the various reactive projects into generations. I’ll note the generation of each below (which may change at any time with a new release).


RxJava

RxJava is the Java implementation out of the ReactiveX project. At the time of writing, the ReactiveX project had implementations for Java, Javascript, .NET (C#), Scala, Clojure, C++, Ruby, Python, PHP, Swift and several others.

ReactiveX provides a reactive twist on the GoF Observer pattern, which is a nice approach. ReactiveX calls their approach ‘Observer Pattern Done Right’.

ReactiveX is a combination of the best ideas from the Observer pattern, the Iterator pattern, and functional programming.

RxJava predates the Reactive Streams specification. While RxJava 2.0+ does implement the Reactive Streams API specification, you’ll notice a slight difference in terminology.

David Karnok, who is a key committer on RxJava, considers RxJava a 3rd Generation reactive library.


Reactor

Reactor is a Reactive Streams compliant implementation from Pivotal. As of Reactor 3.0, Java 8 or above is a requirement.

The reactive functionality found in Spring Framework 5 is built upon Reactor 3.0.

Reactor is a 4th generation reactive library. (David Karnok is also a committer on Project Reactor.)

Akka Streams

Akka Streams also fully implements the Reactive Streams specification. Akka uses Actors to deal with streaming data. While Akka Streams is compliant with the Reactive Streams API specification, the Akka Streams API is completely decoupled from the Reactive Streams interfaces.

Akka Streams is considered a 3rd generation reactive library.


Ratpack

Ratpack is a set of Java libraries for building modern high-performance HTTP applications. Ratpack uses Java 8, Netty, and reactive principles. Ratpack provides a basic implementation of the Reactive Streams API, but is not designed to be a fully-featured reactive toolkit.

Optionally, you can use RxJava or Reactor with Ratpack.


Vert.x

Vert.x is an Eclipse Foundation project, which is a polyglot event-driven application framework for the JVM. Reactive support in Vert.x is similar to Ratpack. Vert.x allows you to use RxJava or their native implementation of the Reactive Streams API.

Reactive Streams and JVM Releases

Reactive Streams for Java 1.8

With Java 1.8, you will find robust support for the Reactive Streams specification.

In Java 1.8, Reactive Streams is not part of the Java API. However, it is available as a separate jar.

Reactive Streams Maven Dependency
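The dependency coordinates are:

```xml
<dependency>
    <groupId>org.reactivestreams</groupId>
    <artifactId>reactive-streams</artifactId>
    <version>1.0.0</version>
</dependency>
```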

While you can include this dependency directly, whatever implementation of Reactive Streams you are using should include it automatically as a dependency.

Reactive Streams for Java 1.9

Things change a little bit when you move to Java 1.9. Reactive Streams becomes part of the official Java 9 API.

You’ll notice that the Reactive Streams interfaces move under the Flow class in Java 9. But other than that, the API is the same as Reactive Streams 1.0 in Java 1.8.
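For example, Java 9’s java.util.concurrent.Flow nests the Publisher, Subscriber, Subscription, and Processor interfaces, and ships SubmissionPublisher as a ready-made Publisher. A minimal sketch using only the standard library:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class FlowDemo {

    // Publishes the given items through a SubmissionPublisher and collects
    // what the subscriber receives, requesting one item at a time.
    public static List<String> publishAndCollect(List<String> items) {
        List<String> received = new CopyOnWriteArrayList<>();
        CountDownLatch done = new CountDownLatch(1);
        try (SubmissionPublisher<String> publisher = new SubmissionPublisher<>()) {
            publisher.subscribe(new Flow.Subscriber<String>() {
                private Flow.Subscription subscription;

                public void onSubscribe(Flow.Subscription s) {
                    subscription = s;
                    s.request(1);   // back-pressure: one item at a time
                }

                public void onNext(String item) {
                    received.add(item);
                    subscription.request(1);
                }

                public void onError(Throwable t) {
                    done.countDown();
                }

                public void onComplete() {
                    done.countDown();
                }
            });
            items.forEach(publisher::submit);
        }   // close() signals onComplete to the subscriber
        try {
            done.await();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return received;
    }

    public static void main(String[] args) {
        System.out.println(publishAndCollect(List.of("a", "b", "c")));   // [a, b, c]
    }
}
```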


At the time of writing, Java 9 is right around the corner. In Java 9, Reactive Streams is officially part of the Java API.

In researching this article, it’s clear the various reactive libraries have been evolving and maturing (i.e., David Karnok’s generations classification).

Before Reactive Streams, the various reactive libraries had no way to interoperate. They could not talk to each other. Early versions of RxJava were not compatible with early versions of Project Reactor.

But on the eve of the release of Java 9, the major reactive libraries have adopted the Reactive Streams specification. The different libraries are now interoperable.

Having interoperability is an important domino to fall. For example, MongoDB has implemented a Reactive Streams driver. Now, in our applications, we can use Reactor or RxJava to consume data from MongoDB.

We’re still early in the adoption of Reactive Streams. But over the next year or so, we can expect more and more open source projects to offer Reactive Streams compatibility.

I expect we are going to see a lot more of Reactive Streams in the near future.

It’s a fun time to be a Java developer!


Spring Framework 5.0 is the first major release of the Spring Framework since version 4 was released in December of 2013. Juergen Hoeller, Spring Framework project lead, announced the release of the first Spring Framework 5.0 milestone (5.0 M1) on 28 July 2016.

Now, a year later, we are looking forward to Release Candidate 3 (RC3) to be released on July 18th, 2017. This is expected to be the final release on the roadmap to the first GA (General Availability) release of Spring Framework 5.0.

I’m excited about the new features and enhancements in Spring Framework 5.0.

At a high level, features of Spring Framework 5.0 can be categorized into:

  • JDK baseline update
  • Core framework revision
  • Core container updates
  • Functional programming with Kotlin
  • Reactive Programming Model
  • Testing improvements
  • Library support
  • Discontinued support

JDK Baseline Update for Spring Framework 5.0

The entire Spring Framework 5.0 codebase runs on Java 8. Therefore, Java 8 is the minimum requirement to work with Spring Framework 5.0.

This is actually very significant for the framework. While as developers we’ve been able to enjoy all the new features found in modern Java releases, the framework itself was carrying a lot of baggage in supporting deprecated Java releases.

The framework now requires a minimum of Java 8.

Originally, Spring Framework 5.0 was expected to release on Java 9. However, with the Java 9 release running 18+ months behind, the Spring team decided to decouple the Spring Framework 5.0 release from Java 9.

However, when Java 9 is released (expected in September of 2017), Spring Framework 5.0 will be ready.

Core Framework Revision

The core Spring Framework 5.0 has been revised to utilize the new features introduced in Java 8. The key ones are:

  • Based on Java 8 reflection enhancements, method parameters in Spring Framework 5.0 can be efficiently accessed.
  • Core Spring interfaces now provide selective declarations built on Java 8 default methods.
  • @Nullable and @NotNull annotations explicitly mark nullable arguments and return values. This enables dealing with null values at compile time rather than throwing NullPointerExceptions at runtime.

On the logging front, Spring Framework 5.0 comes out of the box with its own Commons Logging bridge module, named spring-jcl, instead of the standard Commons Logging. Also, this new version will auto-detect Log4j 2.x, SLF4J, and JUL (java.util.logging) without any extra bridges.

Defensive programming also gets a boost, with the Resource abstraction providing an isFile indicator for the getFile method.

Core Container Updates

Spring Framework 5.0 now supports candidate component index as an alternative to classpath scanning. This support has been added to shortcut the candidate component identification step in the classpath scanner.

An application build task can define its own META-INF/spring.components file for the current project. At compilation time, the source model is introspected and JPA entities and Spring Components are flagged.
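The file format is simple: each line maps a fully qualified class name to the stereotype(s) it should be indexed under. A hypothetical example (the class names are made up for illustration):

```
com.example.Product=javax.persistence.Entity
com.example.ProductHandlerImpl=org.springframework.stereotype.Component
```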

Reading entities from the index rather than scanning the classpath does not make a significant difference for small projects with fewer than 200 classes. However, it has a significant impact on large projects.

Loading the component index is cheap. Therefore the startup time with the index remains constant as the number of classes increases, while for a component scan the startup time increases significantly.

What this means for us developers of large Spring projects is that the startup time of our applications will be reduced significantly. While 20 or 30 seconds may not seem like much, when you’re waiting for that dozens or hundreds of times a day, it adds up. Using the component index will help with your daily productivity.

You can find more information on the component index feature on Spring’s Jira.

Now @Nullable annotations can also be used as indicators for optional injection points. Using @Nullable imposes an obligation on consumers to be prepared for a value to be null. Prior to this release, the only way to accomplish this was through Android’s @Nullable, the Checker Framework’s @Nullable, or JSR 305’s @Nullable.
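As an illustration (plain Java so the example stands alone; in a Spring 5 application the constructor parameter would carry org.springframework.lang.@Nullable, and the class names here are invented):

```java
// Sketch of an optional collaborator. The consumer's obligation under
// @Nullable: be prepared for the value to be null.
class GreetingService {
    private final String suffix;

    GreetingService(/* @Nullable */ String suffix) {
        // Guard against the optional dependency being absent.
        this.suffix = (suffix != null) ? suffix : "!";
    }

    String greet(String name) {
        return "Hello, " + name + suffix;
    }
}

public class NullableDemo {
    public static void main(String[] args) {
        System.out.println(new GreetingService(null).greet("Spring"));  // Hello, Spring!
        System.out.println(new GreetingService("?").greet("Spring"));   // Hello, Spring?
    }
}
```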

Some other new and enhanced features from the release note are:

  • Implementation of functional programming style in GenericApplicationContext and AnnotationConfigApplicationContext
  • Consistent detection of transaction, caching, and async annotations on interface methods.
  • XML configuration namespaces streamlined towards unversioned schemas.
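The functional registration style from the first bullet can be sketched as follows (a hedged sketch requiring spring-context on the classpath; the ProductService class is a placeholder):

```java
import org.springframework.context.support.GenericApplicationContext;

public class FunctionalRegistrationDemo {

    static class ProductService { }  // placeholder bean class

    public static void main(String[] args) {
        GenericApplicationContext context = new GenericApplicationContext();
        // registerBean accepts a Supplier, so the bean is created
        // via the lambda rather than reflection
        context.registerBean(ProductService.class, ProductService::new);
        context.refresh();
        System.out.println(context.getBean(ProductService.class) != null);
        context.close();
    }
}
```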

Functional Programming with Kotlin

Spring Framework 5.0 introduces support for the JetBrains Kotlin language. Kotlin is an object-oriented language supporting a functional programming style.

Kotlin runs on top of the JVM, but is not limited to it. With Kotlin support, developers can dive into functional Spring programming, in particular for functional Web endpoints and bean registration.

In Spring Framework 5.0, you can write clean and idiomatic Kotlin code against the functional Web API.

For bean registration, as an alternative to XML or @Configuration and @Bean, you can now use Kotlin to register your Spring beans.


Reactive Programming Model

An exciting feature in this Spring release is the new reactive stack Web framework.

Being fully reactive and non-blocking, this new framework is suitable for event-loop style processing that can scale with a small number of threads.

Reactive Streams is an API specification developed by engineers from Netflix, Pivotal, Typesafe, Red Hat, Oracle, Twitter, and Spray.io. It provides a common API for reactive programming implementations, much like JPA and Hibernate, where JPA is the API and Hibernate is the implementation.

The Reactive Streams API is officially part of Java 9. In Java 8, you will need to include a dependency for the Reactive Streams API specification.
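The four Reactive Streams interfaces (Publisher, Subscriber, Subscription, Processor) were adopted into Java 9 as java.util.concurrent.Flow. A minimal sketch using only the JDK's Flow API (SubmissionPublisher is a JDK convenience Publisher; everything else here is illustrative):

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class FlowDemo {

    // Publishes the given items to a subscriber that requests them one at a
    // time (back-pressure), then returns everything that was received.
    public static List<String> publishAndCollect(List<String> items) throws InterruptedException {
        List<String> received = new CopyOnWriteArrayList<>();
        CountDownLatch done = new CountDownLatch(1);

        try (SubmissionPublisher<String> publisher = new SubmissionPublisher<>()) {
            publisher.subscribe(new Flow.Subscriber<String>() {
                private Flow.Subscription subscription;

                public void onSubscribe(Flow.Subscription s) {
                    subscription = s;
                    s.request(1);  // signal demand for one item
                }

                public void onNext(String item) {
                    received.add(item);
                    subscription.request(1);  // request the next item
                }

                public void onError(Throwable t) { done.countDown(); }

                public void onComplete() { done.countDown(); }
            });
            items.forEach(publisher::submit);
        }  // close() signals onComplete to the subscriber

        done.await();
        return received;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(publishAndCollect(List.of("a", "b", "c")));  // [a, b, c]
    }
}
```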

Streaming support in Spring Framework 5.0 is built upon Project Reactor, which implements the Reactive Streams API specification.

Spring Framework 5.0 has a new  spring-webflux module that supports reactive HTTP and WebSocket clients. Spring Framework 5.0 also provides support for reactive web applications running on servers which includes REST, HTML, and WebSocket style interactions.

I have a detailed post about Reactive Streams here.

There are two distinct programming models on the server-side in spring-webflux:

  • Annotation-based with @Controller and the other annotations of Spring MVC
  • Functional style routing and handling with Java 8 lambdas

With Spring WebFlux, you can now use WebClient, a reactive and non-blocking alternative to RestTemplate.

A WebClient implementation of a REST endpoint in Spring 5.0 is this.
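A hedged sketch of such a client (the base URL, the /product path, and the Product placeholder are assumptions for this post's API, and spring-webflux must be on the classpath):

```java
import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Flux;

public class ProductClient {

    static class Product { }  // stand-in for the application's domain object

    private final WebClient webClient = WebClient.create("http://localhost:8080");

    public Flux<Product> getAllProducts() {
        return webClient.get()
                .uri("/product")
                .retrieve()
                .bodyToFlux(Product.class);  // non-blocking; nothing is fetched
                                             // until a subscriber subscribes
    }
}
```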


While the new WebFlux module brings us some exciting new capabilities, traditional Spring MVC is still fully supported in Spring Framework 5.0.

Testing Improvements

Spring Framework 5.0 fully supports JUnit 5 Jupiter, so you can write tests and extensions for JUnit 5. In addition to providing a programming and extension model, the Jupiter sub-project provides a test engine to run Jupiter-based tests on Spring.

In addition, Spring Framework 5 provides support for parallel test execution in Spring TestContext Framework.
For the reactive programming model, spring-test now includes WebTestClient for integration testing support of Spring WebFlux. The new WebTestClient, similar to MockMvc, does not need a running server. Using a mock request and response, WebTestClient can bind directly to the WebFlux server infrastructure.
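A sketch of what that binding might look like (the controller name follows this post; method and class names are otherwise illustrative):

```java
import org.springframework.test.web.reactive.server.WebTestClient;

// WebTestClient bound straight to the controller under test,
// so no server needs to be running.
public class ProductControllerWebTest {

    private final WebTestClient client =
            WebTestClient.bindToController(new ProductController()).build();

    public void productsEndpointReturnsOk() {
        client.get().uri("/product")
              .exchange()
              .expectStatus().isOk();
    }
}
```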

For a complete list of enhancements in the existing TestContext framework, you can refer here.

Of course, Spring Framework 5.0 still supports our old friend JUnit 4 as well! At the time of writing, JUnit 5 is just about to go GA. Support for JUnit 4 is going to be with the Spring Framework for some time into the future.

Library Support

Spring Framework 5.0 now supports the following upgraded library versions:

Discontinued Support

At the API level, Spring Framework 5.0 has discontinued support for the following packages:

  • beans.factory.access
  • jdbc.support.nativejdbc
  • mock.staticmock of the spring-aspects module.
  • web.view.tiles2. Now Tiles 3 is the minimum requirement.
  • orm.hibernate3 and orm.hibernate4. Now, Hibernate 5 is the supported framework.

Spring Framework 5.0 has also discontinued support for the following libraries:

  • Portlet
  • Velocity
  • JasperReports
  • XMLBeans
  • JDO
  • Guava

If you are using any of the preceding packages, it is recommended to stay on Spring Framework 4.3.x.


The highlight of the Spring Framework 5.0 is definitely reactive programming, which is a significant paradigm shift. You can look at Spring Framework 5.0 as a cornerstone release for reactive programs. For the remainder of 2017 and beyond, you can expect to see child projects implement reactive features. You will see reactive programming features added to upcoming releases of Spring Data, Spring Security, Spring Integration and more.


The Spring Data team has already implemented reactive support for MongoDB and Redis.

It’s still too early to get reactive support with JDBC. The JDBC specification itself is blocking. So, it’s going to be some time before we see reactive programming with traditional JDBC databases.

While Reactive Programming is the shiny new toy inside of Spring Framework 5.0, it’s not going to be supported everywhere. The downstream technologies need to provide reactive support.

With the growing popularity of Reactive Programming, we can expect to see more and more technologies implement reactive solutions. The reactive landscape is rapidly evolving. Spring Framework 5 uses Reactor, which is a Reactive Streams compliant implementation. You can read more about the Reactive Streams specification here.


Spring Boot makes developing RESTful services ridiculously easy. And using Swagger makes documenting your RESTful services easy.

Building a back-end API layer introduces a whole new area of challenges that goes beyond implementing endpoints. You now have clients which will be using your API. Your clients will need to know how to interact with your API. In SOAP-based web services, you had a WSDL to work with. This gave API developers an XML-based contract, which defined the API. However, with RESTful web services, there is no WSDL. Thus your API documentation becomes more critical.

API documentation should be structured so that it’s informative, succinct, and easy to read. But best practices on how you document your API, its structure, and what to include or leave out is altogether a different subject that I won’t be covering here. For best practices on documentation, I suggest going through this presentation by Andy Wilkinson.

In this post I’ll cover how to use Swagger 2 to generate REST API documentation for a Spring Boot project.

Swagger 2 in Spring Boot

Swagger 2 is an open source project used to describe and document RESTful APIs. Swagger 2 is language-agnostic and is extensible to new technologies and protocols beyond HTTP. The current version defines a set of HTML, JavaScript, and CSS assets to dynamically generate documentation from a Swagger-compliant API. These files are bundled by the Swagger UI project to display the API in the browser. Besides rendering documentation, Swagger UI allows other API developers or consumers to interact with the API’s resources without having any of the implementation logic in place.

The Swagger 2 specification, known as the OpenAPI specification, has several implementations. Currently, Springfox, which has replaced Swagger-SpringMVC (Swagger 1.2 and older), is popular for Spring Boot applications. Springfox supports both Swagger 1.2 and 2.0.

We will be using Springfox in our project.

To bring it in, we need the following dependency declaration in our Maven POM.

In addition to Springfox, we also require Swagger UI. The code to include Swagger UI is this.
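The two dependencies together might look like this in the POM (a sketch; the 2.x version shown is an example, so check for the current Springfox release):

```xml
<dependency>
    <groupId>io.springfox</groupId>
    <artifactId>springfox-swagger2</artifactId>
    <version>2.9.2</version>
</dependency>
<dependency>
    <groupId>io.springfox</groupId>
    <artifactId>springfox-swagger-ui</artifactId>
    <version>2.9.2</version>
</dependency>
```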

The Spring Boot RESTful Application

Our application implements a set of REST endpoints to manage products. We have a Product JPA entity and a repository named ProductRepository that extends CrudRepository to perform CRUD operations on products against an in-memory H2 database.

The service layer is composed of a ProductService interface and a ProductServiceImpl implementation class.

The Maven POM of the application is this.


The controller of the application, ProductController, defines the REST API endpoints. The code of ProductController is this.

In this controller, the @RestController annotation introduced in Spring 4.0 marks ProductController as a REST API controller. Under the hood, @RestController works as a convenience annotation to annotate the class with both @Controller and @ResponseBody.

The @RequestMapping class-level annotation maps requests to /product onto the ProductController class. The method-level @RequestMapping annotations map web requests to the handler methods of the controller.
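A sketch of such a controller, reconstructed from the description above (only one handler method is shown; the full controller defines the rest of the CRUD endpoints):

```java
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/product")  // class-level mapping for all handlers
public class ProductController {

    private final ProductService productService;

    public ProductController(ProductService productService) {
        this.productService = productService;
    }

    // Method-level mapping: GET /product/{id}
    @GetMapping("/{id}")
    public Product getProduct(@PathVariable Integer id) {
        return productService.getProductById(id);
    }
}
```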

Configuring Swagger 2 in the Application

For our application, we will create a Docket bean in a Spring Boot configuration to configure Swagger 2 for the application. A Springfox Docket instance provides the primary API configuration with sensible defaults and convenience methods for configuration. Our Spring Boot configuration class, SwaggerConfig is this.

In this configuration class, the @EnableSwagger2 annotation enables Swagger support in the class. The select() method called on the Docket bean instance returns an ApiSelectorBuilder, which provides the apis() and paths() methods to filter the controllers and methods being documented using String predicates. In the code, the RequestHandlerSelectors.basePackage predicate matches the guru.springframework.controllers base package to filter the API. The regex parameter passed to paths() acts as an additional filter to generate documentation only for the path starting with /product.
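Reconstructed from that description, the configuration class might look like this (a sketch against the Springfox 2.x API; the base package and path regex are the ones named above):

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import springfox.documentation.builders.PathSelectors;
import springfox.documentation.builders.RequestHandlerSelectors;
import springfox.documentation.spi.DocumentationType;
import springfox.documentation.spring.web.plugins.Docket;
import springfox.documentation.swagger2.annotations.EnableSwagger2;

@Configuration
@EnableSwagger2
public class SwaggerConfig {

    @Bean
    public Docket productApi() {
        return new Docket(DocumentationType.SWAGGER_2)
                .select()  // returns an ApiSelectorBuilder
                .apis(RequestHandlerSelectors.basePackage("guru.springframework.controllers"))
                .paths(PathSelectors.regex("/product.*"))  // document /product paths only
                .build();
    }
}
```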

At this point, you should be able to test the configuration by starting the app and pointing your browser to http://localhost:8080/v2/api-docs
Swagger JSON Output
Obviously, the above JSON dump that Swagger 2 generates for our endpoints is not something we want.

What we want is some nice human readable structured documentation, and this is where Swagger UI takes over.

On pointing your browser to http://localhost:8080/swagger-ui.html, you will see the generated documentation rendered by Swagger UI, like this.

Swagger default documentation

As you can see, Swagger 2 used sensible defaults to generate documentation of our ProductController.

Then Swagger UI wrapped everything up to provide us an intuitive UI. This was all done automatically. We did not write any code or other documentation to support Swagger.

Customizing Swagger

So far, we’ve been looking at Swagger documentation, as it comes out of the box. But Swagger 2 has some great customization options.

Let’s start customizing Swagger by providing information about our API in the SwaggerConfig class like this.


In the SwaggerConfig class, we have added a metaData() method that returns an ApiInfo object initialized with information about our API. The Docket is then initialized with this information through its apiInfo() method.

The Swagger 2 generated documentation now looks similar to this.
Swagger Generated API Information

Swagger 2 Annotations for REST Endpoints

At this point, if you click the product-controller link, Swagger-UI will display the documentation of our operation endpoints, like this.
Swagger default endpoint documentation

We can use the @Api annotation on our ProductController class to describe our API.

The Swagger-UI generated documentation will reflect the description, and now looks like this.
Customized Swagger Generated API Description
For each of our operation endpoints, we can use the @ApiOperation annotation to describe the endpoint and its response type, like this:

Swagger 2 also allows overriding the default response messages of HTTP methods. You can use the @ApiResponse annotation to document other responses, in addition to the regular HTTP 200 OK, like this.
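A sketch combining both annotations on a handler (a fragment, not a full class; the /list mapping and message strings are illustrative, patterned on common Springfox examples rather than taken verbatim from this post's code):

```java
@ApiOperation(value = "View a list of available products", response = Iterable.class)
@ApiResponses(value = {
        @ApiResponse(code = 200, message = "Successfully retrieved list"),
        @ApiResponse(code = 401, message = "You are not authorized to view the resource"),
        @ApiResponse(code = 404, message = "The resource you were trying to reach is not found")
})
@GetMapping(value = "/list", produces = "application/json")
public ResponseEntity<Iterable<Product>> list() {
    // ...delegate to the service layer and wrap the result
}
```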

One undocumented thing that took up quite a bit of my time was related to the value of Response Content Type. Swagger 2 generated "*/*", while I was expecting "application/json" for Response Content Type. It was only after updating the @RequestMapping annotation with produces = "application/json" that the desired value got generated. The annotated ProductController is this.


The output of the operation endpoints on the browser is this.
Swagger generated endpoint and response code documentation
If you have noticed, the current documentation is missing one thing – documentation of the Product JPA entity. We will generate documentation for our model next.

Swagger 2 Annotations for Model

You can use the @ApiModelProperty annotation to describe the properties of the Product model. With @ApiModelProperty, you can also document a property as required.

The code of our Product class is this.
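A sketch of the annotated entity (the field names and notes text are assumptions for illustration; getters and setters are omitted):

```java
import io.swagger.annotations.ApiModelProperty;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;

@Entity
public class Product {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Integer id;

    @ApiModelProperty(notes = "Name of the product", required = true)
    private String name;

    @ApiModelProperty(notes = "Price of the product", required = true)
    private Double price;

    // getters and setters omitted for brevity
}
```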


The Swagger 2 generated documentation for Product is this.
Swagger model documentation


Besides REST API documentation and presentation with Swagger Core and Swagger UI, Swagger 2 has a whole lot of other uses that are beyond the scope of this post. One of my favorites is Swagger Editor, a tool to design new APIs or edit existing ones. The editor visually renders your Swagger definition and provides real-time error feedback. Another is Swagger Codegen, a code generation framework for building client SDKs, servers, and documentation from Swagger definitions.

Swagger 2 also supports Swagger definitions through JSON and YAML files. It is something you should try if you want to avoid implementation-specific code in your codebase by externalizing it in JSON or YAML files – something that I will cover in a future post.

The code for this post is available for download here.


I have met many developers who refer to tests as “unit tests” when they are actually integration tests. In service layers, I’ve seen tests referred to as unit tests, but written with dependencies on actual services, such as a database, web service, or message server. Those are part of integration testing. Even if you’re just using the Spring context to auto-wire dependencies, your test is an integration test. Rather than using the real services, you can use Mockito mocks and spies to keep your tests unit tests and avoid the overhead of running integration tests.

This is not to say Integration tests are bad. There is certainly a role for integration tests. They are a necessity.

But compared to unit tests, integration tests are sloooowwwww. Very slow. Your typical unit test will execute in a fraction of a second. Even complex unit tests on obsolete hardware will still complete sub-second.

Integration tests, on the other hand, take several seconds to execute. It takes time to start the Spring context. It takes time to start an H2 in-memory database. It takes time to establish a database connection.

While this may not seem like much, it adds up quickly on a large project. As you add more and more tests, the length of your build becomes longer and longer.

No developer wants to break the build. So we run all the tests to be sure. As we code, we’ll be running the full suite of tests multiple times a day. For your own productivity, the suite of tests needs to run quickly.

If you’re writing Integration tests where a unit test would suffice, you’re not only impacting your own personal productivity. You’re impacting the productivity of the whole team.

On a recent client engagement, the development team was very diligent about writing tests. Which is good. But, the team favored writing Integration tests. Frequently, integration tests were used where a Unit test could have been used. The build was getting slower and slower. Because of this, the team started refactoring their tests to use Mockito mocks and spies to avoid the need for integration tests.

They were still testing the same objectives. But Mockito was being used to fill in for the dependency driving the need for the integration test.

For example, Spring Boot makes it easy to test using a H2 in memory database using JPA and repositories supplied by Spring Data JPA.

But why not use Mockito to provide a mock for your Spring Data JPA repository?

Unit tests should be atomic, lightweight, and fast, exercising isolated units. Additionally, unit tests in Spring should not bring up a Spring context. I have written about the different types of tests in my earlier Testing Software post.

I have already written a series of posts on JUnit and a post on Testing Spring MVC With Spring Boot 1.4: Part 1. In the latter, I discussed unit testing controllers in a Spring MVC application.

I feel the majority of your tests should be unit tests, not integration tests. If you’re writing your code following the SOLID Principles of OOP, your code is already well structured to accept Mockito mocks.

In this post, I’ll explain how to use Mockito to test the service layer of a Spring Boot MVC application. If Mockito is new for you, I suggest reading my Mocking in Unit Tests With Mockito post first.

Mockito Mocks vs Spies

In unit testing, a test double is a replacement for a dependent component (collaborator) of the object under test. The test double does not have to behave exactly like the collaborator; its purpose is to mimic the collaborator well enough that the object under test thinks it is actually using the collaborator.

Based on the role played during testing, there can be different types of test doubles. In this post we’re going to look at mocks and spies.

There are some other types of test doubles, such as dummy objects, fake objects, and stubs. If you’re using Spock, one of my favorite tricks was to cast a map of closures in as a test double. (One of the many fun things you can do with Groovy!)

What makes a mock object different from the others is that it has behavior verification. Which means the mock object verifies that it (the mock object) is being used correctly by the object under test. If the verification succeeds, you can conclude the object under test will correctly use the real collaborator.

Spies, on the other hand, provide a way to spy on a real object. With a spy, you can call all the real underlying methods of the object while still tracking every interaction, just as you would with a mock.

Things get a bit different for Mockito mocks vs spies. A Mockito mock allows us to stub a method call, which means we can stub a method to return a specific object. For example, we can mock a Spring Data JPA repository in a service class to stub a getProduct() method of the repository to return a Product object. To run the test, we don’t need the database to be up and running – a pure unit test.

A Mockito spy is a partial mock. We can mock part of the object by stubbing a few methods, while real method invocations are used for the others. Thus, calling a method on a spy invokes the actual method unless we explicitly stub it, hence the term partial mock.

Let’s look at mocks vs spies in action, with a Spring Boot MVC application.

The Application Under Test

Our application contains a single Product JPA entity. CRUD operations are performed on the entity by ProductRepository using a CrudRepository supplied by Spring Data JPA. If you look at the code, you will see all we did was extend the Spring Data JPA CrudRepository to create our ProductRepository. Under the hood, Spring Data JPA provides implementations to manage entities for most common operations, such as saving an entity, updating it, deleting it, or finding it by id.

The service layer is developed following the SOLID design principles. We used the “Code to an Interface” technique, while leveraging the benefits of dependency injection. We have a ProductService interface and a ProductServiceImpl implementation. It is this ProductServiceImpl class that we will unit test.

Here is the code of ProductServiceImpl.


In the ProductServiceImpl class, you can see that ProductRepository is @Autowired in. The repository is used to perform CRUD operations, which makes it a mock candidate for testing ProductServiceImpl.

Testing with Mockito Mocks

Coming to the testing part, let’s take up the getProductById() method of ProductServiceImpl. To unit test the functionality of this method, we need to mock the external ProductRepository dependency. We can do it either by using Mockito’s mock() method or through the @Mock annotation. We will use the latter option since it is convenient when you have a lot of mocks to inject.

Once we declare a mock with the @Mock annotation, we also need to initialize it. Mock initialization happens before each test method. We have two options: the JUnit test runner MockitoJUnitRunner, or MockitoAnnotations.initMocks(). Both are equivalent solutions.

Finally, you need to provide the mocks to the object under test. You can do it by calling the setProductRepository() method of ProductServiceImpl or by using the @InjectMocks annotation.

The following code creates the Mockito mocks, and sets them on the object under test.
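A sketch of that setup, using the class names from this post (the test method bodies are omitted here):

```java
import org.junit.Before;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.MockitoAnnotations;

public class ProductServiceImplTest {

    @Mock
    private ProductRepository productRepository;  // the collaborator to fake

    @InjectMocks
    private ProductServiceImpl productService;    // mocks are injected here

    @Before
    public void setUp() {
        // Initializes the @Mock and @InjectMocks fields before each test
        MockitoAnnotations.initMocks(this);
    }
}
```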

Note: Since we are using the Spring Boot Test starter dependency, Mockito core automatically is pulled into our project. Therefore no extra dependency declaration is required in our Maven POM.

Once our mocks are ready, we can start stubbing methods on the mock. Stubbing means simulating the behavior of a mock object’s method. We can stub a method on the ProductRepository mock object by setting up an expectation on the method invocation.

For example, we can stub the findOne() method of the ProductRepository mock to return a Product when called. We then call the method whose functionality we want to test, followed by an assertion, like this.

This approach can be used to test the other methods of ProductServiceImpl, leaving aside deleteProduct(), which has a void return type.

To test the deleteProduct(), we will stub it to do nothing, then call deleteProduct(), and finally assert that the delete() method has indeed been called.
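A sketch of such a test method, building on the mocks set up earlier (the fragment assumes the productRepository and productService fields from the test class, and the id value is arbitrary):

```java
@Test
public void shouldDeleteProduct() {
    // Stub the void method to do nothing when called
    doNothing().when(productRepository).delete(1);

    // Exercise the method under test
    productService.deleteProduct(1);

    // Verify the interaction actually happened, exactly once
    verify(productRepository, times(1)).delete(1);
}
```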

Here is the complete test code for using Mockito mocks:


Note: An alternative to doNothing() for stubbing a void method is to use doReturn(null).

Testing with Mockito Spies

We have tested our ProductServiceImpl with mocks. So why do we need spies at all? Actually, we don’t need one in this use case.

Outside Mockito, partial mocks were present for a long time to allow mocking only part (a few methods) of an object. But partial mocks were considered code smells, primarily because if you need to partially mock a class while ignoring the rest of its behavior, then this class is likely violating the Single Responsibility Principle, since the code is probably doing more than one thing.

Until Mockito 1.8, Mockito spies were not producing real partial mocks. However, after many debates & discussions, and after finding a valid use case for partial mock, support for partial mock was added to Mockito 1.8.

You can partially mock objects using the thenCallRealMethod() method. What this means is that, without stubbing a method, you can call the underlying real method of a mock, like this.

Be careful that the real implementation is ‘safe’ when using thenCallRealMethod(). The actual implementation needs to be able to run in the context of your test.

Another approach for partial mocking is to use a spy. As I mentioned earlier, all method calls on a spy are real calls to the underlying method, unless stubbed. So, you can also use a Mockito spy to partially mock an object by stubbing a few methods.

Here is the code providing a Mockito spy for our ProductServiceImpl.
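A sketch of how that spy test class might start (Mockito 2 package names; the NullPointerException expectation is explained below, and the test name is illustrative):

```java
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.Spy;
import org.mockito.junit.MockitoJUnitRunner;

@RunWith(MockitoJUnitRunner.class)  // initializes the @Spy field
public class ProductServiceImplSpyTest {

    @Spy
    private ProductServiceImpl productService;

    @Test(expected = NullPointerException.class)
    public void shouldFailWithoutRepository() {
        // A real call on the spy: the repository was never set,
        // so the actual method blows up with a NullPointerException
        productService.getProductById(1);
    }
}
```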


In this test class, notice we used MockitoJUnitRunner instead of MockitoAnnotations.initMocks() for our annotations.

For the first test, we expected NullPointerException because the getProductById() call on the spy will invoke the actual getProductById() method of ProductServiceImpl, and our repository implementations are not created yet.

In the second test, we are not expecting any exception, as we are stubbing the save() method of ProductRepository.

The second and third methods are the relevant use cases of a spy in the context of our application– verifying method invocations.


In Spring Boot applications, by using Mockito, you replace the @Autowired components in the class you want to test with mock objects. In addition to unit testing the service layer, you will be unit testing controllers by injecting mock services. To unit test the DAO layer, you will mock the database APIs. The list is endless; it depends on the type of application you are working on and the object under test. If you’re following the Dependency Inversion Principle and using Dependency Injection, mocking becomes easy.

For partial mocking, use it to test 3rd party APIs and legacy code. You won’t require partial mocks for new, test-driven, and well-designed code that follows the Single Responsibility Principle. Note that when()-style stubbing on a spy invokes the real method during stubbing, so the doReturn() family of methods is generally recommended for spies. Also, given a choice between thenCallRealMethod() on a mock and a spy, prefer the former as it is more lightweight. Using thenCallRealMethod() on a mock does not create an actual object instance but a bare-bones shell instance of the class to track interactions; a spy, however, requires a real object instance. Use a spy only if you want to modify the behavior of a small chunk of an API and otherwise rely mostly on actual method calls.

The code for this post is available for download here.


When developing enterprise applications, Spring programmers typically prefer writing data-centric code against a lightweight in-memory database, such as H2, rather than running an enterprise database server such as Oracle or MySQL. Out of the box, Spring Boot is very easy to use with the H2 database.

In-memory databases are useful in the early development stages in local environments, but they have lots of restrictions. As the development progresses, you would most probably require an RDBMS to develop and test your application before deploying it to use a production database server. I have written a series of posts on integrating Spring Boot for Oracle, MySQL, and PostgreSQL.

If you are wondering about the complexities of switching between databases, it’s actually quite simple. When you’re using Spring Data JPA with an ORM technology like Hibernate, the persistence layer is fairly well decoupled, which allows you to easily run your code against multiple databases. The level of decoupling even allows you to easily switch between an RDBMS and a NoSQL database, such as MongoDB. My previous post on Integrating Spring Boot for MongoDB covers that.

In this post, I will discuss Spring Boot integration for MariaDB. MariaDB started off as an offshoot of MySQL due to concerns of Oracle’s acquisition of MySQL. Led by the original developers of MySQL, MariaDB has become one of the fastest growing open source databases.

MariaDB Configuration

For this post, I’m using MariaDB installed locally on my laptop. You’ll need to have a database defined for your use.
Use the following command to log into MariaDB:

Once you are logged in, use the following command to create a database.
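The commands might look like this (the database name is just an example):

```shell
# Log into MariaDB as root; you will be prompted for the password
mysql -u root -p
```

Then, at the MariaDB prompt:

```sql
CREATE DATABASE mariadb_demo;
```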

MariaDB Dependencies

First, we need to add the MariaDB database driver, mariadb-java-client, as a dependency to our project. The Maven POM file is this.
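The driver dependency alone might look like this (a fragment of the POM; with Spring Boot dependency management, the version can usually be omitted):

```xml
<dependency>
    <groupId>org.mariadb.jdbc</groupId>
    <artifactId>mariadb-java-client</artifactId>
</dependency>
```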


Spring Boot Properties

We need to override the H2 database properties being set by default in Spring Boot. The nice part is, Spring Boot sets default database properties only when you don’t set them yourself. So, when we configure MariaDB for use, Spring Boot won’t set up the H2 database anymore.

The following properties are required to configure MariaDB with Spring Boot. You can see these are pretty standard Java data source properties.

As we are using JPA, we need to configure Hibernate for MariaDB too. The spring.jpa.hibernate.ddl-auto property tells Hibernate to recreate the database on startup. This is definitely not the behavior we want if this were actually a production database. You can set this property to the following values: none, validate, update, create, and create-drop. For a production database, you probably want to use validate.
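A sketch of the relevant properties (the URL, credentials, and database name are placeholders):

```properties
# Standard Java data source properties
spring.datasource.url=jdbc:mariadb://localhost:3306/mariadb_demo
spring.datasource.username=root
spring.datasource.password=yourpassword
spring.datasource.driver-class-name=org.mariadb.jdbc.Driver

# Hibernate: recreate the schema on startup (use validate in production)
spring.jpa.hibernate.ddl-auto=create-drop
```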

JPA Entity

In our example application, we will perform CRUD operations on a user. For that, we will write a simple JPA entity, User, for our application. I have written a post on using Spring Data JPA in a Spring Boot web application, and so won’t go into JPA here.


JPA Repository

Spring Data JPA CRUD Repository is a feature of Spring Data JPA that I extensively use. Using it, you can just define an interface that extends CrudRepository to manage entities for most common operations, such as saving an entity, updating it, deleting it, or finding it by id. Spring Data JPA uses generics and reflection to generate the concrete implementation of the interface we define.

For our User domain class we can define a Spring Data JPA repository as follows.
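A sketch of that repository interface (the entity’s id type is assumed to be Integer):

```java
import org.springframework.data.repository.CrudRepository;

// Spring Data JPA generates the concrete implementation at runtime;
// no implementation class is needed.
public interface UserRepository extends CrudRepository<User, Integer> {
}
```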


That’s all we need to set up Spring Boot to use MariaDB. We will write some test code for this setup.


For the test, I have used JUnit. To know more about JUnit, you can refer to my series on JUnit Testing.
The result of the JUnit test is this.

JUnit Test Result for MariaDB


As you can see, it is very easy to configure Spring Boot for MariaDB. As usual, Spring Boot will auto configure sensible defaults for you. And as needed, you can override the default Spring Boot properties for your specific application.


This is part 6 of the tutorial series for building a web application using Spring Boot. In this post we look at adding a DAO Authentication provider for Spring Security.

We started off with the first part by creating our Spring project using the Spring Initializr. In part 2, we rendered a web page using Thymeleaf and Spring MVC. This was followed by part 3 where we looked at setting up Spring Data JPA for database persistence. Part 4 was all about consolidating everything to provide a working Spring Boot MVC Web Application capable of performing CRUD operations.

In the previous part 5 of this series, we configured a basic in-memory authentication provider. It’s a good starting point to learn Spring Security, but as I mentioned there, it’s not for enterprise applications. A production-quality implementation would likely use the DAO authentication provider.

In this part of the series, I will discuss Spring Security with the DAO authentication provider to secure our Spring Boot web application. We will implement both authentication and role-based authorization with credentials stored in the H2 database. For persistence, we will use the Spring Data JPA implementation of the repository pattern, which I covered in part 3. Although there are several JPA providers, Hibernate is by far the most popular.

As the Spring Data JPA dependency is included in our Maven POM, Hibernate gets pulled in and configured with sensible default properties via Spring Boot.

This post builds upon 5 previous posts. If you’re not familiar with all the content around Spring, I suggest you to go through this series from the start.

JPA Entities

Our application already has a Product JPA entity. We’ll add two more entities, User and Role. Following the SOLID design principles’ “program to an interface” principle, we will start by writing an interface followed by an abstract class for our entities.



The entity classes are as follows.