Consumer Driven Contracts are considered a design pattern for evolving services. Spring Cloud Contract can be used to implement consumer driven contracts for services developed using the Spring Framework.

In this post, I’ll take an in depth look at using Spring Cloud Contract to create Consumer Driven Contracts.

An Overview of Spring Cloud Contract

Let’s consider a typical scenario where a consumer sends a request to a producer.

If you were working on the API consumer, how can you write a test without creating a dependency on the API producer?

One tool you can use is WireMock. In WireMock, you define stubs in JSON and then test your consumer calls against the stubs.

By using a tool like WireMock, when developing the consumer side code, you break any dependency on the producer implementation.

Here is an example WireMock stub.

This WireMock stub definition says that given a request of method POST sent to the URL /gamemanager with a query parameter game and a request body, we would like a response of status 200 with JSON in the body.
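A stub along those lines might look like this (the endpoint and payload values here are illustrative, not the exact ones from the project):

```json
{
  "request": {
    "method": "POST",
    "urlPath": "/gamemanager",
    "queryParameters": {
      "game": { "equalTo": "football" }
    },
    "bodyPatterns": [
      { "equalToJson": "{\"name\":\"John\",\"score\":550}" }
    ]
  },
  "response": {
    "status": 200,
    "headers": { "Content-Type": "application/json" },
    "body": "{\"result\":\"Game On!\"}"
  }
}
```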

Spring Cloud Contract allows you to write a contract for a service using a Groovy or YAML DSL. This contract sets up the communication rules, such as the endpoint address, expected requests, and responses for both the service provider and consumer.

Spring Cloud Contract provides plugins for Maven and Gradle to generate stubs from contracts. These are WireMock stubs like the example above.

The consumer side of the application can then use the stubs for testing.

For the producer side, Spring Cloud Contract can generate unit tests which utilize Spring’s MockMVC.

It's important to maintain the distinction between consumers and producers. Spring Cloud Contract produces testing artifacts which allow the development of the consumer side code and the producer side code to occur completely independently.

This independence is very important. On small projects, it's fairly common for a developer or small team to have access to the source code for both consumer and producer.

On larger projects, or when dealing with third parties, often you will not have access to both sides of the API.

For example, you may be a Spring developer providing an API for the UI team to write against. By providing WireMock stubs, the UI team can run the stubs for their development. The client side technology does not need to be Spring or even Java. They could be using Angular or ReactJS.

The WireMock stubs can be run in a stand-alone server, allowing any RESTful client to interact with the defined stubs.

Let’s take a closer look at the process by setting up a Producer and Consumer.

The Initial Producer

Let’s say the Producer is in development.

Right now, the Producer does not have any implementation code, only the service contract in the form of a Groovy DSL.

The Maven POM

Before we start writing the contracts, we need to configure Maven.

We need to include the spring-cloud-starter-contract-verifier as a Maven dependency.

We also need to add spring-cloud-dependencies in the dependencyManagement section of our Maven POM.

Finally, Spring Cloud Contract provides plugins for both Maven and Gradle that generate stubs from the DSL contracts.

We need to include the plugin in our Maven POM.

Here is the complete pom.xml of the Producer.

pom.xml
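The key pieces are the verifier dependency, the Spring Cloud BOM, and the contract plugin (version numbers are illustrative – use the current release train):

```xml
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-contract-verifier</artifactId>
    <scope>test</scope>
</dependency>

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-dependencies</artifactId>
            <version>${spring-cloud.version}</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

<plugin>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-contract-maven-plugin</artifactId>
    <extensions>true</extensions>
</plugin>
```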

The Spring Cloud Contract Groovy DSL

Out of the box, Spring Cloud Contract supports two types of DSLs: Groovy and YAML.

In this post, I will use the Spring Cloud Contract Groovy DSL. A DSL, or domain-specific language, is meant to make the code written in Groovy human-readable.

Currently, our Producer, the Game Manager, is responsible for allowing a player with a score greater than 500 to play a game of football. Any player with a score less than 500 is not allowed to play the game.

Our first contract defines a successful POST request to the Game Manager.

Here is the game_contract_for_score_greater_than_500.groovy DSL.

game_contract_for_score_greater_than_500.groovy
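A contract for that case looks something like this – the endpoint and payload values are my own illustrations:

```groovy
import org.springframework.cloud.contract.spec.Contract

Contract.make {
    name "allow game for score greater than 500"
    description "A player with a score greater than 500 may play football"
    request {
        method 'POST'
        url '/game/football'
        body([
            name : 'John',
            score: 550
        ])
        headers {
            contentType(applicationJson())
        }
    }
    response {
        status 200
        body([
            result: 'Game On!'
        ])
        headers {
            contentType(applicationJson())
        }
    }
}
```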

The other contract defines a POST request to the Game Manager for a score less than 500. The game_contract_for_score_lesser_than_500.groovy DSL is this.

game_contract_for_score_lesser_than_500.groovy

As you can see, the DSLs are intuitive. The code imports the Contract class and calls the make method. In addition to the contract name and description, the contract has two parts: request and response.

For this example, we have hardcoded the request values. However, the Groovy DSL supports dynamic values through regular expressions. More information here.

The Consumer

The consumer will have a REST controller that receives requests for playing a game. The consumer needs to verify with the Game Manager before starting the game by making a request to it.

On the consumer side, fork or clone the producer and run this command.
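With a standard Maven setup, that command is simply:

```bash
# converts the contracts to WireMock stubs and installs the stubs jar into ~/.m2
mvn clean install
```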

On running the command, the Spring Cloud Contract Maven plugin does two things:

  1. Converts the Groovy DSL contracts into WireMock stubs.
  2. Installs the JAR containing the stubs into your Maven local repository.

NOTE: In a typical enterprise situation where the consumer is a different team, the JAR artifact would be published to an internal Maven repository.

This figure shows the generated stub in the Project window of IntelliJ.
Generated Stub

We are now ready to implement the consumer.

The Maven POM

One of the tools that the consumer needs is the stub runner, which is part of the Spring Cloud Contract project. The stub runner is responsible for starting up the producer stub on the consumer side. This enables the consumer to make requests to the stub offline and do all the necessary testing before going live in production.

The code to include the stub runner dependency in the Maven POM is this.
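That declaration is along these lines:

```xml
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-contract-stub-runner</artifactId>
    <scope>test</scope>
</dependency>
```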

We will also use Project Lombok, a Java library that auto-generates boilerplate code based on annotations you provide in your code. The Lombok dependency declaration in the Maven POM is this.
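```xml
<dependency>
    <groupId>org.projectlombok</groupId>
    <artifactId>lombok</artifactId>
    <scope>provided</scope>
</dependency>
```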

The complete Maven POM on the consumer side is this.

pom.xml

The Domain Object

We will model a player as a domain object on the consumer side.

Here is the code for the Player class.

Player.java
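A sketch of the class – the two fields follow from the contracts above:

```java
import lombok.AllArgsConstructor;
import lombok.Builder;
import lombok.Data;
import lombok.NoArgsConstructor;

@Data
@Builder
@NoArgsConstructor
@AllArgsConstructor
public class Player {

    private String name;
    private Integer score;
}
```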

I’ve used Lombok annotations on the Player class. As you can see, the class is much cleaner and focuses only on the properties it represents.

The @Data Lombok annotation will generate the getter, setter, toString(), hashCode(), and equals() methods for us during the build.

The @AllArgsConstructor and @NoArgsConstructor will add a constructor to initialize all properties and a default constructor for us.

The @Builder annotation will add builder APIs based on the Builder pattern for your class.

When you build the Player class, you will get an equivalent class with the Lombok generated code, like this.

The Tests

Next, let’s go with the TDD approach and write the tests first. We will begin with an abstract test class, AbstractTest.

AbstractTest.java

This code initializes a JacksonTester object with an ObjectMapper that we will next use to write JSON data to test our controller.
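A minimal sketch of that class:

```java
import com.fasterxml.jackson.databind.ObjectMapper;
import org.junit.Before;
import org.springframework.boot.test.json.JacksonTester;

public abstract class AbstractTest {

    protected JacksonTester<Player> json;

    @Before
    public void setUp() {
        // wires the JacksonTester field above with an ObjectMapper
        JacksonTester.initFields(this, new ObjectMapper());
    }
}
```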

Next is our controller test class. I have used Spring Boot Test and JUnit. I have a post on Spring Boot Test which you should go through if you are new to it. Also, if you are new to JUnit, I suggest you go through my JUnit series of posts.

The controller test class, GameEngineControllerTest.java is this.

GameEngineControllerTest.java

In this code:

  • Line 20: Specifies SpringRunner as the test runner.
  • Line 21 bootstraps the test with Spring Boot’s support. The annotation works by creating the ApplicationContext for your tests through SpringApplication. The webEnvironment = MOCK attribute configures the test to run with a mock servlet environment.
  • Line 22 auto-configures MockMvc to be injected for us. MockMvc in the test provides a mock environment instead of the full-blown MVC environment that we get on starting the server.
  • Line 23 enables auto-configuration of the JSON tester that we wrote earlier in AbstractTest.
  • Line 24 is important. Here we use @AutoConfigureStubRunner to instruct the Spring Cloud Contract stub runner to run the producer stub offline. If you recall, we installed the producer stub in Maven local. The ids attribute specifies the group ID and artifact ID of the stub and instructs the runner to start the stub on port 8090.
  • Line 25: @DirtiesContext indicates that the ApplicationContext associated with the test is dirty and should be closed. Thereafter, subsequent tests will be supplied a new context.
  • Lines 27 – 29 autowire in MockMvc and the GameEngineController under test.
  • Lines 33 – 44 use MockMvc to test the controller for a Player with a score greater than 500.
  • Lines 48 – 60 are the second test case, which uses MockMvc to test the controller for a Player with a score less than 500.
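To put those annotations in context, the skeleton of the class looks roughly like this – the stub coordinates are hypothetical placeholders for your producer’s group and artifact IDs, and line numbers will differ from the listing described above:

```java
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.json.AutoConfigureJsonTesters;
import org.springframework.boot.test.autoconfigure.web.servlet.AutoConfigureMockMvc;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.cloud.contract.stubrunner.spring.AutoConfigureStubRunner;
import org.springframework.test.annotation.DirtiesContext;
import org.springframework.test.context.junit4.SpringRunner;
import org.springframework.test.web.servlet.MockMvc;

@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.MOCK)
@AutoConfigureMockMvc
@AutoConfigureJsonTesters
@AutoConfigureStubRunner(
        workOffline = true,
        ids = "guru.springframework:game-manager:+:stubs:8090") // hypothetical coordinates
@DirtiesContext
public class GameEngineControllerTest extends AbstractTest {

    @Autowired
    private MockMvc mockMvc;

    @Autowired
    private GameEngineController gameEngineController;

    // the two test methods described above go here
}
```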

When you run the tests, as expected, they fail. This is the first step in TDD, where you write your tests first and see them fail.

The Controller Implementation

The controller for the Game Engine consumer is a REST controller with a single endpoint. We will implement the controller to make a REST request to the Producer stub in compliance with the Spring Cloud Groovy DSL contract.

Although in a real project, you will have a service layer making the REST request, for the sake of demonstration, I am doing it from the controller itself.

The controller code is this.

GameEngineController.java

The controller code uses a Spring-injected RestTemplate to make a POST request to the producer stub and returns the response string as a ResponseEntity.
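A sketch of the controller – the producer endpoint and port match the stub runner configuration; the names are illustrative:

```java
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.client.RestTemplate;

@RestController
public class GameEngineController {

    private final RestTemplate restTemplate;

    public GameEngineController(RestTemplate restTemplate) {
        this.restTemplate = restTemplate;
    }

    @PostMapping("/play/football")
    public ResponseEntity<String> playFootball(@RequestBody Player player) {
        // ask the Game Manager (the stub on port 8090 during tests) whether the player may play
        String result = restTemplate.postForObject(
                "http://localhost:8090/game/football", player, String.class);
        return ResponseEntity.ok(result);
    }
}
```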

If you run the controller tests now, the tests pass.
Test Output of Spring Cloud Contract

Contract Violation

Let’s consider a scenario where the consumer violates the producer contract. To simulate this, we will update the test cases to do two things:

  1. Send a Player initialized only with the score field.
  2. Introduce a typo in the request URL.

Here is the code of the updated test cases.

In the first test case, we commented out the statement that sets the player name. In the second test case, we intentionally introduced a typo in the request URL: /play/footballs instead of /play/football.

When we run them, the tests fail, as shown in this figure.

Test Output Contract Violation

As we can see, with Spring Cloud Contract we are enforcing a certain set of rules that we expect consumers to follow. A consumer violating any rule fails the tests.

When you are working in a cloud-native architecture with a CI/CD pipeline configured, this test failure will break your CI build. As a result, consumer code violating the contract would never get deployed to production.

Producer Tests

Coming back to the Producer side of the contract, Spring Cloud Contract will also generate unit tests.

The generated tests are built from a base class. This allows you a lot of flexibility in how your tests are set up. At a minimum, you will need to mock the controller under test.

Recall the following configuration of Maven:

Here, in the configuration we are defining the base class for the tests.
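The relevant plugin configuration, with the base class element (the package and class name here are whatever you choose for your project):

```xml
<plugin>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-contract-maven-plugin</artifactId>
    <extensions>true</extensions>
    <configuration>
        <!-- hypothetical package/class name -->
        <baseClassForTests>guru.springframework.BaseTestClass</baseClassForTests>
    </configuration>
</plugin>
```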

Producer Base Test

Here is our implementation of the base class. This is a very minimal implementation. A more complex example could involve mocking services with Mockito.

For the test base class, I also added a controller, which is just an empty class.
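A minimal sketch, assuming an empty GameController class:

```java
import io.restassured.module.mockmvc.RestAssuredMockMvc;
import org.junit.Before;

public abstract class BaseTestClass {

    @Before
    public void setup() {
        // stand up the controller so the generated MockMvc tests have something to call
        RestAssuredMockMvc.standaloneSetup(new GameController());
    }
}
```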

Spring Cloud Contract Generated Tests

Spring Cloud Contract will generate tests for our controller from the contracts we have defined.

In the example we have been following, these tests are generated.

You can see in the above example that the generated test class extends the base class we configured and implemented.

If you’re following TDD, you would now have failing tests, and you would go and complete the implementation of the Game controller.

If you execute tests (mvn clean install), you will see the following output, showing the failing tests.

Summary

In these examples, I’ve only shown a few highlights of using Spring Cloud Contract. The project is very robust and offers rich functionality.

If you are building APIs, or consuming APIs developed by others, Consumer Driven Contracts are a tool you can use to improve the quality of your software and to avoid unexpected breaking changes.


MySQL is the most popular open-source database management system. MySQL is a relational database that uses Structured Query Language (SQL) to manage its data.

In this post, I’ll show you how to install MySQL on Ubuntu.

Installing MySQL

There are two ways to install MySQL on Ubuntu. First is to use one of the versions included in the APT package repository by default. Use the following command to find the MySQL version in the APT package repository.
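One way to check is with apt-cache:

```bash
# shows the installed and candidate versions of the mysql-server package
apt-cache policy mysql-server
```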

To install MySQL included in the APT repository:

    1. Update the package index on your server.

    2. Install the package.
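Assuming the default repository, those two steps are:

```bash
sudo apt-get update
sudo apt-get install mysql-server-<version>   # e.g. the version you found above
```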

During installation, the installer will prompt you to create a root password. Type a secure one and ensure you remember it.

Note: If you are not sure about the version, you can omit the version and run sudo apt-get install mysql-server. This installs the latest version for your Linux distribution.

Installing a Newer MySQL Version

If you want to install a newer MySQL version, such as MySQL 5.7, you’ll need to manually add MySQL’s APT repository first. To do so, perform the following steps:

    1. Install the newer APT package repository from the MySQL APT repository page.

    2. Go through the prompts to select and apply the specific MySQL version to install.
    3. Update the package index on your server.

    4. Install the package.
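Roughly, the commands are these – the config package version changes over time, so treat the file name as a placeholder:

```bash
wget https://dev.mysql.com/get/mysql-apt-config_<version>_all.deb
sudo dpkg -i mysql-apt-config_<version>_all.deb
sudo apt-get update
sudo apt-get install mysql-server
```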

Once installation is complete, MySQL should run automatically. You can check the status of MySQL using the following command.
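```bash
sudo service mysql status
```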

In order to stop MySQL from the running state, run the following command:
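```bash
sudo service mysql stop
```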

To start MySQL, run the following command:
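```bash
sudo service mysql start
```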

Refer to the official documentation here for various post-installation setup and configurations of MySQL.


An exciting feature in Spring Framework 5 is the new Web Reactive framework that allows for reactive web applications. Reactive programming is about developing systems that are fully reactive and non-blocking. Such systems are suitable for event-loop style processing that can scale with a small number of threads.

Spring Framework 5 embraces Reactive Streams to enable developing systems based on the Reactive Manifesto published in 2014.

The Spring Web Reactive framework stands separately from Spring MVC. This is because Spring MVC is developed around the Java Servlet API, which uses blocking code inside of Java. While popular Java application servers such as Tomcat and Jetty have evolved to offer non-blocking operations, the Java Servlet API has not.

From a programming perspective, reactive programming involves a major shift from imperative style logic to a declarative composition of asynchronous logic.

In this post, I’ll explain how to develop a Web Reactive application with the Spring Framework 5.0.

Spring Web Reactive Types

Under the covers, Spring Web Reactive is using Reactor, which is a Reactive Streams implementation. The Spring Framework extends the Reactive Streams Publisher interface with the Flux and Mono reactive types.

The Flux data type represents zero to many objects (0..N).

The Mono data type represents zero to one (0..1).

If you’d like a deeper dive on reactive types, check out Understanding Reactive Types by Sebastien Deleuze.

The Web Reactive Application

The application that we will create is a web reactive application that performs operations on domain objects. To keep it simple, we will use an in-memory repository implementation to simulate CRUD operations in this post. In later posts, we will go reactive with Spring Data.

Spring 5 added the new spring-webflux module for reactive programming that we will use in our application. The application is composed of these components:

  • Domain object: Product in our application.
  • Repository: A repository interface with an implementation class to mimic CRUD operations in a Map.
  • Handler: A handler class to interact with the repository layer.
  • Server: A non-blocking web server with a single-threaded event loop. For this application, we will look at how to use both Netty and Tomcat to serve requests.

The Maven POM

For web reactive programming, you need the new spring-webflux and reactive-streams modules as dependencies in your Maven POM.

To host the application in a supported runtime, you need to add its dependency. The supported runtimes are:

  • Tomcat: org.apache.tomcat.embed:tomcat-embed-core
  • Jetty: org.eclipse.jetty:jetty-server and org.eclipse.jetty:jetty-servlet
  • Reactor Netty: io.projectreactor.ipc:reactor-netty
  • Undertow: io.undertow:undertow-core

The code to add dependencies for both embedded Tomcat and Netty is this.
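With versions managed elsewhere in the POM, those dependencies look something like this:

```xml
<dependency>
    <groupId>org.apache.tomcat.embed</groupId>
    <artifactId>tomcat-embed-core</artifactId>
</dependency>
<dependency>
    <groupId>io.projectreactor.ipc</groupId>
    <artifactId>reactor-netty</artifactId>
</dependency>
```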

The final dependency is for reactive serialization and deserialization to and from JSON with Jackson.

Note – This is a pre-release of Jackson, which includes non-blocking serialization and deserialization. (Version 2.9.0 was not released at the time of writing.)

As we are using the latest milestone release of Spring Boot, remember to add the Spring milestones repository:
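That repository declaration:

```xml
<repositories>
    <repository>
        <id>spring-milestones</id>
        <name>Spring Milestones</name>
        <url>https://repo.spring.io/milestone</url>
    </repository>
</repositories>
```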

Here is the complete Maven POM.

pom.xml

The Domain Object

Our application has a Product domain object on which operations will be performed. The code for the Product object is this.

Product.java

Product is a POJO with fields representing product information. Each field has its corresponding getter and setter methods. @JsonProperty is a Jackson annotation to map external JSON properties to the Product fields.
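A sketch of the class – the exact field set here is my own illustration:

```java
import com.fasterxml.jackson.annotation.JsonProperty;
import java.math.BigDecimal;

public class Product {

    @JsonProperty("id")
    private Integer id;

    @JsonProperty("name")
    private String name;

    @JsonProperty("price")
    private BigDecimal price;

    // getters and setters for each field omitted for brevity
}
```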

The Repository

The repository layer of the application is built on the ProductRepository interface with methods to save a product, retrieve a product by ID, and retrieve all products.

In this example, we are mimicking the functionality of a reactive data store with a simple ConcurrentHashMap implementation.

ProductRepository.java
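Based on that description, the interface looks something like this:

```java
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

public interface ProductRepository {

    Mono<Void> saveProduct(Mono<Product> productMono);

    Mono<Product> getProduct(Integer id);

    Flux<Product> getAllProducts();
}
```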

The important things in this interface are the new Mono and Flux reactive types of Project Reactor. Both of these reactive types, along with the other types of the Reactive API, are capable of serving a huge number of requests concurrently and of handling operations with latency. These types make operations, such as requesting data from a remote server, more efficient. Unlike traditional processing that blocks the current thread while waiting for a result, Reactive APIs are non-blocking as they deal with streams of data.

To understand Mono and Flux, let’s look at the two main interfaces of the Reactive API: Publisher, which is the source of events T in the stream and Subscriber, which is the destination for those events.

Both Mono and Flux implement Publisher. The difference lies in cardinality, which is critical in reactive streams.

  • A Flux observes 0 to N items and completes either successfully or with an error.
  • A Mono observes 0 or 1 item, with Mono<Void> hinting at most 0 items.

Note: Reactive APIs were initially designed to deal with N elements, or streams of data. So Reactor initially came only with Flux. But, while working on Spring Framework 5, the team found a need to distinguish between streams of 1 or N elements, so the Mono reactive type was introduced.

Here is the repository implementation class.

ProductRepositoryInMemoryImpl.java

This ProductRepositoryInMemoryImpl class uses a Map implementation to store Product objects.

In the overridden getProduct() method, the call to Mono.justOrEmpty() creates a new Mono that emits the specified item – the Product object in this case – provided the Product object is not null. For a null value, the Mono.justOrEmpty() method completes by emitting onComplete.

In the overridden getAllProducts() method, the call to Flux.fromIterable() creates a new Flux that emits the items (Product objects) present in the Iterable passed as a parameter.

In the overridden saveProduct() method, the call to doOnNext() accepts a callback that stores the provided Product into the Map. What we have here is an example of classic non-blocking programming. Execution control does not block and wait for the product storing operation.
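Putting those three methods together, a minimal sketch of the implementation class:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

public class ProductRepositoryInMemoryImpl implements ProductRepository {

    private final Map<Integer, Product> products = new ConcurrentHashMap<>();

    @Override
    public Mono<Void> saveProduct(Mono<Product> productMono) {
        // non-blocking: register a callback that stores the product when it arrives
        return productMono.doOnNext(product -> products.put(product.getId(), product)).then();
    }

    @Override
    public Mono<Product> getProduct(Integer id) {
        // emits the product if present, otherwise just completes
        return Mono.justOrEmpty(products.get(id));
    }

    @Override
    public Flux<Product> getAllProducts() {
        return Flux.fromIterable(products.values());
    }
}
```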

The Product Handler

The Product handler is similar to a typical service layer in Spring MVC. It interacts with the repository layer. Following the SOLID principles, we would want client code to interact with this layer through an interface. So, we start with a ProductHandler interface.

The code of the ProductHandler interface is this.

ProductHandler.java
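Based on the handler methods discussed below, the interface is along these lines:

```java
import org.springframework.web.reactive.function.server.ServerRequest;
import org.springframework.web.reactive.function.server.ServerResponse;
import reactor.core.publisher.Mono;

public interface ProductHandler {

    Mono<ServerResponse> getProductFromRepository(ServerRequest request);

    Mono<ServerResponse> saveProductToRepository(ServerRequest request);

    Mono<ServerResponse> getAllProductsFromRepository();
}
```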

The implementation class, ProductHandlerImpl is this.

ProductHandlerImpl.java

In the getProductFromRepository(ServerRequest request) method of the ProductHandlerImpl class:

  • Line 22 obtains the product ID sent as request parameter
  • Line 23 builds a HTTP response as ServerResponse for the NOT_FOUND HTTP status.
  • Line 24 calls the repository to obtain the Product as a Mono.
  • Line 25 – Line 27: Returns a Mono that can represent either the Product or the NOT_FOUND HTTP status if the product is not found.
  • Line 31 in the saveProductToRepository(ServerRequest request) method converts the request body to a Mono. Then Line 33 calls the saveProduct() method of the repository to save the product, and finally return a success status code as an HTTP response.
  • In the getAllProductsFromRepository() method, Line 37 calls the getAllProducts() method of the repository that returns a Flux< ServerResponse>. Then Line 38 returns back the Flux as a JSON that contains all the products.

Running the Application

The example web reactive application has two components. One is the Reactive Web Server. The second is our client.

The Reactive Web Server

Now it is time to wire up all the components together for a web reactive application.

We will use embedded Tomcat as the server for the application, but we will also look at how to do the same with the lightweight Reactor Netty.

We will implement these in a Server class.

Server.java

In this Server class:

  • Lines 37 – 38 create a ProductHandler initialized with the ProductRepository.
  • Lines 39 – 43 construct and return a RouterFunction. In Spring Reactive Web, you can relate a RouterFunction to the @RequestMapping annotation. A RouterFunction is used for routing incoming requests to handler functions. In the Server class, incoming GET requests to /{id} and / are routed to the getProductFromRepository and getAllProductsFromRepository handler functions respectively. Incoming POST requests to / are routed to the saveProductToRepository handler function.
  • Lines 53 – 54, in the startTomcatServer() method, integrate the RouterFunction into Tomcat as a generic HttpHandler.
  • Lines 55 – 61 initialize Tomcat with a host name, port number, context path, and a servlet mapping.
  • Line 62 finally starts Tomcat by calling the start() method.

The output on executing the Server class is this.
Output of Tomcat
To use Netty instead of Tomcat, use this code:
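That looks roughly like this, using the Reactor Netty API as of these milestone releases (host, port, and the routerFunction() method are assumptions from the Server class above):

```java
import org.springframework.http.server.reactive.HttpHandler;
import org.springframework.http.server.reactive.ReactorHttpHandlerAdapter;
import org.springframework.web.reactive.function.server.RouterFunctions;
import reactor.ipc.netty.http.server.HttpServer;

// adapt the RouterFunction to a generic HttpHandler, then hand it to Reactor Netty
HttpHandler httpHandler = RouterFunctions.toHttpHandler(routerFunction());
HttpServer.create("localhost", 8080)
        .newHandler(new ReactorHttpHandlerAdapter(httpHandler))
        .block();
```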

The Client

Spring Framework 5 adds a new reactive WebClient in addition to the existing RestTemplate. The new WebClient deserves a post on its own.

To keep this post simple and limited to only accessing our reactive Web application, I will use ExchangeFunction – a simple alternative to WebClient. ExchangeFunction represents a function that exchanges a client request for a (delayed) client response.

The code of the client class, named ReactiveClient is this.

ReactiveWebClient.java

In the ReactiveClient class, Line 21 calls the ExchangeFunctions.create() method passing a ReactorClientHttpConnector, which is an abstraction over HTTP clients to connect the client to the server. The create() method returns an ExchangeFunction.

In the createProduct() method of the ReactiveClient class, Line 30 – Line 31 builds a ClientRequest that posts a Product object to a URL represented by the URI object. Then Line 32 calls the exchange(request) method to exchange the given request for a response Mono.

In the getAllProducts() method, Line 37 starts an exchange to send a GET request to get all products.

The response body is converted into a Flux and printed to the console.

With Tomcat running, the output on running the ReactiveClient class is:
Output of Reactive Web Client

Conclusion

In this post, I showed you a very simple example of the new web reactive features inside of Spring Framework 5.

While the reactive programming features inside of Spring Framework 5 are certainly fun to use, what I’m finding even more fun is the functional programming style of the new Spring Framework 5 APIs.

Consider the configuration of the web reactive server:
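It looks something like this – routing composed from functions instead of declared with annotations (handler method names as defined earlier):

```java
import static org.springframework.web.reactive.function.server.RequestPredicates.GET;
import static org.springframework.web.reactive.function.server.RequestPredicates.POST;

import org.springframework.web.reactive.function.server.RouterFunction;
import org.springframework.web.reactive.function.server.RouterFunctions;
import org.springframework.web.reactive.function.server.ServerResponse;

RouterFunction<ServerResponse> routes = RouterFunctions
        .route(GET("/{id}"), handler::getProductFromRepository)
        .andRoute(GET("/"), request -> handler.getAllProductsFromRepository())
        .andRoute(POST("/"), handler::saveProductToRepository);
```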

This functional style is a significant change from what we’ve become accustomed to in Spring MVC.

Don’t worry, Spring MVC is still alive and well. And even when using the Reactive features in Spring Framework 5, you can still define ‘controllers’ in the traditional declarative sense.

And maybe traditional monolithic applications will continue to declare controllers using traditional approaches?

Where I expect the functional style to really shine is in the realm of microservices. This new functional style makes it crazy easy to define small, targeted services.

I’m looking forward to seeing how the Spring community adopts the functional API, and seeing how it evolves.


If you’re following the Java community, you may be hearing about Reactive Streams in Java. It seems like in all the major tech conferences, you’re seeing presentations on Reactive Programming. In 2016 the buzz was all about Functional programming; in 2017 the buzz is about Reactive Programming.

So, is the attention span of the Java community that short lived?

Have we Java developers forgotten about Functional programming and moved on to Reactive programming?

Not exactly. Actually, the Functional programming paradigm complements the Reactive programming paradigm very nicely.

You don’t need to use the Functional programming paradigm to follow a Reactive programming approach. You could use the good old imperative programming paradigm Java developers have traditionally used. But you’d be creating yourself a lot of headaches if you did. (Just because you can do something, does not mean you should do that something!)

Functional programming is important to Reactive Programming. But I’m not diving into Functional Programming in this post.

In this post, I want to look at the overall Reactive landscape in Java.

What is the Difference Between Reactive Programming and Reactive Streams?

With these new buzz words, it’s very easy to get confused about their meaning.

Reactive Programming is a programming paradigm. I wouldn’t call reactive programming new. It’s actually been around for a while.

Just like object oriented programming, functional programming, or procedural programming, reactive programming is just another programming paradigm.

Reactive Streams, on the other hand, is a specification. For Java programmers, Reactive Streams is an API. Reactive Streams gives us a common API for Reactive Programming in Java.

The Reactive Streams API is the product of a collaboration between engineers from Kaazing, Netflix, Pivotal, Red Hat, Twitter, Typesafe and many others.

Reactive Streams is much like JPA or JDBC. Both are API specifications. To use either, you need an implementation of the API specification.

For example, from the JDBC specification, you have the Java DataSource interface. The Oracle JDBC implementation will provide you an implementation of the DataSource interface. Just as Microsoft’s SQL Server JDBC implementation will also provide an implementation of the DataSource interface.

Now your higher level programs can accept the DataSource object and should be able to work with the data source, without needing to worry whether it was provided by Oracle or by Microsoft.

Just like JPA or JDBC, Reactive Streams gives us an API interface we can code to, without needing to worry about the underlying implementation.

Reactive Programming

There are plenty of opinions around what Reactive programming is. There is plenty of hype around Reactive programming too!

The best place to start learning about the Reactive Programming paradigm is the Reactive Manifesto. The Reactive Manifesto is a prescription for building modern, cloud scale architectures.

Reactive Manifesto

The Reactive Manifesto describes four key attributes of reactive systems:

Responsive

The system responds in a timely manner if at all possible. Responsiveness is the cornerstone of usability and utility, but more than that, responsiveness means that problems may be detected quickly and dealt with effectively. Responsive systems focus on providing rapid and consistent response times, establishing reliable upper bounds so they deliver a consistent quality of service. This consistent behaviour in turn simplifies error handling, builds end user confidence, and encourages further interaction.

Resilient

The system stays responsive in the face of failure. This applies not only to highly-available, mission critical systems — any system that is not resilient will be unresponsive after a failure. Resilience is achieved by replication, containment, isolation and delegation. Failures are contained within each component, isolating components from each other and thereby ensuring that parts of the system can fail and recover without compromising the system as a whole. Recovery of each component is delegated to another (external) component and high-availability is ensured by replication where necessary. The client of a component is not burdened with handling its failures.

Elastic

The system stays responsive under varying workload. Reactive Systems can react to changes in the input rate by increasing or decreasing the resources allocated to service these inputs. This implies designs that have no contention points or central bottlenecks, resulting in the ability to shard or replicate components and distribute inputs among them. Reactive Systems support predictive, as well as Reactive, scaling algorithms by providing relevant live performance measures. They achieve elasticity in a cost-effective way on commodity hardware and software platforms.

Message Driven

Reactive Systems rely on asynchronous message-passing to establish a boundary between components that ensures loose coupling, isolation and location transparency. This boundary also provides the means to delegate failures as messages. Employing explicit message-passing enables load management, elasticity, and flow control by shaping and monitoring the message queues in the system and applying back-pressure when necessary. Location transparent messaging as a means of communication makes it possible for the management of failure to work with the same constructs and semantics across a cluster or within a single host. Non-blocking communication allows recipients to only consume resources while active, leading to less system overhead.

The first three attributes (Responsive, Resilient, Elastic) are more related to your architecture choices. It’s easy to see why technologies such as microservices, Docker and Kubernetes are important aspects of reactive systems. Running a LAMP stack on a single server clearly does not meet the objectives of the Reactive Manifesto.

Message Driven and Reactive Programming

As Java developers, it’s the last attribute, the Message Driven attribute, that interests us most.

Message-driven architectures are certainly nothing revolutionary. If you need a primer on message driven systems, I’d like to suggest reading Enterprise Integration Patterns. A truly iconic computer science book. The concepts in this book laid the foundations for Spring Integration and Apache Camel.

A few aspects of the Reactive Manifesto that do interest us Java developers are failures as messages, back-pressure, and non-blocking. These are subtle, but important, aspects of Reactive Programming in Java.

Failures as Messages

Often in Reactive programming, you will be processing a stream of messages. What is undesirable is to throw an exception and end the processing of the stream of messages.

The preferred approach is to gracefully handle the failure.

Maybe you needed to execute a web service and it was down. Maybe there is a backup service you can use? Or maybe retry in 10ms?

I’m not going to solve every edge case here. The key takeaway is you do not want to loudly fail with a runtime exception. Ideally, you want to note the failure, and have some type of re-try or recovery logic in place.

Often failures are handled with callbacks. Javascript developers are well accustomed to using callbacks.

But callbacks can get ugly to use. Javascript developers refer to this as callback hell.

In Reactive Streams, exceptions are first class citizens. Exceptions are not rudely thrown. Error handling is built right into the Reactive Streams API specification.

Back Pressure

Have you ever heard of the phrase “Drinking from the Firehose”?

drinking from the firehose - importance of back pressure in reactive programming

Back Pressure is a very important concept in Reactive programming. It gives downstream clients a way to say, “I’d like some more, please.”

Imagine you’re making a query of a database, and the result set returns 10 million rows. Traditionally, the database will vomit out all 10 million rows as fast as the client will accept them.

When the client can’t accept any more, it blocks. And the database anxiously awaits. Blocked. The threads in the chain patiently wait to be unblocked.

In a Reactive world, we want our clients empowered to say give me the first 1,000. Then we can give them 1,000 and continue about our business – until the client comes back and asks for another set of records.

This is a sharp contrast to traditional systems where the client has no say. Throttling is done by blocking threads, not programmatically.

Non-Blocking

The final, and perhaps most important, aspect of reactive architectures important to us Java developers is non-blocking.

Until Reactive came along, being non-blocking didn’t seem like that big of a deal.

As Java developers, we’ve been taught to take advantage of the powerful modern hardware by using threads. More and more cores, meant we could use more and more threads. Thus, if we needed to wait on the database or a web service to return, a different thread could utilize the CPU. This seemed to make sense to us. While our blocked thread waited on some type of I/O, a different thread could use the CPU.

So, blocking is no big deal. Right?

Well, not so much. Each thread in the system will consume resources. Each time a thread is blocked, resources are consumed. While the CPU is very efficient at servicing different threads, there is still a cost involved.

We Java developers can be an arrogant bunch.

We’ve always looked down upon Javascript. Kind of a nasty little language, preferred by script kiddies. Just the fact Javascript shared the word ‘java’ always made us Java programmers feel a bit dirty.

If you’re a Java developer, how many times have you felt annoyed when you have to point out that Java and Javascript are two different languages?

Then Node.js came along.

And Node.js put up crazy benchmarks in throughput.

And then the Java community took notice.

Yep, the script kiddies had grown up and were encroaching on our turf.

It wasn’t that Javascript running in Google’s V8 Javascript engine was some blazing fast godsend to programming. Java used to have its warts in terms of performance, but now it’s pretty efficient, even compared to modern native languages.

The secret sauce of Node.js’s performance was non-blocking.

Node.js uses an event loop with a limited number of threads. While blocking in the Java world is often viewed as no big deal, in the Node.js world it would be the kiss of death to performance.

These graphics can help you visualize the difference.

In Node.JS there is a non-blocking event loop. Requests are processed in a non-blocking manner. Threads do not get stuck waiting on other processes.

node.js single thread event loop processing

Contrast the Node.JS model to the typical multithreaded server used in Java. Concurrency is achieved through the use of multiple threads. Which is generally accepted due to the growth of multi-core processors.

multi threaded server with blocking

I personally envision the difference between the two approaches as the difference between a super highway and lots of city streets with lights.

With a single thread event loop, your process is cruising quickly along on a super highway. In a Multi-threaded server, your process is stuck on city streets in stop and go traffic.

Both can move a lot of traffic. But, I’d rather be cruising at highway speeds!

What happens when you move to a non-blocking paradigm is that your code stays on the CPU longer. There is less switching of threads. You’re removing the overhead not only of managing many threads, but also of the context switching between threads.

You will see more head room in system capacity for your program to utilize.

Non-blocking is not a performance holy grail. You’re not going to see things run a ton faster.

Yes, there is a cost to managing blocking. But all things considered, it is relatively efficient.

In fact, on a moderately utilized system, I’m not sure how measurable the difference would be.

But what you can expect to see is that, as your system load increases, you will have additional capacity to service more requests. You will achieve greater concurrency.

How much?

Good question. Use cases are very specific. As with all benchmarks, your mileage will vary.

Learn more about my Spring Framework 5 course here!

The Reactive Streams API

Let’s take a look at the Reactive Streams API for Java. The Reactive Streams API consists of just 4 interfaces.

Publisher

A publisher is a provider of a potentially unbounded number of sequenced elements, publishing them according to the demand received from its Subscribers.

Publisher

Subscriber

A Subscriber will receive a call to Subscriber.onSubscribe(Subscription) once, after passing an instance of the Subscriber to Publisher.subscribe(Subscriber).

Subscriber

Subscription

A Subscription represents a one-to-one lifecycle of a Subscriber subscribing to a Publisher.

Subscription

Processor

A Processor represents a processing stage—which is both a Subscriber and a Publisher and obeys the contracts of both.

Processor
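For reference, here is the entire API – the four interfaces of the org.reactivestreams package (each lives in its own source file):

```java
public interface Publisher<T> {
    void subscribe(Subscriber<? super T> s);
}

public interface Subscriber<T> {
    void onSubscribe(Subscription s);
    void onNext(T t);
    void onError(Throwable t);
    void onComplete();
}

public interface Subscription {
    void request(long n);   // back-pressure: the subscriber asks for n more items
    void cancel();
}

public interface Processor<T, R> extends Subscriber<T>, Publisher<R> {
}
```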

Reactive Streams Implementations for Java

The reactive landscape in Java is evolving and maturing. David Karnok has a great blog post, Advanced Reactive Java, in which he breaks down the various reactive projects into generations. I’ll note the generation of each below (which may change at any time with a new release).

RxJava

RxJava is the Java implementation out of the ReactiveX project.  At the time of writing, the ReactiveX project had implementations for Java, Javascript, .NET (C#), Scala, Clojure, C++, Ruby, Python, PHP, Swift and several others.

ReactiveX provides a reactive twist on the GoF Observer pattern, which is a nice approach. ReactiveX calls their approach ‘Observer Pattern Done Right’.

ReactiveX is a combination of the best ideas from the Observer pattern, the Iterator pattern, and functional programming.

RxJava predates the Reactive Streams specification. While RxJava 2.0+ does implement the Reactive Streams API specification, you’ll notice a slight difference in terminology.

David Karnok, who is a key committer on RxJava, considers RxJava a 3rd Generation reactive library.

Reactor

Reactor is a Reactive Streams compliant implementation from Pivotal. As of Reactor 3.0, Java 8 or above is a requirement.

The reactive functionality found in Spring Framework 5 is built upon Reactor 3.0.

Reactor is a 4th generation reactive library. (David Karnok is also a committer on project Reactor)

Akka Streams

Akka Streams also fully implements the Reactive Streams specification. Akka uses Actors to deal with streaming data. While Akka Streams is compliant with the Reactive Streams API specification, the Akka Streams API is completely decoupled from the Reactive Streams interfaces.

Akka Streams is considered a 3rd generation reactive library.

Ratpack

Ratpack is a set of Java libraries for building modern high-performance HTTP applications. Ratpack uses Java 8, Netty, and reactive principles. Ratpack provides a basic implementation of the Reactive Stream API, but is not designed to be a fully-featured reactive toolkit.

Optionally, you can use RxJava or Reactor with Ratpack.

Vert.x

Vert.x is an Eclipse Foundation project, which is a polyglot event driven application framework for the JVM. Reactive support in Vert.x is similar to Ratpack. Vert.x allows you to use RxJava or their native implementation of the Reactive Streams API.

Reactive Streams and JVM Releases

Reactive Streams for Java 1.8

With Java 1.8, you will find robust support for the Reactive Streams specification.

In Java 1.8 Reactive streams is not part of the Java API. However, it is available as a separate jar.

Reactive Streams Maven Dependency
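The dependency coordinates are:

```xml
<dependency>
    <groupId>org.reactivestreams</groupId>
    <artifactId>reactive-streams</artifactId>
    <version>1.0.0</version>
</dependency>
```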

While you can include this dependency directly, whatever implementation of Reactive Streams you are using should include it automatically as a dependency.

Reactive Streams for Java 1.9

Things change a little bit when you move to Java 1.9. Reactive Streams becomes part of the official Java 9 API.

You’ll notice that the Reactive Streams interfaces move under the Flow class in Java 9. But other than that, the API is the same as Reactive Streams 1.0 in Java 1.8.

Conclusion

At the time of writing, Java 9 is right around the corner. In Java 9, Reactive Streams is officially part of the Java API.

In researching this article, it’s clear the various reactive libraries have been evolving and maturing (i.e., David Karnok’s generations classification).

Before Reactive Streams, the various reactive libraries had no way of interoperating. They could not talk to each other. Early versions of RxJava were not compatible with early versions of Project Reactor.

But on the eve of the release of Java 9, the major reactive libraries have adopted the Reactive Streams specification. The different libraries are now interoperable.

Having the interoperability is an important domino to fall. For example, Mongo DB has implemented a Reactive Streams driver. Now, in our applications, we can use Reactor or RxJava to consume data from a Mongo DB.

We’re still early in the adoption of Reactive Streams. But over the next year or so, we can expect more and more open source projects to offer Reactive Streams compatibility.

I expect we are going to see a lot more of Reactive Streams in the near future.

It’s fun time to be a Java developer!


Transcript

Okay, I’d like to do a quick code review of my Spring Boot Mongo DB example application. This is up on GitHub. And you can find it under my repository springframeworkguru/spring-boot-mongodb. (Pretty creative name there.)

This is an example Spring Boot application connecting to Mongo DB. Not necessarily running in Docker. It can connect to any Mongo DB database.

So let’s take a quick look at the code in it, and do a quick overview.  I already have it loaded up in IntelliJ.

So, let’s start from the domain up. And what I have is a product, and this is a standard mapping class for Spring, mapping out to a Mongo document. So no big mystery there. We just have a product class with an ID, description, price, and image URL. So, nothing terribly exciting there.

Now I do have a couple converters. So I am using a form. A command form – some people call it a command object – to back it. So this converts it back and forth. So this is my product form. It’s a command object. So really the biggest difference there is that we are treating the ID as a string, because the Mongo ID type does not really transfer over to the web tier very well. So we do need to convert that back and forth.

So the next thing to look at is our standard Spring MVC controller. This is the controller facing out to the web and handling web requests for us.

I did jump up a little bit, and if you’re following along, this pattern is gonna look very, very familiar. So, I have a service layer that interacts with my controller layer. So this is my interface for the service. Then there’s my implementation of it.

And I am using Spring Data repositories for this. And also wiring in my converter. I wire in the two dependencies. One is the product repository, which is provided by Spring Data. And this is the Mongo DB implementation that I am using. Everything is wired up.

And then finally we have a couple Thymeleaf templates here that we use to show the data from Mongo. I’m not going to get into all the details here. But you can see what’s going on here.

So nothing too terribly creative here. It’s just a quick and dirty way to get this working. It is not production grade by any means.

And then let’s take a quick look at the dependencies. You can see there on line 17 I am in fact running Spring Boot 1.5.1, and that is the most current release of Spring Boot at the time of recording.

I’m bringing in a couple other dependencies, and these are important. So I bring in the Spring Boot starter data for Mongo DB. As well as Thymeleaf, web, and of course the test starter.

And this POM is fairly untouched since I pulled it off of Josh Long’s favorite web site, spring.start.io. I’m sorry – start.spring.io. And that’s the URL, so you can grab this at any time.

So let’s go ahead and take this for a spin. What I’m going to do – I have a command line ready over here.

And I have Docker there, so that’s a standard run command for Docker. Docker run, map out the ports, for the latest image of Mongo. And the minus d parameter tells it to run in the background.
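The command is along these lines:

```bash
# run the latest Mongo image, map the default Mongo port, detach to the background
docker run -p 27017:27017 -d mongo
```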

So that is now running in Docker. Use the command docker ps to see that it is running.

And let’s do docker logs minus f with the image name here. So now you can see that it is running. Let’s bounce over to IntelliJ and I am going to start up my Spring Boot application. We can see Spring Boot is initializing. Pretty light project, so it’s going to come up pretty quick. And it is running on Tomcat now. Look at the other window and you see that I have a new connection established to Docker (to Mongo). I don’t have the log level turned up on this, so we won’t see any database activity.

Let’s come over here to localhost:8080, and that’s going to do a redirect to the product list. I did not initialize any data in the database, but I can come in and create a new product: “Product”, $22, and “url” in the URL field. It’s not doing any data type checking there. It will show me that this is created. This is the Mongo ID that was created. So now, I come back over here and I see that it is listed. And should I want to edit it, say to “new product” and 2222, I am going to submit it, and we see that that has been updated. If I come back to the index again it redirects to product slash list. And I don’t have too much interesting here, but it shows that the update has been persisted and is getting pulled out of the Mongo database.


Out of the box, Spring Boot is very easy to use with the H2 Database. Spring programmers typically prefer writing code against such lightweight in-memory database, rather than on an enterprise database server such as Microsoft SQL Server or Oracle.

In-memory databases come with several restrictions, making them useful only in the development stages in local environments. While in-memory databases are great to develop against, data is not persisted to disk, and thus is lost when the database is shut down.

As the development progresses, you would most probably require an RDBMS to develop and test your application before deploying it to use a production database server. I have written a series of posts on integrating Spring Boot for Oracle, MySQL, MariaDB, and PostgreSQL.

Spring makes switching between RDBMSs simple. When you’re using Spring Data JPA with an ORM technology such as Hibernate, the persistence layer is nicely decoupled, which allows you to easily run your code against multiple databases. The level of decoupling even allows you to easily switch between an RDBMS and a NoSQL database, such as MongoDB. One of my previous posts, on integrating Spring Boot with MongoDB, covers that.

In this post, I will discuss Spring Boot configuration for Microsoft SQL Server.

SQL Server Configuration

For this post, I’m using SQL Server 2014 Express installed locally on my laptop. I used SQL Server 2014 Management Studio to connect to the database server using SQL Server Authentication.
Connect To SQL Server
Once you are logged in, create a springbootdb database from the Object Explorer window.
Configure SQL Server database for use with Spring Boot

A common problem that trips up many Java developers trying to connect to SQL Server is this error:

com.microsoft.sqlserver.jdbc.SQLServerException: The TCP/IP connection to the host localhost, port 1433 has failed. Error: “Connection refused: connect. Verify the connection properties, check that an instance of SQL Server is running on the host and accepting TCP/IP connections at the port, and that no firewall is blocking TCP connections to the port.”.

I too learned the hard way to resolve it with these steps.

    1. From the Start menu, open SQL Server 2014 Configuration Manager.
    2. Click Protocol for SQLEXPRESS under SQL Server Network Configuration on the left pane. On the right pane, right-click TCP/IP, and select Properties.
    3. On the TCP/IP Properties dialog box that appears, click the IP Addresses tab.
    4. Scroll down to locate the IPALL node. Remove any value, if present, for TCP Dynamic Ports and specify 1433 for TCP Port.

TCP/IP Properties for SQL Server

  5. Click OK.
  6. Again right-click TCP/IP on the right pane, and select Enable.
  7. On the SQL Server Services node, right-click SQL Server (SQLEXPRESS), and select Restart.

This sets up SQL Server to be reached from JDBC code.

SQL Server Dependencies

To connect with SQL Server from Java applications, Microsoft provides the Microsoft JDBC Driver for SQL Server. However, until November 2016, Maven did not directly support the driver, as it was not open source. By making it open source, Microsoft finally made the driver available on the Maven Central Repository. More information can be found here.

The Maven POM file of my Spring Boot application that brings in the database driver is this.

pom.xml

Spring Boot Properties

We need to override the H2 database properties being set by default in Spring Boot. The nice part is, Spring Boot sets default database properties only when you don’t. So, when we configure SQL Server for use, Spring Boot won’t set up the H2 database anymore.

The following data source configurations are required to configure SQL Server with Spring Boot.

application.properties
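A plausible configuration – adjust the credentials for your own server (line 7 is the Hibernate setting discussed below):

```properties
spring.datasource.url=jdbc:sqlserver://localhost;databaseName=springbootdb
spring.datasource.username=sa
spring.datasource.password=<your-password>
spring.datasource.driver-class-name=com.microsoft.sqlserver.jdbc.SQLServerDriver

spring.jpa.show-sql=true
spring.jpa.hibernate.ddl-auto=create-drop
```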

As we are using JPA, we need to configure Hibernate for SQL Server too. Line 7 tells Hibernate to recreate the database on startup. This is definitely not the behavior we want if this were actually a production database. You can set this property to the following values: none, validate, update, create-drop.

For a production database, you probably want to use validate.

Spring Framework 5
Become a Spring Framework 5 Guru!

JPA Entity

In our example application, we will perform CRUD operations on a user. For that, we will write a simple JPA entity, User, for our application. I have written a post on using Spring Data JPA in a Spring Boot web application, and so won’t go into JPA here.

User.java
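A minimal sketch of the entity – the field names here are my choice:

```java
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.Table;

@Entity
@Table(name = "users") // "user" can collide with a reserved word in SQL Server
public class User {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Integer id;

    private String name;

    // getters and setters omitted for brevity
}
```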

JPA Repository

Spring Data JPA CRUD Repository is a feature of Spring Data JPA that I extensively use. Using it, you can just define an interface that extends CrudRepository to manage entities for most common operations, such as saving an entity, updating it, deleting it, or finding it by id. Spring Data JPA uses generics and reflection to generate the concrete implementation of the interface we define.

For our User domain class we can define a Spring Data JPA repository as follows.

UserRepository.java
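The whole repository interface is just this:

```java
import org.springframework.data.repository.CrudRepository;

public interface UserRepository extends CrudRepository<User, Integer> {
}
```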

That’s all we need to setup in Spring Boot to use SQL Server.

Let’s write some test code for this setup.

UserRepositoryTest.java

For the test, I have used JUnit. To know more about JUnit, you can refer to my series on JUnit testing.
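A sketch of such a test, using the Boot 1.x-era Spring Data API:

```java
import static org.junit.Assert.assertNotNull;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit4.SpringRunner;

@RunWith(SpringRunner.class)
@SpringBootTest
public class UserRepositoryTest {

    @Autowired
    private UserRepository userRepository;

    @Test
    public void testSaveAndFetchUser() {
        User user = new User();
        user.setName("John Doe");

        User saved = userRepository.save(user);

        // findOne() is the pre-Spring-Data-2.0 lookup method
        assertNotNull(userRepository.findOne(saved.getId()));
    }
}
```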

The result of the JUnit test is this.

JUnit Test Result for SQL Server

Conclusion

As you can see, it is very easy to configure Spring Boot for SQL Server. As usual, Spring Boot will auto-configure sensible defaults for you. And as needed, you can override the default Spring Boot properties for your specific application.


I’ve been playing with Docker a lot recently to deploy Spring Boot applications.  Docker is very cool. I’ve been learning a lot about it.

This is my unofficial Docker Cheat sheet. Use with caution!

Got any tips and tricks? Comment below, and I’ll try to update this.

List all Docker Images
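These are the commands as I use them – anything in angle brackets below is a placeholder for your own container or image ID:

```bash
docker images -a
```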

List All Running Docker Containers
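```bash
docker ps
```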

List All Docker Containers
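```bash
docker ps -a
```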

Start a Docker Container
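```bash
docker start <container-id>
```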

Stop a Docker Container
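```bash
docker stop <container-id>
```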

Kill All Running Containers
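```bash
docker kill $(docker ps -q)
```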

View the logs of a Running Docker Container
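```bash
docker logs -f <container-id>
```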

Delete All Stopped Docker Containers

Use the -f option to nuke running containers too.
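```bash
docker rm $(docker ps -a -q)
```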

Remove a Docker Image
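```bash
docker rmi <image-id>
```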

Delete All Docker Images
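```bash
docker rmi $(docker images -q)
```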

Delete All Untagged (dangling) Docker Images
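```bash
docker rmi $(docker images -q -f dangling=true)
```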

Remove Dangling Volumes
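```bash
docker volume rm $(docker volume ls -f dangling=true -q)
```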

SSH Into a Running Docker Container

Okay not technically SSH, but this will give you a bash shell in the container.
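```bash
docker exec -it <container-id> bash
```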

Use Docker Compose to Build Containers

Run from directory of your docker-compose.yml file.
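```bash
docker-compose build
```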

Use Docker Compose to Start a Group of Containers

Use this command from directory of your docker-compose.yml file.

This will tell Docker to fetch the latest version of the container from the repo, and not use the local cache.
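Something like this – the pull is what forces the fresh image:

```bash
docker-compose pull
docker-compose up -d
```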

This can be problematic if you’re doing CI builds with Jenkins and pushing Docker images to another host, or using them for CI testing. I was deploying a Spring Boot web application from Jenkins, and found the Docker container was not getting refreshed with the latest Spring Boot artifact.

Follow the Logs of Running Docker Containers With Docker Compose
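```bash
docker-compose logs -f
```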

Save a Running Docker Container as an Image
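```bash
docker commit <container-id> <image-name>
```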

Follow the logs of one container running under Docker Compose
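```bash
docker-compose logs -f <service-name>
```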

Introduction to Docker Course
Check out my Free Introduction to Docker Course!

Dockerfile Hints for Spring Boot Developers

Add Oracle Java to an Image

For CentOS/RHEL
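A sketch of what that looks like for CentOS/RHEL – Oracle’s JDK download URL changes with each release, so treat the version and URL below as placeholders you must fill in yourself:

```dockerfile
FROM centos:7

# Oracle requires accepting the license cookie to download the JDK;
# the URL/version here is a placeholder - check oracle.com for the current one
RUN yum install -y wget && \
    wget --no-cookies --no-check-certificate \
      --header "Cookie: oraclelicense=accept-securebackup-cookie" \
      -O /tmp/jdk-8-linux-x64.rpm \
      "http://download.oracle.com/otn-pub/java/jdk/<version>/jdk-8-linux-x64.rpm" && \
    yum localinstall -y /tmp/jdk-8-linux-x64.rpm && \
    rm /tmp/jdk-8-linux-x64.rpm

ENV JAVA_HOME /usr/java/default
```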