Working for startups is always an interesting experience. Currently, I’m a software engineer at Velo Payments. If you’ve ever worked for a startup, you’ll quickly see that you get to wear many hats.

One of the hats I get to wear is the creation of our developer center (currently in the oven). In the very near future, Velo will be exposing a set of financial APIs to move money around the world.

Within the developer center, we hope to introduce hundreds, if not thousands, of consumers to our APIs.

API development is never an easy task. And evolving APIs is even more complicated.

Our use case raises a number of concerns:

  • How can we ensure we don’t inadvertently release a breaking change to our APIs?
  • How do we communicate how to use our APIs?
  • How do we document our APIs?
  • How do we automate testing of our APIs?
  • Can we do all this and remain technology agnostic?

There is a plethora of tools available for our use. Yet none is ‘just right’.

We clearly have a use case for Consumer Driven Contracts. To summarize the folks at ThoughtWorks:

Consumer-Driven Contracts are a pattern for evolving services. In Consumer-Driven Contracts, each consumer captures their expectations of the provider in a separate contract. All of these contracts are shared with the provider so they gain insight into the obligations they must fulfill for each individual client. The provider can create a test suite to validate these obligations. This lets them stay agile and make changes that do not affect any consumer, and pinpoint consumers that will be affected by a required change for deeper planning and discussion.

In a nutshell, a ‘contract’ can be looked at as a request / response pair. You give the API x, and can expect the API to return y. Contracts are a technique for defining API interactions.

Contracts however, do a very poor job of documenting APIs.

For our use case of releasing public APIs, we want a technology agnostic method of documenting our APIs. Currently, Open API is a clear leader in this domain.

In 2015, SmartBear donated the Swagger 2.0 specification to what became the Open API Initiative, a consortium of companies including 3Scale, Apigee, Capital One, Google, IBM, Intuit, Microsoft, PayPal, and Restlet.

In the summer of 2017, the Open API Initiative released the Open API 3.0 Specification. (Say adios to the name ‘Swagger’)

Open API 3.0 specifications can be written in JSON or YAML, and do an excellent job of documenting RESTful APIs.

The Open API Specification does not however, define API interactions.

The Open API 3.0 Specification does however, define extensions.

Through the use of Open API Specification Extensions, we can define Consumer Driven Contracts.

In this post I’m going to show you how you can define Consumer Driven Contracts in the Open API 3.0 Specification for Spring Cloud Contract.

If you are not familiar with Spring Cloud Contract, please see my post showing how to use Spring Cloud Contract.

Spring Cloud Contract Groovy DSL

One of my initial concerns with Spring Cloud Contract was the need to define the contracts in Groovy, and in a very Spring-specific Groovy DSL. It’s not something that would be portable to other technologies.
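For reference, the Groovy DSL looks something like this (a sketch modeled on the Spring Cloud Contract fraud-check sample; field names and values are illustrative):

```groovy
import org.springframework.cloud.contract.spec.Contract

Contract.make {
    description "Should mark client as fraud"
    request {
        method 'PUT'
        url '/fraudcheck'
        headers {
            contentType('application/json')
        }
        body([
            "client.id": "1234567890",
            loanAmount: 99999
        ])
    }
    response {
        status 200
        headers {
            contentType('application/json')
        }
        body([
            fraudCheckStatus: "FRAUD",
            "rejection.reason": "Amount too high"
        ])
    }
}
```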

Spring Cloud Contract YAML DSL

Here is the same contract expressed in YAML
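A sketch, again modeled on the Spring Cloud Contract samples:

```yaml
description: Should mark client as fraud
request:
  method: PUT
  url: /fraudcheck
  headers:
    Content-Type: application/json
  body:
    "client.id": "1234567890"
    loanAmount: 99999
response:
  status: 200
  headers:
    Content-Type: application/json
  body:
    fraudCheckStatus: "FRAUD"
    "rejection.reason": "Amount too high"
```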

Better. I like YAML, since it’s technology agnostic. Someone could port this DSL to a different technology stack.

Other Concerns about Spring Cloud Contract DSLs

Don’t Repeat Yourself

As Java developers, roughly since we learned how to write “Hello World”, ‘Don’t Repeat Yourself’, aka ‘DRY’, has been beaten into our heads.

Let’s say you have several conditions you wish to test for an endpoint. You’ll be duplicating a lot of code. Elements like the URL and content type will get repeated over and over. Clearly violating DRY!

And if you documented your API using Open API or Swagger, the DRY violations get even worse!

Consider that Spring Cloud Contract will define, for each contract, things like:

Spring Cloud Contract

  • Request / Response pairs
  • Paths
  • Parameters
  • Headers
  • Cookies
  • HTTP Methods
  • HTTP Status Codes

While the Open API Specification defines:

Open API

  • Paths
  • Parameters
  • Headers
  • Cookies
  • HTTP Methods
  • HTTP Status Codes
  • Request Schemas
  • Response Schemas

Consider the overlap:

Spring Cloud Contract / Open API

  • Paths
  • Parameters
  • Headers
  • Cookies
  • HTTP Methods
  • HTTP Status Codes
  • Request / Response Objects

Now we have DRY violations stacking up like flights going into Chicago O’Hare!

What if I wish to refactor a URL path? Now I’m updating the controller source code, tests, contracts, and API documentation.

Thank god our IDEs have search and replace capabilities!

You Can’t Handle the Truth!

In my use case, the APIs under development will be public.

Thus, we do need solid API documentation. It does not need to be Open API. But it does need to be some type of friendly, human readable documentation.

As you start to define API attributes in contracts and in API documentation the question starts to become “what is the single source of truth for the API?”

One could argue it should be the API documentation.

Yet, just as easy to say it should be the consumer driven contracts.

Whose API is it Anyway?

If we can’t determine the single source of truth for the API, who is the owner of the API?

Do the consumer driven contracts own the API? If so, the API documentation needs to conform to the contracts when there is a difference.

Or is the API definitively defined by the documentation? Thus, contracts must adhere to the API documentation.

Again a situation where valid arguments can be made for either one.

Contract First vs Code First vs Document First

Do you write contracts first?

Do you code first?

Do you write API documentation first?

We’re mostly developers, so code first, right???

What if we could write the API specification and contracts at the same time?

I know this whole area is subject to some very spirited debate. Not something I’ll be able to solve in this post.

Personally, I’m leaning more and more towards having the specification first, then contracts, then code.

Yes, there is tooling to generate Swagger / Open API specifications from Spring source code. My biggest hesitation there is: how do you prevent inadvertent breaking changes? Since your specification is generated from the source code, it will always be ‘right’. Even after you’ve broken a consumer.

Spring Cloud Contract Open API 3.0 Contract Converter

It is actually now possible to write Spring Cloud Contract definitions using Open API 3.0 with my Spring Cloud Contract Open API Contract Converter or SCC OA3 Converter for short.

Having the API specification and API documentation in a single document addresses many of the concerns above.

  • DRY violations are minimized!
  • A single source of truth for the API
  • The API is defined by the API Specification
  • Clear ownership of what the API is!

In a nutshell, the SCC OA3 Converter embeds the SCC YAML DSL in OA3 extensions.

From the SCC OA3 Converter, you can expect:

  • Near 100% compatibility to the SCC YAML DSL (still testing edge cases)
  • The ability to define multiple contracts in OA3
  • Minimal violations of DRY
  • Having a single document which defines your API
  • The resulting OA3 Specification is 100% compatible with other OA3 tooling.

Open API 3.0 Consumer Driven Contracts Example

Spring Cloud Contract YAML Definitions

First, let’s explore two contracts written using the existing YAML DSL of Spring Cloud Contract.

These two examples are from the YAML samples available in the Spring Cloud Contract GitHub Repository. I’m leaving the comments in to help explain what each contract is doing.

Contract 1 – Should Mark Client as Fraud
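A reconstruction adapted from the Spring Cloud Contract YAML samples (exact values and matchers may differ from the repository’s current version):

```yaml
# A PUT to /fraudcheck with a high loan amount should be marked as fraud
request:
  method: PUT
  url: /fraudcheck
  headers:
    Content-Type: application/json
  body:
    "client.id": "1234567890"
    loanAmount: 99999
  matchers:
    body:
      # any 10-digit client id should match this contract
      - path: $.['client.id']
        type: by_regex
        value: "[0-9]{10}"
response:
  status: 200
  headers:
    Content-Type: application/json;charset=UTF-8
  body:
    fraudCheckStatus: "FRAUD"
    "rejection.reason": "Amount too high"
```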

Contract 2 – Should Mark Client as Not Fraud
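And the second contract, again adapted from the samples:

```yaml
# A PUT to /fraudcheck with a modest loan amount should not be marked as fraud
request:
  method: PUT
  url: /fraudcheck
  headers:
    Content-Type: application/json
  body:
    "client.id": "1234567890"
    loanAmount: 123.123
  matchers:
    body:
      - path: $.['client.id']
        type: by_regex
        value: "[0-9]{10}"
response:
  status: 200
  headers:
    Content-Type: application/json;charset=UTF-8
  body:
    fraudCheckStatus: "OK"
    "rejection.reason": null
```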

It’s fairly clear what these two contracts are testing for.

Spring Cloud Contract Open API 3.0 Contracts

Here are the same contracts expressed using the Open API Specification.

Following the spirit of DRY, contract elements which can be derived from the Open API Specification, such as the path, are derived from it.

Elements which relate to defining the API interaction are defined in Open API Extensions.

Any property which starts with an ‘x-‘ is an Open API Extension object. As much as possible, the extension objects are modeled after the Spring Cloud Contract YAML DSL.

Open API 3.0 Contracts Example

This is the complete example. Following this example, I’ll break things down in depth.

Let’s break down how the contracts are defined in the Open API Specification.

Contract Definition

At a high level, contracts are defined using an extension on the Open API Operation Object.

In this snippet, I’m defining two contracts.

Open API Snippet
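A sketch of how the Operation Object is extended (summary text and contract names are illustrative):

```yaml
paths:
  /fraudcheck:
    put:
      summary: Perform a fraud check
      operationId: fraudCheck
      x-contracts:
        - contractId: 1
          name: Should mark client as fraud
        - contractId: 2
          name: Should mark client as not fraud
```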

Both contracts will be applied against the path ‘/fraudcheck’ and the HTTP verb PUT.

The extension object ‘x-contracts’ is a list. The objects in the list are expected to have a contract ID. This ID property is important since it allows us to tie together properties of the contract defined in other sections of the Open API Specification.

Contract Request Definition

To define the request of the contract, the Open API Request Body Object is extended.

In this snippet, you can see how the Request Body is extended.

From the Open API Specification, we can determine the request should use ‘application/json’ for Content Type.

Then under the ‘x-contracts’ property, the request properties for two contracts are defined.

Open API Snippet
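A sketch of the extended Request Body Object (the schema reference is illustrative):

```yaml
requestBody:
  required: true
  content:
    application/json:
      schema:
        $ref: '#/components/schemas/FraudCheckRequest'
  x-contracts:
    - contractId: 1
      body:
        "client.id": "1234567890"
        loanAmount: 99999
      matchers:
        body:
          - path: $.['client.id']
            type: by_regex
            value: "[0-9]{10}"
    - contractId: 2
      body:
        "client.id": "1234567890"
        loanAmount: 123.123
```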

Contrast the above to this snippet from the Spring Cloud Contract YAML DSL.

Spring Cloud Contract YAML DSL Snippet
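The corresponding request section of the YAML DSL (a sketch based on the samples):

```yaml
request:
  method: PUT
  url: /fraudcheck
  headers:
    Content-Type: application/json   # repeated in every contract
  body:
    "client.id": "1234567890"
    loanAmount: 99999
  matchers:
    body:
      - path: $.['client.id']
        type: by_regex
        value: "[0-9]{10}"
```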

The body and matchers elements are the same.

The content type is not needed, since it is derived from the Open API Specification.

Contract Response Definition

To define the expected response for a given contract, the Open API Response Object is extended.

In the snippet below, the Open API Response object is the ‘200’ YAML property.

From the Open API properties, we can infer the expected response should have an HTTP status of 200, and the expected content type is ‘application/json’.

The response object is extended with the ‘x-contracts’ property.

In this example, you can see the expected response properties defined for two contracts.

Open API Snippet
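A sketch of the extended Response Object (the schema reference is illustrative):

```yaml
responses:
  '200':
    description: Fraud check result
    content:
      application/json:
        schema:
          $ref: '#/components/schemas/FraudCheckResult'
    x-contracts:
      - contractId: 1
        body:
          fraudCheckStatus: "FRAUD"
          "rejection.reason": "Amount too high"
      - contractId: 2
        body:
          fraudCheckStatus: "OK"
          "rejection.reason": null
```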

Again, let’s contrast this against the original Spring Cloud Contract YAML DSL example.

Here you can see we’re expecting an HTTP 200 status and a content type of ‘application/json’ (both defined in the Open API Specification properties above).

And again the body and matchers elements remain the same.

Spring Cloud Contract YAML DSL Snippet
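The corresponding response section of the YAML DSL (a sketch based on the samples):

```yaml
response:
  status: 200                                      # derived from the OA3 response key
  headers:
    Content-Type: application/json;charset=UTF-8   # derived from the OA3 content block
  body:
    fraudCheckStatus: "FRAUD"
    "rejection.reason": "Amount too high"
```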

Next Steps

How to Define Your Own Contracts in Open API 3.0

If you’d like to try defining your own contracts for Spring Cloud Contract, please see my GitHub Repository. Here you will find complete directions on how to configure Maven and additional examples.

The above examples reference a common example used in the Spring Cloud Contract stand alone examples. You can find a complete example of a stand alone reference project here in GitHub. In this example, I literally copied the Java classes used in the Spring Cloud Contract YAML example, deleted the YAML contracts, and re-wrote them in Open API 3.0.

Help Wanted!

My Open API Contract converter is in its initial release. Spring Cloud Contract has a variety of examples of YAML contracts in their unit tests. I’d like to convert the remaining YAML contracts to Open API 3.0 contracts and write unit tests for them. This is an area I’d love to get some help with.

If you’d like to contribute to this project, please see the open issues here. I’ve also set up a Gitter room where you can communicate with me and others contributing to the project.

Atlassian’s Swagger Request Validator

Another tool I wish to explore is Atlassian’s Swagger Request Validator. They’ve added support for the Open API 3.0 specification just in the last few weeks. I want to see what additional assertions can be automated from properties defined in the API specification.

API Documentation for Humans

The Open API examples we’ve been looking at in this post are in YAML. YAML is great for computers, but not so great for humans.

The folks from Rebilly have open sourced their API documentation. They have a parser which consumes the Open API YAML to produce very rich API documentation using ReactJS. You can see an example here. I’m currently looking at using this tool to document Velo’s public APIs.

Special Thanks

Special thanks to Marcin Grzejszczak, one of the primary authors of Spring Cloud Contract. He’s been very helpful with Spring Cloud Contract in general, and in guiding me on how to write the Open API 3.0 contract parser.

In Summary

Developing quality APIs is challenging. For the public APIs I’m supporting, using the Open API specification was an easy choice.

If I can provide an Open API specification of my APIs to others, now they have a tool they can leverage. I don’t know if my API consumers will be using Spring, .NET, Python, Ruby, or whatever.

Due to the popularity of Open API and Swagger there are a ton of tools to choose from.

Using the Open API Specification, I can:

  • Generate Server Side and Client Side stubs in roughly a gazillion different languages.
  • Create documentation in markdown
  • Insert request / response samples.
  • Provide code samples
  • Auto-generate code for Pact, Wiremock, RestAssured, Spring MockMVC via the Atlassian tools mentioned above.
  • Interact with the APIs via Swagger UI
  • Generate rich friendly API documentation like this Rebilly example. (Rebilly is just one example, there are many others)
  • And much more.

Seems like you can do more and more with Open API. You can even get a validator badge for GitHub. (OA3 support coming soon)

And now, you can define Consumer Driven Contracts for Spring Cloud Contract in Open API 3.0!


@RequestMapping is one of the most common annotations used in Spring Web applications. This annotation maps HTTP requests to handler methods of MVC and REST controllers.

In this post, you’ll see how versatile the @RequestMapping annotation is when used to map Spring MVC controller methods.

Request Mapping Basics

In Spring MVC applications, the DispatcherServlet (the Front Controller) is responsible for routing incoming HTTP requests to handler methods of controllers.

When configuring Spring MVC, you need to specify the mappings between the requests and handler methods.

To configure the mapping of web requests, you use the @RequestMapping annotation.

The @RequestMapping annotation can be applied at the class level and/or the method level in a controller.

The class-level annotation maps a specific request path or pattern onto a controller. You can then apply additional method-level annotations to make mappings more specific to handler methods.

Here is an example of the @RequestMapping annotation applied to both class and methods.
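A minimal sketch (class and return values are illustrative):

```java
@Controller
@RequestMapping("/home")
public class HomeController {

    // handles requests to /home
    @RequestMapping("/")
    @ResponseBody
    public String get() {
        return "Hello from get()";
    }

    // handles requests to /home/index
    @RequestMapping("/index")
    @ResponseBody
    public String index() {
        return "Hello from index()";
    }
}
```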

With the preceding code, requests to /home will be handled by get(), while requests to /home/index will be handled by index().

Spring Framework 5
Learn Spring Framework 5 with my Spring Framework 5: Beginner to Guru course!

@RequestMapping with Multiple URIs

You can have multiple request mappings for a method. To do so, add a single @RequestMapping annotation with a list of values.
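A sketch of one handler method mapped to several URI patterns (the patterns mirror the URL list below):

```java
@Controller
@RequestMapping("/home")
public class IndexController {

    // one annotation, several URI patterns, wildcards included
    @RequestMapping(value = {"", "/page", "page*", "view/*"})
    @ResponseBody
    public String indexMultipleMapping() {
        return "Hello from indexMultipleMapping()";
    }
}
```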

As you can see in this code, @RequestMapping supports wildcards and ant-style paths. For the preceding code, all these URLs will be handled by indexMultipleMapping().

  • localhost:8080/home
  • localhost:8080/home/
  • localhost:8080/home/page
  • localhost:8080/home/pageabc
  • localhost:8080/home/view/
  • localhost:8080/home/view/view

@RequestMapping with @RequestParam

The @RequestParam annotation is used with @RequestMapping to bind a web request parameter to the parameter of the handler method.

The @RequestParam annotation can be used with or without a value. The value specifies the request param that needs to be mapped to the handler method parameter, as shown in this code snippet.
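A sketch of both forms (paths and method names are illustrative):

```java
@Controller
@RequestMapping("/home")
public class RequestParamController {

    // explicit value: request param 'id' is bound to the method parameter personId
    // e.g. /home/id?id=10
    @RequestMapping("/id")
    @ResponseBody
    public String getIdByValue(@RequestParam("id") String personId) {
        return "ID is " + personId;
    }

    // value omitted: the request param and method parameter are both named 'personId'
    // e.g. /home/personId?personId=10
    @RequestMapping("/personId")
    @ResponseBody
    public String getId(@RequestParam String personId) {
        return "ID is " + personId;
    }
}
```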

In this code, the request param id is bound to the personId parameter of the getIdByValue() handler method.


The value element of @RequestParam can be omitted if the request param and the handler method parameter have the same name.


The required element of @RequestParam defines whether the parameter value is required or not.
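A sketch of an optional request param (names are illustrative):

```java
// person is optional: both /home/name?person=xyz and /home/name are handled
@RequestMapping("/name")
@ResponseBody
public String getName(@RequestParam(value = "person", required = false) String personName) {
    return "Name is " + personName;
}
```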

In this code snippet, as the required element is specified as false, the getName() handler method will be called for both of these URLs:

  • /home/name?person=xyz
  • /home/name

The defaultValue element of @RequestParam is used to provide a default value when the request param is not provided or is empty.
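A sketch using defaultValue:

```java
// when the person param is missing or empty, personName defaults to "John"
@RequestMapping("/name")
@ResponseBody
public String getName(@RequestParam(value = "person", defaultValue = "John") String personName) {
    return "Name is " + personName;
}
```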

In this code, if the person request param is empty in a request, the getName() handler method will receive the default value John as its parameter.

Using @RequestMapping with HTTP Method

The Spring MVC  @RequestMapping annotation is capable of handling HTTP request methods, such as GET, PUT, POST, DELETE, and PATCH.

By default, a @RequestMapping without a method element maps all HTTP request methods.

In order to define a request mapping with a specific HTTP method, you need to declare the HTTP method in @RequestMapping using the method element as follows.
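A sketch mapping each HTTP method of /home to its own handler:

```java
@Controller
@RequestMapping("/home")
public class HttpMethodController {

    @RequestMapping(method = RequestMethod.GET)
    @ResponseBody
    public String get() { return "Hello from get"; }

    @RequestMapping(method = RequestMethod.POST)
    @ResponseBody
    public String post() { return "Hello from post"; }

    @RequestMapping(method = RequestMethod.DELETE)
    @ResponseBody
    public String delete() { return "Hello from delete"; }

    @RequestMapping(method = RequestMethod.PUT)
    @ResponseBody
    public String put() { return "Hello from put"; }

    @RequestMapping(method = RequestMethod.PATCH)
    @ResponseBody
    public String patch() { return "Hello from patch"; }
}
```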

In the code snippet above, the method element of the @RequestMapping annotations indicates the HTTP method type of the HTTP request.

All the handler methods will handle requests coming to the same URL (/home); which one is invoked depends on the HTTP method being used.

For example, a POST request to /home will be handled by the post() method. While a DELETE request to /home will be handled by the delete() method.

You can see how Spring MVC will map the other methods using this same logic.

Using @RequestMapping with Producible and Consumable

The request mapping types can be narrowed down using the produces and consumes elements of the @RequestMapping annotation.

In order to produce the object in the requested media type, you use the produces element of @RequestMapping in combination with the @ResponseBody annotation.

You can also consume the object with the requested media type using the consumes element of @RequestMapping in combination with the @RequestBody annotation.

The code to use producible and consumable with @RequestMapping is this.
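A sketch of both elements (paths and payload types are illustrative):

```java
// produces: the response is rendered as JSON
@RequestMapping(value = "/prod", produces = MediaType.APPLICATION_JSON_VALUE)
@ResponseBody
public Map<String, String> getProduces() {
    return Collections.singletonMap("name", "John");
}

// consumes: the request body may arrive as JSON or XML
@RequestMapping(value = "/cons",
        consumes = {MediaType.APPLICATION_JSON_VALUE, MediaType.APPLICATION_XML_VALUE})
@ResponseBody
public String getConsumes(@RequestBody String body) {
    return "Got " + body;
}
```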

In this code, the getProduces() handler method produces a JSON response. The getConsumes() handler method consumes JSON as well as XML present in requests.

@RequestMapping with Headers

The @RequestMapping annotation provides a headers element to narrow down the request mapping based on headers present in the request.

You can specify the headers element as myHeader=myValue.
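A sketch of a header-narrowed mapping:

```java
// only matched when the content-type header is text/plain
@RequestMapping(value = "/head", headers = {"content-type=text/plain"})
@ResponseBody
public String post() {
    return "Mapping applied along with headers";
}
```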

In the above code snippet, the headers attribute of the @RequestMapping annotation narrows down the mapping to the post() method. With this, the post() method will handle requests to /home/head whose content-type header specifies plain text as the value.

You can also indicate multiple header values like this:
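For example:

```java
// matched when the content-type header is text/plain OR text/html
@RequestMapping(value = "/head",
        headers = {"content-type=text/plain", "content-type=text/html"})
@ResponseBody
public String post() {
    return "Mapping applied along with headers";
}
```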

Here it implies that both text/plain as well as text/html are accepted by the post() handler method.

@RequestMapping with Request Parameters

The params element of the @RequestMapping annotation further helps to narrow down request mapping. Using the params element, you can have multiple handler methods handling requests to the same URL, but with different parameters.

You can define params as myParams = myValue. You can also use the negation operator to specify that a particular parameter value is not supported in the request.
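A sketch of two handlers on the same URL, narrowed by params (parameter values are illustrative):

```java
@Controller
@RequestMapping("/home")
public class ParamsController {

    // matched when the request carries an 'id' param
    @RequestMapping(value = "/fetch", params = {"id"})
    @ResponseBody
    public String getParams(@RequestParam("id") String id) {
        return "Fetched id " + id;
    }

    // matched when the request carries a 'personId' param
    @RequestMapping(value = "/fetch", params = {"personId"})
    @ResponseBody
    public String getParamsDifferent(@RequestParam("personId") String personId) {
        return "Fetched personId " + personId;
    }
}
```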

In this code snippet, both the getParams() and getParamsDifferent() methods will handle requests coming to the same URL ( /home/fetch) but will execute depending on the params element.

For example, when the URL is /home/fetch?id=10, the getParams() handler method is executed with the id value 10. For the URL localhost:8080/home/fetch?personId=20, the getParamsDifferent() handler method is executed with the personId value 20.

Using @RequestMapping with Dynamic URIs

The @RequestMapping annotation is used in combination with the @PathVariable annotation to handle dynamic URIs. In this use case, the URI values can act as the parameters of the handler methods in the controller. You can also use regular expressions to only accept the dynamic URI values that match the regular expression.
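A sketch of both dynamic-URI handlers (the regex pattern is illustrative):

```java
@Controller
@RequestMapping("/home")
public class PathVariableController {

    // /home/fetch/10 -> id = "10"
    @RequestMapping(value = "/fetch/{id}", method = RequestMethod.GET)
    @ResponseBody
    public String getDynamicUriValue(@PathVariable String id) {
        return "ID is " + id;
    }

    // the regex only accepts lowercase letters:
    // /home/fetch/category/shirt matches, /home/fetch/10/shirt does not
    @RequestMapping(value = "/fetch/{id:[a-z]+}/{name}", method = RequestMethod.GET)
    @ResponseBody
    public String getDynamicUriValueRegex(@PathVariable("name") String name) {
        return "Name is " + name;
    }
}
```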

In this code, the method getDynamicUriValue() will execute for a request to localhost:8080/home/fetch/10. Also, the id parameter of the getDynamicUriValue() handler method will be populated with the value 10 dynamically.

The method getDynamicUriValueRegex() will execute for a request to localhost:8080/home/fetch/category/shirt. However, an exception will be thrown for a request to /home/fetch/10/shirt as it does not match the regular expression.

@PathVariable works differently from @RequestParam. You use @RequestParam to obtain the values of the query parameters from the URI. On the other hand, you use @PathVariable to obtain the parameter values from the URI template.

The @RequestMapping Default Handler Method

In the controller class, you can have a default handler method that is executed when a request comes in for a default URI.

Here is an example of a default handler method.
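A sketch (the method is named defaultMethod() here, since default is a reserved word in Java):

```java
@Controller
@RequestMapping("/home")
public class DefaultController {

    // no value on the annotation: handles requests to /home
    @RequestMapping()
    @ResponseBody
    public String defaultMethod() {
        return "This is the default handler for the class";
    }
}
```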

In this code, a request to /home will be handled by the default handler method, as its annotation does not specify any value.


@RequestMapping Shortcuts

Spring 4.3 introduced method-level variants, also known as composed annotations, of @RequestMapping. The composed annotations better express the semantics of the annotated methods. They act as wrappers around @RequestMapping and have become the standard way of defining endpoints.

For example, @GetMapping is a composed annotation that acts as a shortcut for @RequestMapping(method = RequestMethod.GET).
The method level variants are:

  • @GetMapping
  • @PostMapping
  • @PutMapping
  • @DeleteMapping
  • @PatchMapping

The following code shows using the composed annotations.
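A sketch of each composed variant (paths and payload types are illustrative):

```java
@RestController
@RequestMapping("/composed")
public class ComposedMappingController {

    @GetMapping("/person/{id}")
    public String getPerson(@PathVariable String id) {
        return "get " + id;
    }

    @PostMapping("/person")
    public String addPerson(@RequestBody String person) {
        return "post " + person;
    }

    @PutMapping("/person/{id}")
    public String updatePerson(@PathVariable String id, @RequestBody String person) {
        return "put " + id;
    }

    @DeleteMapping("/person/{id}")
    public String deletePerson(@PathVariable String id) {
        return "delete " + id;
    }

    @PatchMapping("/person/{id}")
    public String patchPerson(@PathVariable String id) {
        return "patch " + id;
    }
}
```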

In this code, each of the handler methods is annotated with a composed variant of @RequestMapping. Although each variant can be used interchangeably with @RequestMapping and its method attribute, it’s considered a best practice to use the composed variants, primarily because they reduce the configuration metadata on the application side and make the code more readable.

@RequestMapping Conclusion

As you can see in this post, the @RequestMapping annotation is very versatile. You can use this annotation to configure Spring MVC to handle a variety of use cases. It can be used to configure traditional web page requests, as well as RESTful web services in Spring MVC.



An exciting feature in Spring Framework 5 is the new Web Reactive framework, which allows for reactive web applications. Reactive programming is about developing systems that are fully reactive and non-blocking. Such systems are suitable for event-loop style processing that can scale with a small number of threads.

Spring Framework 5 embraces Reactive Streams to enable developing systems based on the Reactive Manifesto published in 2014.

The Spring Web Reactive framework stands separately from Spring MVC. This is because Spring MVC is built around the Java Servlet API, which uses blocking code. While popular Java application servers such as Tomcat and Jetty have evolved to offer non-blocking operations, the Java Servlet API has not.

From a programming perspective, reactive programming involves a major shift from imperative style logic to a declarative composition of asynchronous logic.

In this post, I’ll explain how to develop a Web Reactive application with the Spring Framework 5.0.

Spring Web Reactive Types

Under the covers, Spring Web Reactive uses Reactor, which is a Reactive Streams implementation. The Spring Framework extends the Reactive Streams Publisher interface with the Flux and Mono reactive types.

The Flux data type represents zero to many objects (0..N), while the Mono data type represents zero to one (0..1).
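For a quick feel for the two types, a minimal sketch using plain Reactor:

```java
// Flux: a stream of zero to many elements (0..N)
Flux<String> products = Flux.just("shirt", "shoes", "hat");
products.map(String::toUpperCase)
        .subscribe(System.out::println);

// Mono: a stream of zero to one element (0..1)
Mono<String> product = Mono.just("shirt");
product.subscribe(System.out::println);
```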

If you’d like a deeper dive on reactive types, check out Understanding Reactive Types by Sebastien Deleuze.

The Web Reactive Application

The application that we will create is a web reactive application that performs operations on domain objects. To keep it simple, we will use an in-memory repository implementation to simulate CRUD operations in this post. In later posts, we will go reactive with Spring Data.

Spring 5 added the new spring-webflux module for reactive programming that we will use in our application. The application is composed of these components:

  • Domain object: Product in our application.
  • Repository: A repository interface with an implementation class to mimic CRUD operations in a Map.
  • Handler: A handler class to interact with the repository layer.
  • Server: A non-blocking web server with a single-threaded event loop. For this application, we will look at how to use both Netty and Tomcat to serve requests.

The Maven POM

For web reactive programming, you need the new spring-webflux and reactive-streams modules as dependencies in your Maven POM.

To host the application in a supported runtime, you need to add its dependency. The supported runtimes are:

  • Tomcat: org.apache.tomcat.embed:tomcat-embed-core
  • Jetty: org.eclipse.jetty:jetty-server and org.eclipse.jetty:jetty-servlet
  • Reactor Netty: io.projectreactor.ipc:reactor-netty
  • Undertow: io.undertow:undertow-core

The code to add dependencies for both embedded Tomcat and Netty is this.
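A sketch of the dependency section (version numbers are omitted or illustrative; check the current releases):

```xml
<dependencies>
    <!-- Spring Web Reactive -->
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-webflux</artifactId>
    </dependency>
    <!-- Embedded Tomcat -->
    <dependency>
        <groupId>org.apache.tomcat.embed</groupId>
        <artifactId>tomcat-embed-core</artifactId>
    </dependency>
    <!-- Reactor Netty -->
    <dependency>
        <groupId>io.projectreactor.ipc</groupId>
        <artifactId>reactor-netty</artifactId>
    </dependency>
    <!-- Jackson for JSON serialization / deserialization -->
    <dependency>
        <groupId>com.fasterxml.jackson.core</groupId>
        <artifactId>jackson-databind</artifactId>
        <version>2.9.0.pr3</version>
    </dependency>
</dependencies>
```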

The final dependency is for reactive serialization and deserialization to and from JSON with Jackson.

Note: This is a pre-release of Jackson, which includes non-blocking serialization and deserialization. (Version 2.9.0 was not released at the time of writing.)

As we are using the latest milestone release of Spring Boot, remember to add the Spring milestones repository:

Here is the complete Maven POM.


The Domain Object

Our application has a Product domain object on which operations will be performed. The code for the Product object is this.
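A sketch of the domain object (the fields shown are illustrative):

```java
public class Product {

    @JsonProperty("id")
    private int id;

    @JsonProperty("name")
    private String name;

    @JsonProperty("price")
    private double price;

    public Product() {
    }

    public Product(int id, String name, double price) {
        this.id = id;
        this.name = name;
        this.price = price;
    }

    public int getId() { return id; }
    public void setId(int id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public double getPrice() { return price; }
    public void setPrice(double price) { this.price = price; }
}
```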


Product is a POJO with fields representing product information. Each field has its corresponding getter and setter methods. @JsonProperty is a Jackson annotation to map external JSON properties to the Product fields.

The Repository

The repository layer of the application is built on the ProductRepository interface with methods to save a product, retrieve a product by ID, and retrieve all products.

In this example, we are mimicking the functionality of a reactive data store with a simple ConcurrentHashMap implementation.
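A sketch of the interface (method signatures follow the descriptions in this post):

```java
public interface ProductRepository {

    // 0..1 products for a given id
    Mono<Product> getProduct(int id);

    // 0..N products
    Flux<Product> getAllProducts();

    // completes when the product has been stored
    Mono<Void> saveProduct(Mono<Product> productMono);
}
```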


The important things in this interface are the new Mono and Flux reactive types of Project Reactor. Both these reactive types, along with the other types of the Reactive API, are capable of serving a huge number of requests concurrently and of handling operations with latency. These types make operations, such as requesting data from a remote server, more efficient. Unlike traditional processing that blocks the current thread while waiting for a result, reactive APIs are non-blocking as they deal with streams of data.

To understand Mono and Flux, let’s look at the two main interfaces of the Reactive API: Publisher, which is the source of events T in the stream and Subscriber, which is the destination for those events.

Both Mono and Flux implement Publisher. The difference lies in cardinality, which is critical in reactive streams.

  • Flux observes 0 to N items and completes either successfully or with an error.
  • A Mono observes 0 or 1 item, with Mono<Void> hinting at zero items.

Note: Reactive APIs were initially designed to deal with N elements, or streams of data. So Reactor initially came only with Flux. But, while working on Spring Framework 5, the team found a need to distinguish between streams of 1 or N elements, so the Mono reactive type was introduced.

Here is the repository implementation class.
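A sketch of the implementation (seed data is illustrative):

```java
public class ProductRepositoryInMemoryImpl implements ProductRepository {

    private final Map<Integer, Product> productMap = new ConcurrentHashMap<>();

    public ProductRepositoryInMemoryImpl() {
        // seed the 'data store' with a couple of products
        productMap.put(1, new Product(1, "Shirt", 9.99));
        productMap.put(2, new Product(2, "Shoes", 49.99));
    }

    @Override
    public Mono<Product> getProduct(int id) {
        // emits the product if present; completes empty for a null value
        return Mono.justOrEmpty(productMap.get(id));
    }

    @Override
    public Flux<Product> getAllProducts() {
        // emits every product in the map
        return Flux.fromIterable(productMap.values());
    }

    @Override
    public Mono<Void> saveProduct(Mono<Product> productMono) {
        // non-blocking: the callback stores each product as it arrives
        return productMono.doOnNext(product -> productMap.put(product.getId(), product))
                          .then();
    }
}
```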


This ProductRepositoryInMemoryImpl class uses a Map implementation to store Product objects.

In the overridden getProduct() method, the call to Mono.justOrEmpty() creates a new Mono that emits the specified item (the Product object in this case), provided the Product object is not null. For a null value, the Mono.justOrEmpty() method completes by emitting onComplete.

In the overridden getAllProducts() method, the call to Flux.fromIterable() creates a new Flux that emits the items ( Product objects) present in the Iterable passed as parameter.

In the overridden saveProduct() method, the call to doOnNext() accepts a callback that stores the provided Product into the Map. This is an example of classic non-blocking programming: execution control does not block and wait for the product storing operation.

The Product Handler

The Product handler is similar to a typical service layer in Spring MVC. It interacts with the repository layer. Following the SOLID Principles we would want client code to interact with this layer through an interface. So, we start with a ProductHandler interface.

The code of the ProductHandler interface is this.
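A sketch of the interface:

```java
public interface ProductHandler {

    Mono<ServerResponse> getProductFromRepository(ServerRequest request);

    Mono<ServerResponse> saveProductToRepository(ServerRequest request);

    Mono<ServerResponse> getAllProductsFromRepository(ServerRequest request);
}
```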


The implementation class, ProductHandlerImpl is this.
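A sketch of the implementation (the exact operators shifted across the Spring 5 milestones, so treat the operator choices here as illustrative):

```java
public class ProductHandlerImpl implements ProductHandler {

    private final ProductRepository repository;

    public ProductHandlerImpl(ProductRepository repository) {
        this.repository = repository;
    }

    @Override
    public Mono<ServerResponse> getProductFromRepository(ServerRequest request) {
        // the product ID comes in on the request path
        int id = Integer.parseInt(request.pathVariable("id"));
        Mono<ServerResponse> notFound = ServerResponse.notFound().build();
        return repository.getProduct(id)
                .flatMap(product -> ServerResponse.ok()
                        .contentType(MediaType.APPLICATION_JSON)
                        .body(BodyInserters.fromObject(product)))
                .switchIfEmpty(notFound);
    }

    @Override
    public Mono<ServerResponse> saveProductToRepository(ServerRequest request) {
        // convert the request body to a Mono, save it, and answer with a success status
        Mono<Product> productMono = request.bodyToMono(Product.class);
        return ServerResponse.ok().build(repository.saveProduct(productMono));
    }

    @Override
    public Mono<ServerResponse> getAllProductsFromRepository(ServerRequest request) {
        // stream every product back as JSON
        return ServerResponse.ok()
                .contentType(MediaType.APPLICATION_JSON)
                .body(repository.getAllProducts(), Product.class);
    }
}
```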


In the getProductFromRepository(ServerRequest request) method of the ProductHandlerImpl class:

  • The product ID is obtained from the request.
  • An HTTP response with the NOT_FOUND status is built as a fallback ServerResponse.
  • The repository is called to obtain the Product as a Mono.
  • A Mono is returned that represents either the Product or the NOT_FOUND HTTP status if the product is not found.

The saveProductToRepository(ServerRequest request) method converts the request body to a Mono, calls the saveProduct() method of the repository to save the product, and finally returns a success status code as an HTTP response.

In the getAllProductsFromRepository() method, the getAllProducts() method of the repository returns a Flux, which is sent back as a JSON response containing all the products.

Running the Application

The example web reactive application has two components. One is the Reactive Web Server. The second is our client.

The Reactive Web Server

Now it is time to wire up all the components together for a web reactive application.

We will use embedded Tomcat as the server for the application, but will also look how to do the same with the lightweight Reactive Netty.

These we will implement in a Server class.
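A sketch of the Server class (imports omitted; static imports of RouterFunctions.route and RequestPredicates.GET/POST are assumed, and the Tomcat setup details may vary by Tomcat version):

```java
public class Server {

    public static final String HOST = "localhost";
    public static final int PORT = 8080;

    public RouterFunction<ServerResponse> routingFunction() {
        ProductRepository repository = new ProductRepositoryInMemoryImpl();
        ProductHandler handler = new ProductHandlerImpl(repository);

        // GET /{id}, GET / and POST / are routed to the handler functions
        return route(GET("/{id}"), handler::getProductFromRepository)
                .andRoute(GET("/"), handler::getAllProductsFromRepository)
                .andRoute(POST("/"), handler::saveProductToRepository);
    }

    public void startTomcatServer() throws LifecycleException {
        // expose the RouterFunction to Tomcat as a generic HttpHandler
        HttpHandler httpHandler = RouterFunctions.toHttpHandler(routingFunction());
        Servlet servlet = new ServletHttpHandlerAdapter(httpHandler);

        Tomcat tomcat = new Tomcat();
        tomcat.setHostname(HOST);
        tomcat.setPort(PORT);
        Context context = tomcat.addContext("", System.getProperty("java.io.tmpdir"));
        Tomcat.addServlet(context, "httpHandlerServlet", servlet);
        context.addServletMapping("/", "httpHandlerServlet");
        tomcat.start();
    }

    public static void main(String[] args) throws Exception {
        new Server().startTomcatServer();
    }
}
```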


In this Server class:

  • A ProductHandler is created, initialized with a ProductRepository.
  • A RouterFunction is constructed and returned. In Spring Reactive Web, you can relate a RouterFunction to the @RequestMapping annotation. A RouterFunction is used for routing incoming requests to handler functions. In the Server class, incoming GET requests to /{id} and / are routed to the getProductFromRepository and getAllProductsFromRepository handler functions respectively, while incoming POST requests to / are routed to the saveProductToRepository handler function.
  • The startTomcatServer() method integrates the RouterFunction into Tomcat as a generic HttpHandler.
  • Tomcat is initialized with a host name, port number, context path, and a servlet mapping.
  • Finally, Tomcat is started by calling the start() method.

The output on executing the Server class is this.
Output of Tomcat
To use Netty instead of Tomcat, use this code:
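A sketch of the Netty variant (this uses the newHandler() API from the Reactor Netty releases of that era; the API has since changed):

```java
// RouterFunction -> generic HttpHandler -> Reactor Netty adapter
HttpHandler httpHandler = RouterFunctions.toHttpHandler(routingFunction());
ReactorHttpHandlerAdapter adapter = new ReactorHttpHandlerAdapter(httpHandler);

// start a Netty server on the same host and port, and block until shutdown
HttpServer server = HttpServer.create(HOST, PORT);
server.newHandler(adapter).block();
```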

The Client

Spring Framework 5 adds a new reactive WebClient in addition to the existing RestTemplate. The new WebClient deserves a post on its own.

To keep this post simple and limited to only accessing our reactive Web application, I will use ExchangeFunction – a simple alternative to WebClient. ExchangeFunction represents a function that exchanges a client request for a (delayed) client response.

The code of the client class, named ReactiveClient, is this.


In the ReactiveClient class, Line 21 calls the ExchangeFunctions.create() method passing a ReactorClientHttpConnector, which is an abstraction over HTTP clients to connect the client to the server. The create() method returns an ExchangeFunction.

In the createProduct() method of the ReactiveClient class, Line 30 – Line 31 builds a ClientRequest that posts a Product object to a URL represented by the URI object. Then Line 32 calls the exchange(request) method to exchange the given request for a response Mono.

In the getAllProducts() method, Line 37 starts an exchange to send a GET request to get all products.

The response body is converted into a Flux and printed to the console.
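Since the client listing is not shown here either, the following is a hedged sketch of such a client. The Product constructor and URL are hypothetical; the exchange-related class names are from the Spring 5 reactive client API described above.

```java
import java.net.URI;

import org.springframework.http.HttpMethod;
import org.springframework.http.client.reactive.ReactorClientHttpConnector;
import org.springframework.web.reactive.function.BodyInserters;
import org.springframework.web.reactive.function.client.ClientRequest;
import org.springframework.web.reactive.function.client.ExchangeFunction;
import org.springframework.web.reactive.function.client.ExchangeFunctions;

public class ReactiveClient {

    public static void main(String[] args) {
        // ReactorClientHttpConnector is an abstraction over HTTP clients
        ExchangeFunction exchange =
                ExchangeFunctions.create(new ReactorClientHttpConnector());
        URI uri = URI.create("http://localhost:8080/");

        // POST a Product to the server (hypothetical Product constructor)
        ClientRequest createRequest = ClientRequest.method(HttpMethod.POST, uri)
                .body(BodyInserters.fromObject(new Product("ACME Widget")))
                .build();
        exchange.exchange(createRequest).block();

        // GET all products; convert the response body to a Flux and print it
        ClientRequest listRequest = ClientRequest.method(HttpMethod.GET, uri).build();
        exchange.exchange(listRequest)
                .flatMapMany(response -> response.bodyToFlux(Product.class))
                .doOnNext(System.out::println)
                .blockLast();
    }
}
```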

With Tomcat running, the output on running the ReactiveClient class is:
Output of Reactive Web Client


In this post, I showed you a very simple example of the new web reactive features inside of Spring Framework 5.

While the reactive programming features in Spring Framework 5 are certainly fun to use, what I’m finding even more fun is the functional programming style of the new Spring Framework 5 APIs.

Consider the configuration of the web reactive server:

This functional style is a significant change from what we’ve become accustomed to in Spring MVC.

Don’t worry, Spring MVC is still alive and well. And even when using the Reactive features in Spring Framework 5, you can still define ‘controllers’ in the traditional declarative sense.

And maybe traditional monolithic applications will continue to declare controllers using traditional approaches?

Where I expect the functional style to really shine is in the realm of microservices. This new functional style makes it crazy easy to define small, targeted services.

I’m looking forward to seeing how the Spring community adopts the functional API, and seeing how it evolves.


This week one of my students in my Spring Core course ran into an issue with how Spring was performing dependency injection. By default, the Spring Framework will perform dependency injection by type. This generally works fine, since you often will have only one bean in the Spring context for a given type. But this is not always the case.

When you do have more than one bean of a given type, you need to tell Spring which bean you wish it to use for dependency injection. If you fail to do so, Spring will throw a NoUniqueBeanDefinitionException exception, which means there’s more than one bean which would fulfill the requirement.

There are two simple ways you can resolve the NoUniqueBeanDefinitionException exception in Spring. You can use the @Primary annotation, which tells Spring, when all other things are equal, to select the primary bean over other instances of that type for the autowire requirement.

The second way is to use the @Qualifier annotation. Through the use of this annotation, you can give Spring hints about the name of the bean you want to use. By default, the reference name of the bean is typically the class name with the first letter in lower case.
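To make the two options concrete, here is a small hedged sketch. The GreetingService types and bean names are hypothetical, not from the course code, and each type would live in its own source file.

```java
// (each type below would normally live in its own source file)
public interface GreetingService {
    String greet();
}

// With two beans of the same type, @Primary tells Spring which one
// wins when autowiring by type
@Service
@Primary
class EnglishGreetingService implements GreetingService {
    public String greet() { return "Hello"; }
}

@Service
class SpanishGreetingService implements GreetingService {
    public String greet() { return "Hola"; }
}

// Alternatively, @Qualifier selects a specific bean by name
@Controller
class GreetingController {

    private final GreetingService greetingService;

    @Autowired
    GreetingController(@Qualifier("spanishGreetingService") GreetingService greetingService) {
        this.greetingService = greetingService;
    }
}
```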

In the video below, I go through the dependency injection example used in my Spring Core course, and show you how to modify it to get the NoUniqueBeanDefinitionException. I then walk through first using the @Primary annotation to give preference to one bean over the other, and then using the @Qualifier annotation to specifically select which instance of the bean gets wired into my classes.

While the Spring Framework does perform dependency injection by type by default, it does offer you a great deal of control over how beans are autowired.


Samy is my Hero

A few months ago Tim Ferriss interviewed Samy Kamkar on his podcast. Samy’s claim to fame is being the author of the MySpace Samy worm, a worm that infected over a million MySpace accounts in just 20 hours. MySpace actually shut down because of the worm.

Hearing Samy’s version of the story is absolutely hilarious. Samy is a hacker. He loves to see how things work. Samy tells Tim that he didn’t set out to create the fastest spreading virus of all time. He saw an exploit in the MySpace code that would allow him to add Javascript code to his profile to add the string “but most of all, Samy is my Hero” to anyone’s MySpace profile that visited his MySpace page, and have them add Samy as their friend.

But Samy was bored with that. He wanted more friends on MySpace. Through his hacking skills, he found a way to add that same script to anyone’s MySpace page that visited his MySpace page. Now anyone who visited someone who had been to Samy’s MySpace page was infected. Anyone visiting an infected MySpace profile would add Samy as their friend on MySpace, add “but most of all, Samy is my Hero” to their profile, and they would also be infected with the Samy worm.

The results were exponential. 5, 10, 30, 80, 1,000, 5,000, 10,000, etc. Every time Samy refreshed his MySpace page he had more friends, and the rate was growing. Before MySpace crashed, I think Samy said the rate was tens of thousands – per second!

Samy is my Hero
Yes, people even made Tshirts. (Samy in the middle)

While I think the exploit was hilarious and relatively harmless, the government did not. Eight months later Samy was raided by the US Secret Service and charged with crimes under the Patriot Act. Samy’s punishment: for 3 years, he was not allowed to use a computer.

Since then Samy has continued hacking, but in a good way. He’s more of a white hat hacker now. Samy is also the author of Evercookie – an impossible-to-delete cookie for tracking internet users. A technology the NSA is a fan of. Both of these exploits drove awareness and changes. The Samy Worm was an XSS exploit, which is now common to defend against. And Evercookie drove privacy changes in all major browsers.

I love Samy’s passion for hacking. Today he’s hacking keyless car FOBs and consumer drones. The more expensive the car, the easier it is to hack. And did you know you can take over someone else’s drone?

Hacking Spring Boot Autoconfiguration

All programmers are hackers to some degree. We love to figure out how stuff works. So whenever I start hacking something, I often find myself thinking “Samy is my hero.”

This week I’ve been hacking the Spring Boot autoconfiguration. I developed a Spring Boot web application for my Spring Core course, and in my Spring Core Advanced course, I’m un-doing all the magic of Spring Boot autoconfiguration. I’ve been spending hours going through the Spring Boot autoconfiguration code developed by the Spring Boot team: Phillip Webb, Dave Syer, Josh Long, Stéphane Nicoll, Rob Winch, Andy Wilkinson, Marcel Overdijk, Christian Dupuis, Sébastien Deleuze.

The Spring Boot documentation is fairly decent about explaining what is being autoconfigured at a high level. But the documentation does not get down to specifics. My goal is to un-do all the Spring Boot autoconfiguration magic. Ultimately I’m going to remove Spring Boot entirely from my project. It’s not because I don’t like Spring Boot. I’m a total Spring Boot fanboy. Spring Boot is the most exciting thing to happen to Spring since Java annotations. (Really, who misses the XML hell of configuring a Spring / Hibernate project??? Anyone? Bueller? Bueller?)

No, I’m going through this exercise to show my students the “old” days of Spring application development. To be honest, I’m also gaining a better appreciation for all the things Spring Boot is doing for us. I’ve been using Spring Boot long enough that I was blissfully forgetting what Spring application development was like before Spring Boot.

But not everyone is as lucky as me. I know there are many Spring developers out there wishing they could be using Spring Boot. And a fair amount that are scared of Spring Boot.

And if you’re a Spring developer wondering what Spring Boot is –

picard spring boot face palm

After peeking under the covers of Spring Boot Autoconfiguration, I do have to give kudos to the Spring development team. They’ve been doing a real nice job. Overall, I’m impressed. There is a lot going on with Spring Boot autoconfiguration. There is a lot of conditional stuff happening. Much of it isn’t trivial either. Autoconfiguration of Hibernate, popular databases, and Spring Security? Yep, it’s in there.

Much of Spring Boot’s autoconfiguration is conditional. It only kicks in when the appropriate jars are on your classpath. And typically, key properties can be easily overridden via properties files.

I thought I’d share a bit of what I’ve found in my hacking adventures with Spring Boot. After all, Samy is my hero.

Hitchhiker’s Guide to Spring Boot Autoconfiguration

Spring Boot Autoconfiguration Classes

To my knowledge all the Spring Boot Autoconfiguration classes are in a single jar. Below is the Maven dependency for Spring Boot Autoconfiguration. Don’t worry, this jar is automatically included as a dependency of Spring Boot. I’m just pointing it out so you can easily hack it with your tool of choice. (IntelliJ for me)
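For reference, the dependency looks like this (the version shown is from the 1.3.x line discussed in this post):

```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-autoconfigure</artifactId>
    <version>1.3.1.RELEASE</version>
</dependency>
```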


Inside this jar is a collection of Spring Java configuration classes. These are the classes behind the autoconfiguration in Spring Boot.

Key Spring Boot Autoconfiguration Annotations


There’s a Java annotation in the Spring configuration which I was not familiar with, called @ConditionalOnClass. In a nutshell, this is what kicks in the Spring Boot autoconfiguration: if the specified classes are found on the classpath, then do the autoconfiguration.
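A hedged illustration of the annotation in use. The configuration class here is hypothetical; the conditional class name mirrors the Thymeleaf example later in this post.

```java
// This configuration only applies when Thymeleaf's Spring support
// is present on the classpath
@Configuration
@ConditionalOnClass(name = "org.thymeleaf.spring4.SpringTemplateEngine")
public class MyTemplateSupportConfig {

    // Beans declared here are only created when the condition matches
}
```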


This is an annotation used to specify properties. If you remember, Spring Boot autoconfiguration allows you to override properties via Spring Boot properties files. Through this annotation, if a property has not been set in the environment, a value can be specified.


In Spring Boot, you can supply a bean through normal Spring configuration. I have an example of this in my post on Configuring Spring Boot for Oracle. In that post I show you how to override the Spring Boot data source using just properties, or by creating a DataSource bean in a Spring Java configuration class. The first is by properties, i.e. @ConditionalOnProperty. The second is by bean type, i.e. @ConditionalOnMissingBean.

With the @ConditionalOnMissingBean  annotation, the configuration option will only kick in if the bean is not already contained in the Spring bean factory.
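For example, a user-supplied DataSource bean like the following sketch causes the @ConditionalOnMissingBean-guarded autoconfiguration to back off. The connection values here are purely illustrative.

```java
@Configuration
public class DataSourceConfig {

    // Because a DataSource bean now exists in the bean factory,
    // Spring Boot's DataSource autoconfiguration will not kick in
    @Bean
    public DataSource dataSource() {
        DriverManagerDataSource dataSource = new DriverManagerDataSource();
        dataSource.setDriverClassName("oracle.jdbc.OracleDriver");
        dataSource.setUrl("jdbc:oracle:thin:@localhost:1521:xe");
        dataSource.setUsername("scott");
        dataSource.setPassword("tiger");
        return dataSource;
    }
}
```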

Hacking the Spring Boot Autoconfiguration for Thymeleaf

Spring Boot Default

As an example, let’s take a look at hacking the Thymeleaf autoconfiguration of Spring Boot.

Here we can see the above Spring Boot autoconfiguration annotations in use as it applies to the autoconfiguration of Thymeleaf.


As of version 1.3.1 of Spring Boot.


Overriding Spring Boot

Now, here’s the implementation I did for my Spring Core Advanced class.

Now, don’t get on me about hard coding properties. I know, BAD DEVELOPER! My students haven’t learned about externalizing properties – yet.

In a nutshell, I’ve provided the Thymeleaf objects needed to configure Thymeleaf for use with Spring MVC. In doing so, the Spring Boot autoconfiguration won’t kick in (due to the @ConditionalOnMissingBean  in the Spring Boot default autoconfiguration class).
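The listing itself is not reproduced here, but the override looks roughly like this. This is a sketch using the Thymeleaf 2.x class names of the Spring Boot 1.3 era (they differ in later Thymeleaf versions), with the properties hard coded as admitted above.

```java
@Configuration
public class ThymeleafConfig {

    @Bean
    public ServletContextTemplateResolver templateResolver() {
        ServletContextTemplateResolver resolver = new ServletContextTemplateResolver();
        // properties hard coded for the course example
        resolver.setPrefix("/WEB-INF/templates/");
        resolver.setSuffix(".html");
        resolver.setTemplateMode("HTML5");
        resolver.setCacheable(false);
        return resolver;
    }

    @Bean
    public SpringTemplateEngine templateEngine() {
        SpringTemplateEngine engine = new SpringTemplateEngine();
        engine.setTemplateResolver(templateResolver());
        return engine;
    }

    // Supplying these beans means Spring Boot's @ConditionalOnMissingBean
    // guarded Thymeleaf autoconfiguration backs off
    @Bean
    public ThymeleafViewResolver viewResolver() {
        ThymeleafViewResolver resolver = new ThymeleafViewResolver();
        resolver.setTemplateEngine(templateEngine());
        return resolver;
    }
}
```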


Spring Boot autoconfiguration is a very cool feature of Spring Boot. As Spring developers, it saves us a ton of time configuring our Spring projects. But Spring Boot autoconfiguration is a double-edged sword. With sensible defaults come defaults, and defaults open the door to hacking. I remember in the early days of Oracle, every Oracle database came with the account SCOTT, with the password TIGER. You also had the equivalent of a root account (aka god account) of SYSTEM, default password ‘manager’. Production Oracle databases were getting hacked because someone forgot to change the password of SYSTEM from ‘manager’.

Spring Boot autoconfiguration is saving us Spring developers a TON of time. But don’t use that as an excuse to be lazy. Put your hacker hat on. Take a peek at what Spring Boot autoconfiguration is doing for you. Get familiar with it. Spring Boot should not be magical. Spring Boot should not be a black box. This is exactly why I’m going through the exercise of removing Spring Boot from a project for my students. I feel they will be better Spring developers if Spring Boot is not a mystery to them.

I encourage you to hack the Spring Boot autoconfiguration. And when you do, say to yourself –

“but most of all, Samy is my hero”

Free Introduction to Spring Tutorial

Are you new to the Spring Framework? Checkout my Free Introduction to Spring Online Tutorial.

The latest TIOBE index has the Java language moving strongly into the #1 programming language spot for January 2016. If you’re not familiar with the TIOBE Index, it’s an index that looks at searches on the major search engines, blogs, forums, and YouTube. (Did you know YouTube is now the second biggest search engine?) The “Popularity of Programming Language” index, which uses a slightly different approach, also has Java remaining at the #1 position for January 2016. Both indexes give Java over 20% of the market.

The Java Language Into the Future

I’ve read a lot of articles predicting the demise of the Java language. I don’t see that happening anytime soon. The Java language continues to evolve with the times. Java 7 was a fairly boring release. Java 8, however, has a number of exciting features. Java 8 lambdas are a really neat new feature to Java. It’s a feature that is long overdue. But I have to give kudos to the Java team. They did a real nice job of implementing lambdas.

It’s these new features that allow Java to evolve and remain relevant alongside modern programming languages. Functional programming has been a big buzz in the last few years. Guess what: with Java 8 and lambdas, you can do functional programming in Java now.

The JVM is the crown jewel of the Java community. With each release, the JVM becomes more stable and faster. Early releases of Java were dreadfully slow. Today, Java often approaches the performance of native code.

Another fun trend in the Java community is the rise of alternative JVM languages. That same JVM runs more than just Java. My personal favorite alternative JVM languages are Groovy and Scala. Both are trending nicely in the programming indexes. And you’re seeing greater support for Groovy and Scala in Spring too. (Expect to see more posts on both in 2016!) If you account for these two alternative JVM languages, you can see how Java is truly dominating the Microsoft languages in the marketplace.

It’s going to be interesting to see what the future holds. I’m personally interested in the Swift programming language. Could Swift someday dethrone Java from the #1 spot? I think that’s going to depend on how the Swift open source community develops. I thought about building an enterprise class application in Swift. Is there a DI / IoC framework like Spring for Swift? No, not yet. An ORM like Hibernate for Swift? No, not yet. An Enterprise Integration framework like Spring Integration or Apache Camel for Swift? No, not yet. I find languages like Swift and Go very interesting, but they just don’t have the open source ecosystem which Java has. If a language is going to dethrone Java from the top, it’s going to need a thriving open source community behind it.

Like Java, all the popular open source projects continue to evolve with the language. So any calls for the demise of Java are premature. The future for the Java language is bright. The future for the Java community is bright!

java duke in shades


Embedded JPA Entities are nothing new to the JPA standard. By defining Embedded JPA Entities, you can define a common data type for your application. Unlike regular JPA Entities, which generally follow a table-per-entity mapping strategy, Embedded JPA Entities are stored as additional columns in the underlying relational database table.

If you’re using Hibernate as your JPA provider under Spring Boot, and allowing Hibernate to generate the DDL for the database using the default Hibernate naming strategy provided in the default Spring Boot autoconfiguration, you may encounter exceptions when using more than one Embedded JPA Entity property in a parent JPA Entity. The Spring Boot default Hibernate naming strategy does not support this. (As of Spring Boot 1.3.0)

I was coding an example for my Spring Core online course, when I ran into this problem.

Here is the exception I received when starting up the embedded Spring Boot Tomcat container.

In a nutshell, the Hibernate Mapping Exception is caused by non-unique column names mapped to the properties of the Embedded JPA Entities.

One solution to this issue could be to use the @AttributeOverride annotation to manually provide unique column names for my Embedded JPA Entities. But, looking at the examples in the Oracle documentation, this becomes kind of an eyesore of annotations on my classes.

AttributeOverride Annotation Example
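The example listing is not reproduced here; the following hedged sketch (hypothetical Customer and Address types) shows why the annotations pile up:

```java
@Entity
public class Customer {

    @Id
    @GeneratedValue
    private Long id;

    @Embedded
    private Address billingAddress;

    // Every overlapping column of the second embedded property
    // needs its own @AttributeOverride
    @Embedded
    @AttributeOverrides({
        @AttributeOverride(name = "addressLine1",
                column = @Column(name = "shipping_address_line_1")),
        @AttributeOverride(name = "city",
                column = @Column(name = "shipping_city")),
        @AttributeOverride(name = "state",
                column = @Column(name = "shipping_state")),
        @AttributeOverride(name = "zipCode",
                column = @Column(name = "shipping_zip_code"))
    })
    private Address shippingAddress;
}
```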

A More Elegant Solution to Support Multiple JPA Embedded Entities

Looking to escape annotation hell, and Google being my friend, I found a solution on Stack Overflow: use Hibernate’s DefaultComponentSafeNamingStrategy. This naming strategy prefixes the column names with the property path of the JPA Embedded Entities, keeping them unique.

Spring Boot by default uses SpringNamingStrategy, which extends Hibernate’s ImprovedNamingStrategy, but adds better support for foreign key naming.

To override the default Spring Boot Hibernate Naming Strategy, you just need to provide the full class name of the Hibernate Naming strategy you wish to use in your Spring Boot application.properties as follows:
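In application.properties (property key as used by Spring Boot 1.3):

```properties
spring.jpa.hibernate.naming_strategy=org.hibernate.cfg.DefaultComponentSafeNamingStrategy
```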


By overriding the default Spring Boot Hibernate naming strategy I was able to reduce the annotation hell of my JPA mappings.

Embedded JPA Entities Under Spring Boot Example

Here is the complete example of Embedded JPA Entities from my Spring Core Course.

Embedded JPA Entity
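The listing is missing here; this is a hedged reconstruction of an embedded type like the one from the course:

```java
// A common data type, stored as columns of the owning entity's table
@Embeddable
public class Address {

    private String addressLine1;
    private String addressLine2;
    private String city;
    private String state;
    private String zipCode;

    // getters and setters omitted for brevity
}
```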


JPA Entity
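And a hedged sketch of a parent entity with two embedded Address properties – the case the default naming strategy could not handle:

```java
@Entity
public class Customer {

    @Id
    @GeneratedValue
    private Long id;

    // Two properties of the same embedded type; with the default naming
    // strategy their columns collide, with DefaultComponentSafeNamingStrategy
    // they are prefixed by the property path and stay unique
    @Embedded
    private Address billingAddress;

    @Embedded
    private Address shippingAddress;
}
```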


Spring Boot Configuration

  • Spring Boot 1.3.0.RELEASE
  • H2 Database (Embedded using Spring Boot defaults)
  • Hibernate 4.3.11 (via Spring Boot Starter Data JPA Maven dependency)


Resulting Database Table

h2 database table with embedded jpa entities


I personally feel JPA has enough annotations to make it work. Some days it feels like we traded XML hell for annotation hell. (The more I work with JPA, the more I miss working with GORM in Grails.)

I’m glad I found this cleaner solution for supporting multiple Embedded JPA Entities without wearing out my fingers typing annotations. Hope this helps you along the way!


In the 1.3.0 release of Spring Boot, a new module is available called Spring Boot Developer Tools. This new Spring Boot module is aimed at improving developer productivity in building Spring web applications.

When you’re developing a web application in Java, or really any programming language, a common workflow is to code, compile, deploy, and then test in the browser. In scripting languages, such as PHP, there is no compile / deploy phase. The script is evaluated by the server at run time, thus negating the need for a compile / deploy phase.

In the world of Java web development, we don’t have this luxury. Our Java code is compiled down to Java byte code, then deployed to an application server such as Tomcat. The compile, deploy, test phase is a common step in the process of writing software. The longer it takes, the greater the impact it has on your productivity. I’ve seen this cycle take anywhere from just a few seconds to 30 minutes. Yes, 30 minutes! (It was a highly coupled legacy application from the early 90s – one of the most effing awful developer experiences I’ve ever encountered!)

For a long time, the Grails community has enjoyed the benefits of automatic class reloading. It’s such a pleasure coding a Java application, and only needing to save your file to have the code automatically reload in the Tomcat container – nearly instantly. This is one of the features that drew me to web development with Grails.

This feature has been missing from web development with Spring MVC for a long time. You could use a 3rd party tool such as jRebel, but at $475 annually for a license, it’s an expensive option for those coding outside the enterprise.

In the world of web development with just Spring MVC, this new feature available in Spring Boot Developer Tools has been long overdue. Way way overdue!

Reloading vs Restarting

The reloading agent from Grails is now its own project, called Spring Loaded. It takes a slightly different, but important, approach from the one used in Spring Boot Developer Tools. In reloading, the agent reloads the Java class in the JVM. This avoids the need to restart the Tomcat container and Spring context, but has some drawbacks. It works great for coding changes within the class itself. But change the package, or add a new class / Spring bean, and you still need to restart.

Spring Boot Developer Tools takes a different approach: it does a restart, not a reload. BUT – under the covers, it is using two class loaders. One for all the jar classes in your project, and one for your project classes. Thus on a ‘restart’, only the project classes are reloaded. The tens of thousands of classes contained in jar files in your typical Java Spring project are not reloaded. By doing this, restarting Tomcat and the Spring context becomes VERY fast. And since the Spring context is being restarted, it addresses the issues found with the approach used in Spring Loaded.

Use with Build Tools

The automatic restart is triggered when changes on the classpath are detected. Thus, if you build with Maven or Gradle, class files in the target directory will change and an automatic restart will be triggered.
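For reference, enabling the Developer Tools is just a dependency in your build, e.g. with Maven (marking it optional is the convention, so it is not carried into downstream builds):

```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-devtools</artifactId>
    <optional>true</optional>
</dependency>
```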

Use with IDEs

IntelliJ and Eclipse are the two most popular IDEs for Java development. There are some notable differences in use between the two IDEs.

Eclipse is the foundation for the Spring Tool Suite (aka STS). Development of the Spring Boot Developer Tools seems biased towards STS. Which is to be expected. Both are Pivotal products. An automatic restart in Eclipse is triggered with the save action. In Eclipse, this triggers a recompile of the changed classes, which triggers the automatic restart.

With IntelliJ the process is slightly different. IntelliJ does not recompile on save, but unlike Eclipse, it does perform automatic file saves for you. IntelliJ can be configured to compile on save, but this gets disabled when an application is running. Thus in IntelliJ, you need to trigger the build manually, which will in turn fire off the automatic restart. So with the extra step, the developer experience in IntelliJ is not quite as smooth.

I prefer the developer experience with Spring Loaded, where the changes made to your *.java files will trigger the automatic restart/reload. If the Spring Boot Developer Tools had been developed the same way, the developer experience in both IDEs would be the same. Maybe the team developing the Developer Tools had a technical reason for this. Or maybe it was a choice by Pivotal to promote STS on the Eclipse platform.

This is inconvenient, but I’m not changing IDEs. The last time I tried STS, it was awful. Randomly hanging, or crashing. You just get what you pay for IMHO.

Live Reload

Another cool feature of the Spring Boot Developer Tools is the Live Reload integration. Live Reload is a browser plugin, which will trigger a page reload upon changes to the source. Thus when you change web content, the page in the browser will automatically refresh. Small improvement. But it is nice not clicking refresh in the browser all the time.

Live Reload is advertised to work for Firefox, Chrome, and Safari. I was unable to get the plugin working in Firefox – it may be currently broken with Firefox. I did get Live Reload working fine with Chrome. I did not try using it with Safari.

Free Introduction to Spring Tutorial

Are you new to the Spring Framework? Checkout my Free Introduction to Spring Online Tutorial.

Remote Development and Debug

The Spring Boot Developer Tools includes support of doing remote development and debugging. You can configure automatic restarts and debugging to a remote server. Kind of a cool thing to do. But I’m not sure where I would personally ever use this feature. The folks from Pivotal have a little demonstration of this towards the end of this video.

Demonstration of Spring Boot Developer Tools

I’ve described how Spring Boot Developer Tools can improve the development workflow and improve your productivity. But seeing a demonstration is far more effective. I recorded this video to show you the Developer Tools in action.


The Spring Boot Developer Tools module brings some great (and long overdue) features to developing applications with Spring. The automatic restart feature will have a positive impact on your productivity in developing web applications. Use of Developer Tools will change how you develop applications, in a good way. This is the initial release, and the time I’ve spent using the Developer Tools has been short. I saw one quirky thing, but not a show stopper. My impression is the developer tools module is ready for production use. If you’re using Spring Boot to develop web applications, it’s time to upgrade to the 1.3.0 release.


At the heart of the Spring Framework is its support of dependency injection through its Inversion of Control container. In this video, I look at using some of the advanced autowire features of Spring.


By default, Spring will autowire by type. When you have more than one Spring Bean of a given type, you can use the @Primary annotation to give a specific bean preference over the others. If Spring cannot determine which bean should be wired by type when more than one is defined, the Spring context will fail on startup with an org.springframework.beans.factory.NoUniqueBeanDefinitionException  exception.


You also have the option of using the @Qualifier annotation in conjunction with the @Autowired annotation to control how beans are autowired in Spring. While the default behavior of Spring is to autowire by type, the @Qualifier annotation allows you to specify the id or name of the bean you wish autowired into the bean.


The following video demonstration is a module from my Spring Core online course. In this video, I show you how to work with Spring’s autowire by type. Then show you how to fine tune Spring’s Autowire functionality through the use of Spring Profiles, the @Primary annotation, and the @Qualifier annotation.

While this is a very simple demonstration, I hope you can see the amount of control you have when you are configuring Spring to autowire beans in your application. It’s not uncommon to be dealing with more than one datasource. Naturally, you’re going to wish to have control over which datasource is autowired into your bean.
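A hedged sketch of that scenario (the bean and class names are hypothetical):

```java
@Component
public class OrderRepository {

    private final DataSource dataSource;

    // Two DataSource beans exist in the context; @Qualifier picks
    // the one named "orderDataSource" for this class
    @Autowired
    public OrderRepository(@Qualifier("orderDataSource") DataSource dataSource) {
        this.dataSource = dataSource;
    }
}
```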


I was recently asked on my Facebook page, “How do I become a Java Web Developer?” There is really no simple answer to this question. There are many facets to becoming a Java web developer. I’ve encountered Java developers who were good front end developers, or good backend developers. By ‘front end’, I mean more of the browser side technologies – HTML, CSS, Javascript and then Java templating technologies such as Thymeleaf, Sitemesh, or just good old JSPs. Backend developers are going to have stronger skills with Java, databases (SQL and NoSQL), messaging (JMS/AQMP), and web services (SOAP / REST).

You also have what is known as a “full stack” Java developer. This is my personal skill set. A full stack developer is equally skilled as a front end developer and as a back end developer. This is probably the most difficult track to follow, just because of the diversity of technologies involved. One day you might be debugging something in JQuery, and the next you’re performance tuning an Oracle database query. It takes time and experience to become a full stack Java developer.

Where to Start?

For aspiring developers, the technology landscape can be overwhelming. The technology landscape is always evolving too. Do you risk learning something that will soon be obsolete?

Client Side Technologies

My advice to new developers is to start with the basics. HTML, CSS, and Javascript. These technologies are core to web development. These technologies are also generic in the sense that it does not matter if you’re a Java web developer, or a Ruby web developer.


HTML – Hypertext Markup Language. This is what makes a web page. You need to have a solid understanding of HTML. Back in the beginning of the World Wide Web, HTML was traditionally a file that was served by a web server to the browser. This worked great for static content. Stuff that never changed. But this is becoming rare. People want dynamic content. Thus, the HTML is no longer a static file, the HTML is generated on demand. As a Java web developer you’re going to be writing code that generates the HTML document for the web browser. You will need to have a solid understanding of the structure of an HTML document.


CSS – Cascading Style Sheets. This is what styles a page. It controls the fonts, the colors, the layout. While HTML defines the content of a web page, CSS defines how it looks when presented in a browser. For example, you may use one set of CSS rules for a desktop web application, and a different set of CSS rules for a mobile application. Same HTML, but two completely different looks when rendered by the browser.


Javascript – Do stuff on the web page. Do not confuse Javascript with Java. While there are some syntax similarities, these are two completely different programming languages. Javascript is what really drives Web 2.0 applications. Through the use of Javascript, you can dynamically change the HTML/CSS based on user actions, giving the web page a more application like feel for the user.


Hypertext Transfer Protocol – The communication between the client and the web server. I see too many web developers who do not understand HTTP. This is absolutely critical for you to understand. Especially as you get into working with AJAX. You need to know the difference between a POST and a GET. You should have memorized the meanings of HTTP status codes 200, 301, and 404 – and more. As a Java web developer you will work with HTTP everyday.

Server Side Technologies


Java – The question is how to become a Java web developer. So, of course, you are going to need to know the Java programming language. In addition to just Java itself, you should be familiar with the Java Servlet API. There are a number of Java web frameworks which obscure the usage of the Java Servlet API. When things go wrong, you’re going to need to know what’s happening under the covers.


JPA – Java Persistence API – Using the database. JPA is the standard for working with traditional relational databases in Java. Hibernate is the most popular JPA implementation in use today. As a Java web developer, you’re going to be working with databases. You’ll be getting content from the database to display on a web page, or receiving content from the user to store in the database. Java web developers need to know how to use JPA.

Java Application Servers

Java Application Servers – The runtime container for Java web applications. Tomcat is, by far, the most popular Java application server. There is a Java standard for a Web Application Archive file – aka WAR file. These are deployed to Application servers such as Tomcat to provide the runtime environment for your web application. A decade ago, the trend was to use a more complex coupling between your application and the application server. However, the current trend is in favor of a looser coupling between your application and the application server.

Java Frameworks

Notice that so far I have not mentioned anything about the plethora of Java frameworks available for you to use. So far, I’ve described the different technologies you will use as a Java web developer. The client side technologies are completely independent of the server side technologies. Firefox doesn’t care if the server is running Java, Python or .NET. New developers often seem to forget this.

It is possible to do Java web development without using one of the Java frameworks. If you do, you will be writing a lot of code to handle things which a framework would take care of for you. That is why, when developing Java web applications, you typically will want to use one of the frameworks.

Spring Framework

The Spring Framework is an outstanding collection of tools for building large scale web applications. Exact metrics are hard to determine, but I’ve seen some estimates which say Spring is used in over 60% of Java based web applications. Which really isn’t too surprising. You have the IoC container and dependency injection from Spring Core. Spring MVC, a mature and flexible MVC based web framework. Spring Security, best in class t