This repository was archived by the owner on Jul 26, 2022. It is now read-only.

JPA Spike

massfords edited this page Feb 28, 2015 · 9 revisions

Spring

Spring is used to assemble the application and perform basic dependency injection. For the most part, the declarative @Autowired / @Inject annotations are avoided in production code. Instead, the beans in the XML files use the constructor-arg syntax so that each bean is created with all of its dependencies passed into its constructor. This makes for more configuration, but it's much more explicit and less like dark magic.
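As a sketch, a bean definition using the constructor-arg syntax might look like the following (the bean and class names here are illustrative, not copied from the project):

```xml
<bean id="plateTemplateService" class="com.example.PlateTemplateServiceImpl">
    <!-- every dependency is passed explicitly through the constructor -->
    <constructor-arg ref="plateTemplateStorage"/>
</bean>
```

The trade-off is more XML, but a missing dependency fails fast at container startup rather than surfacing as a null field at runtime.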

The one exception to this is the standard JPA annotation: @PersistenceContext. This is used on the Storage classes to tell the container to inject an instance of an EntityManager. Spring does a nice job of providing a proxy EntityManager that respects the Transactional annotations for calls.

For example:

public class PlateTemplateStorageImpl implements PlateTemplateStorage {
    @PersistenceContext
    private EntityManager em;
    ...
}

JPA

The domain entities are annotated with the standard JPA annotations. So far I have not needed any Hibernate specific annotations. Consider the Well class below:

@Entity

Tags the bean as an entity. By default, there will be a single table named "Well" that contains these entities. JPA favors convention over configuration to a limited extent. In many cases the default behavior is reasonable, but relationships usually require at least some hint to the entity manager.

@Id

Each entity requires a primary key. In this case we're using a Long (as opposed to a long). Also note that there's an additional annotation indicating that the value is an AUTO-INCREMENT type (or whatever the underlying DB offers). The choice of a Long instead of a long is in order to have a null in the case of a non-persisted entity. Passing an entity to the EntityManager w/o an id tells it to INSERT it into the db. The entity will then get a value for the id.

@Embedded

Each well has a row and col that make up its coordinate. As expected, these are modeled in an RDBMS as simple columns for a row and column. However, it's nicer to have a cleaner model in Java since those two fields go together, and there could be other opportunities for specifying them together or perhaps using the row and col as an identifier for the well. Consider that a Plate should only have a single Well with a coordinate of (0,0).
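The Coordinate class itself isn't shown on this page; a minimal sketch of what it might look like as a JPA embeddable is:

```java
@Embeddable
public class Coordinate {

    private int row;
    private int col;

    // JPA requires a no-arg constructor; protected keeps it out of the API
    protected Coordinate() {}

    public Coordinate(int row, int col) {
        this.row = row;
        this.col = col;
    }

    // getters plus equals/hashCode omitted for brevity
}
```

Because the class is @Embeddable, its fields collapse into the owning Well row as plain columns, which matches the row/col columns in the H2 DDL shown below.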

@NotNull

This is a javax.validation.constraints annotation. JPA supports the basic validation constraints like NotNull, Min, Max, and Size. These annotations are useful for validating the beans as they enter the system and again when they enter the persistence layer. The underlying JPA provider (Hibernate) will perform the validation either during the INSERT/UPDATE or include an integrity check in the generation of the DDL.
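The same constraints can also be exercised programmatically before the bean ever reaches the persistence layer; a minimal sketch using the standard javax.validation bootstrap (variable names here are illustrative):

```java
Validator validator = Validation.buildDefaultValidatorFactory().getValidator();
Set<ConstraintViolation<Well>> violations = validator.validate(well);
// a Well with a null type would show up as a violation here,
// before Hibernate ever attempts the INSERT
```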

@OneToMany

Models the relationship between a Well and its Doses. A well can have multiple Doses in it: a one:many relationship. These annotations also support cascading operations such that inserting a Well can automatically cascade and insert its Doses. Similarly, removing a Well from the system should also automatically remove all of its referenced Doses. Finally, the orphan removal attribute on this annotation ensures that if a Well is updated and no longer points to a previously inserted Dose, then that Dose is removed.

@Entity
public class Well extends BaseEntity {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    @Embedded
    private Coordinate coordinate;

    private String label;

    @NotNull
    private WellType type;

    @OneToMany(orphanRemoval = true, cascade = CascadeType.ALL)
    private Set<Dose> contents = new LinkedHashSet<>();

    public Well() {}

    public Well(int row, int col) {
        this.coordinate = new Coordinate(row, col);
    }
    ...
}

Corresponding H2 DDL for the Well:

create table Well (
    id bigint generated by default as identity,
    col integer,
    row integer,
    label varchar(255),
    type integer not null,
    primary key (id)
);

DDL Generation

The most recent version of JPA provides a Schema Generator in order to construct DDL files for a system. Prior to this update, the realm of schema generation was vendor specific. Hibernate has had a schema generator and corresponding ANT / MAVEN plugin to do this as part of the build for a long time. Unfortunately, it looks like the Hibernate3 plugin never migrated to Hibernate4/5. As a result, I'm using a 3rd party schema generator plugin here as referenced in the we99-ddl pom.xml.

For many systems, it's possible to make it to a production 1.0 deployment w/o ever having to hand craft or even check in a DDL file. Longer term, there are issues of updates, but in the short term, the DDL is rebuilt with each build based on the latest annotations in the entities. Thus, adding a new table, relationship, index, etc, is as simple as an annotation change.
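For reference, the standard JPA 2.1 schema-generation properties look roughly like this in persistence.xml (the file name and action values here are illustrative):

```xml
<properties>
    <!-- write a DDL script as part of EntityManagerFactory creation -->
    <property name="javax.persistence.schema-generation.scripts.action"
              value="create"/>
    <property name="javax.persistence.schema-generation.scripts.create-target"
              value="create.sql"/>
</properties>
```

A build plugin like the one referenced in the we99-ddl pom.xml typically drives the same machinery from Maven so the DDL is regenerated on every build.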

JAX-RS

Java API for RESTful Web Services is the standard set of annotations for making Java classes available as REST services. At this point, the services in this spike are little more than wrappers around their storage classes to perform the basic CRUD operations. We'll likely add more capabilities over time like listing calls or more complex business operations like creating a new Plate from a PlateTemplate.

There is one place where I stray from the standard JAX-RS specification, but for good reason. Consider the following service interface:

@Path("/plateTemplate")
@Consumes(MediaType.APPLICATION_JSON)
@Produces(MediaType.APPLICATION_JSON)
public interface PlateTemplateService {
    /**
     * Creates a new template in our system.
     * @param template
     * @return
     */
    @PUT
    PlateTemplate create(PlateTemplate template);

    /**
     * Gets an existing template or throws an exception with 404
     * @param id
     * @return
     */
    @GET
    @Path("{id}")
    PlateTemplate get(@PathParam("id") Long id);

    /**
     * Updates an existing template or throws an exception with a 404 if not found.
     * @param id
     * @param template
     * @return
     */
    @POST
    @Path("{id}")
    PlateTemplate update(@PathParam("id") Long id, PlateTemplate template);

    /**
     * Deletes an existing template or throws an exception with a 404 if not found
     * @param id
     * @return
     */
    @DELETE
    @Path("{id}")
    Response delete(@PathParam("id") Long id);
}

Annotations

@Path

The path annotation tells the service container what service method to invoke when it receives an HTTP request. This routing works in conjunction with the Path annotation on the top level service like the annotation on the interface above as well as specific methods that may ADD to the path as shown in the get and update methods above. Finally, the HTTP verb is also considered when doing the routing.

@Consumes / @Produces

For our purposes, we're probably only going to work with JSON so specifying the default types of data we'll consume and produce at the top of the interface is sufficient. This can be overridden at the method level. The Content-Type of the request message is also included in the routing so it's important for REST clients to include this header. Also, it's possible to support multiple response types. For example, we could support XML and JSON at the same time simply through an additional MediaType annotation for either Consumes, Produces or both. The format selected by the container for the response will be determined by the caller's HTTP Accept header.
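Supporting both formats on a single interface would look roughly like this (a sketch; the spike's actual interface only declares JSON):

```java
@Path("/plateTemplate")
@Consumes({MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML})
@Produces({MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML})
public interface PlateTemplateService {
    // a caller sending "Accept: application/xml" gets XML back,
    // while "Accept: application/json" gets JSON, from the same methods
}
```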

@PUT, @GET, @POST, @DELETE

These are the HTTP verbs that describe the type of request that the service will respond to. In rare cases, some web servers don't allow the DELETE verb so you may need to fall back and do DELETEs via a POST.

@PathParam (and @QueryParam, not shown)

The PathParam annotation maps a portion of the path to a param on the method call. Thus sending a GET to /plateTemplate/1234 will return the PlateTemplate with the id 1234. The same concept applies to QueryParams except that they do not need to be mentioned in the Path ahead of time.
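A hypothetical listing call using QueryParams might look like the following (this method is not part of the current interface; the parameter names are illustrative):

```java
@GET
List<PlateTemplate> list(@QueryParam("page") @DefaultValue("0") int page,
                         @QueryParam("size") @DefaultValue("25") int size);
```

A request to /plateTemplate?page=1&size=50 would invoke list(1, 50), and /plateTemplate with no query string would fall back to the declared defaults.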

Response Readers / Writers

The service container for all of our REST services is configured to automatically read/write the request payloads and response payloads to Java / JSON via the Jackson library. We don't need to manually marshal values to/from JSON. This all happens declaratively.

Thus, the methods above in the interface all accept simple Java POJO's and respond similarly with a POJO. If a service call only needs to return a status code, then it can return the Response object with the appropriate HTTP status code.
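For example, a delete implementation that only needs to communicate a status code might look like this (a sketch; the storage field and its delete method are hypothetical):

```java
public Response delete(Long id) {
    if (storage.delete(id)) {
        // success with no response body
        return Response.noContent().build();               // 204 No Content
    }
    return Response.status(Response.Status.NOT_FOUND).build(); // 404 Not Found
}
```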

Java Interfaces and Apache CXF

The main deviation from the JAX-RS specification is using Apache CXF's Client Proxy. As you may recall, JAX-WS (for WSDL-based Web Services) has a nice feature where the interface that provides the stub for your remote Web Service can be instantiated locally as a java.lang.reflect.Proxy. This allows you to make calls to the remote service as if it were local, and the Proxy would handle all of the marshalling/unmarshalling to XML, including propagating wsdl:Fault responses to Exceptions. This allowed developers to work in a single language (Java) and not have to be split between the worlds of Java and XML.

Sadly, JAX-RS doesn't have anything similar in its specification. This is likely owing to the fragmentation of REST. There is no single artifact to define what a REST service is. I have found it very helpful to try to bring back the concept of a single artifact to describe the interface and thus use Java interfaces to model these REST services.

Apache CXF makes this easier as they provide the type of lightweight client proxy that JAX-WS did. Consider the following snippet from the PlateTemplateIT in the we99-web module:

    URL url = new URL("http://localhost:8080/we99/services/rest/");
    ClientFactory cf = new ClientFactory(url);

    plateTypeService = cf.create(PlateTypeService.class);

    PlateType plateType = plateTypeService.create(new PlateType()
            .withRows(4).withCols(3)
            .withManufacturer("Foo Inc."));

The URL above points to the JAX-RS container that has all of the REST services for our application. It DOES NOT point to a specific service. Thus, the URL and ClientFactory could be used to instantiate a proxy for ANY service at that URL.

Creating a service proxy is as simple as passing the Class for the interface that you want a proxy for. There are about 15 lines of code copied from the Apache CXF website inside of my ClientFactory to make the call simpler, but that's it. The CXF library will instantiate a dynamic proxy that intercepts every method call on that interface and maps it to an HTTP request that gets sent on the wire to the remote service. Similarly, it'll map the response to the return type described on the signature or throw if there was an exception.
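The ClientFactory itself isn't shown here; its core, per the Apache CXF JAX-RS client documentation, is a call to JAXRSClientFactory (a sketch, details may differ from the actual class in the project):

```java
public class ClientFactory {

    private final String baseAddress;

    public ClientFactory(URL url) {
        this.baseAddress = url.toExternalForm();
    }

    public <T> T create(Class<T> serviceClass) {
        // CXF builds a dynamic proxy for the interface; each method call is
        // mapped to an HTTP request using the JAX-RS annotations on the interface
        return JAXRSClientFactory.create(baseAddress, serviceClass,
                Collections.singletonList(new JacksonJsonProvider()));
    }
}
```

Registering the Jackson provider means the client marshals request and response bodies with the same JSON mapping the server uses.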

The call to create a new PlateType looks just like regular Java and is in fact indistinguishable from a test that might work with a local copy of the service as opposed to a remote copy. The result is a single language in which to test these services that has compile time checking for the arguments and integrates nicely into our IDE's so refactoring of interfaces ripples into test code.

Jetty

Jetty is an embeddable web server and servlet container. It's used in this project for integration testing. The maven jetty plugin spins up a copy of our web app prior to running integration tests in we99-web and stops it after the tests are done.

Using a real web server for these tests is a little slower at build time (12 seconds so far for that module) but it adds the additional aspect of transport level testing. By invoking the services over the wire, we're ensuring that we're testing all of the various annotations/configs up and down the request/response channel to ensure that nothing was missed.

You can also run a copy of the web app locally as so:

cd we99-web
mvn jetty:run

Troubleshooting: If you encounter an error, you may need to follow the instructions in this link to set up the jetty plugin.
