Wednesday, November 22, 2023

Transition from Java EE to Jakarta EE

What happened and what you need to know


Java EE is undoubtedly one of the most recognizable frameworks for server-side Java. It essentially kick-started the industry for using Java on the server, and it goes all the way back to the very beginnings of Java in 1996 with Kiva Enterprise Server (GlassFish) and the Tengah application server (the Oracle WebLogic Server ancestor). Note that here, the word Tengah refers to an administrative region in the center of the island of Java in Indonesia.

Java EE, or J2EE (Java 2 Enterprise Edition) as it was known before, is perhaps best known for its Java Servlet specification and for servers implementing that, such as Tomcat and Jetty. These are often called servlet containers. Although there are alternatives, many server applications and third-party frameworks are based on the Java Servlet specification. Besides this specification, Java EE in later years became known for its specifications for persistence (Java Persistence API [JPA], mostly via Hibernate), REST (JAX-RS), WebSocket, and a slew of smaller specifications such as for transactions (Java Transaction API [JTA], mostly used under the covers by JPA), for validation (Bean Validation), and for JSON (JSON-P and JSON-B).

In practice, even applications that would not obviously be classified as Java EE applications often use a variety of Java EE APIs.

Full implementations of Java EE, traditionally used in application servers, have enjoyed considerable success as well: JBoss/WildFly, GlassFish/Payara, and, more recently, Open Liberty (the modern successor of WebSphere) are all well known.

Then there’s a group of products that are neither application servers nor servlet containers, but do support a variety of Java EE APIs out of the box. These include Quarkus (Contexts and Dependency Injection [CDI], JAX-RS, JPA), Helidon (CDI, JAX-RS, JPA, JTA), KumuluzEE (CDI, JAX-RS, JPA, Servlet, JavaServer Faces [JSF], WebSocket, Bean Validation, JSON-P), and Piranha (CDI, JAX-RS, Java EE Security, Expression Language [EL]).

Finally, there’s the Java EE offspring platform called MicroProfile, which directly depends on Java EE APIs such as CDI, JAX-RS, and JSON. Altogether, this makes the Java EE APIs quite relevant for a large group of users.

What Has Been Going on with Java EE?


The last release of Java EE proper was Java EE 8 in August 2017. This was a scope-reduced release, although it did contain important key functionality, such as Java EE Security. Oracle decided later that year to transfer Java EE fully to an open source foundation. In coordination with Java EE partners Red Hat and IBM, it was decided to transfer Java EE along with the full reference implementation and the Technology Compatibility Kit (TCK) to the Eclipse Foundation.

Due to the enormous amount of work involved with this transfer, the process was split into three stages.

Stage 1: Transfer API and implementation code and release a verified build. The first stage involved creating a new top-level project at Eclipse called Eclipse Enterprise for Java (EE4J). The EE4J project and its associated GitHub organization, eclipse-ee4j, are home to both the specification and implementation projects. EE4J should not be confused with the new brand name for Java EE, Jakarta EE, which was selected several months later by the community.

Before the actual transfer of all the existing source code from the Oracle repository at github.com/javaee could be done, all the code had to be cleared legally, which among other things meant potentially controversial portions had to be removed. Weighing in at many millions of lines of code, this was clearly no small task. Applying this legal clearing to all the historical code as well would have been simply infeasible. Therefore, the first thing to note is that only the latest released versions of the code were transferred. For instance, JSF 2.3 was transferred as a snapshot from its master branch. JSF 2.2 and earlier versions remain at their original location and are not maintained or supported by the Eclipse Foundation.

After the transfer of the source code, all the code was built using Eclipse build servers, and the result was staged to a Maven repository. The API JAR files had their Maven group ID changed from javax.* to jakarta.*, indicating that they are the build artifacts produced by Eclipse. From these, a new build of GlassFish was produced, and against this build the original Java EE 8 TCK was run. After the build passed the TCK tests, proving that all the code was transferred successfully, it was released as GlassFish 5.1.

By the way, the initial releases of the APIs under the jakarta group ID are Java EE 8 certified, not Jakarta EE 8 certified. For example, jakarta.faces:jakarta.faces-api:2.3.1 is identical to javax.faces:javax.faces-api:2.3, and both are Java EE 8 certified, but the first is built from github.com/eclipse-ee4j and the latter from github.com/javaee.

Stage 2: Transfer TCK code, set up a new specification process, define new terms, and release a rebranded build. The second stage involved transferring the TCK and building new binaries from it for Jakarta EE 8 certification. A new certification process was set up: the Jakarta EE Specification Process (JESP). Also, a new specification license was created: the Eclipse Foundation Technology Compatibility Kit license.

In this stage, new simplified and more-consistent names were created for all specifications. The new names all start with Jakarta and are followed by a simple description of the specification, avoiding inconsistent filler words such as architecture, api, and service.

The old and new terms are shown in Table 1.

Table 1. Old Java EE 8 terms compared to new Jakarta EE 8 terms

The Javadoc for all APIs was updated in this stage to reflect the new terms, and the resulting API JAR files were relicensed and then tested against GlassFish 5.1 with the rebranded TCK that was built from the transferred TCK source code. All this was done following the new JESP specification process.

The resulting API JAR files were all released with near-empty placeholder specification documents. These combined constitute Jakarta EE 8.

For the individual JAR files, this means that this stage produced the third release of technically the same API, the second release using the Maven jakarta group ID, and the first release that’s Jakarta EE certified. Table 2 shows an example for JSF/Jakarta Faces.

Table 2. Example showing the JAR files for JSF and Jakarta Faces

There are two extra things to notice here.

The first is that for Jakarta EE 8, there wasn’t a corresponding GlassFish release, although GlassFish 5.1 was certified for Jakarta EE 8 in addition to the existing Java EE 8 certification.

The second is that, as mentioned above, Jakarta EE 8 was released with essentially empty specification documents. The reason for this is the large amount of time it takes to legally clear and transfer those documents, and this simply was not finished in time for the Jakarta EE 8 release. For now, users (nonimplementors) of the technologies can read the evaluation version of the corresponding Java EE 8 documents.

Updating to the Jakarta EE versions of the APIs is the first small step users can take to prepare themselves for the upcoming larger changes. In a Maven project, doing that is mostly as simple as replacing this:

<dependency>
    <groupId>javax</groupId>
    <artifactId>javaee-api</artifactId>
    <version>8.0</version>
    <scope>provided</scope>
</dependency>

With this:

<dependency>
    <groupId>jakarta.platform</groupId>
    <artifactId>jakarta.jakartaee-api</artifactId>
    <version>8.0.0</version>
    <scope>provided</scope>
</dependency>


Or, when individual dependencies are used, replacing this:

<dependency>
    <groupId>javax.faces</groupId>
    <artifactId>javax.faces-api</artifactId>
    <version>2.3</version>
    <scope>provided</scope>
</dependency>


With this:

<dependency>
    <groupId>jakarta.faces</groupId>
    <artifactId>jakarta.faces-api</artifactId>
    <version>2.3.2</version>
    <scope>provided</scope>
</dependency>


Because the APIs are essentially identical, there should be few issues after this update. Note, though, that Maven does not see the two dependencies as related with one being newer than the other.

To Maven, these are two totally different dependencies, and Maven will happily include both of them. This can happen, for instance, when a top-level dependency transitively brings in a Java EE dependency. Prior to the update to Jakarta, a transitively introduced javax.faces:javax.faces-api:2.2 would be overridden by, for example, a top-level javax.faces:javax.faces-api:2.3.

When that top-level dependency is changed to jakarta.faces:jakarta.faces-api:2.3.2, the 2.2 dependency will no longer be overridden and Maven will use them both, leading to all sorts of problems. If the transitive inclusion can’t be updated, this issue can typically be fixed by using exclusions, for example:

<dependency>
    <groupId>com.example</groupId>
    <artifactId>foo</artifactId>
    <scope>provided</scope>
    <exclusions>
        <exclusion>
            <groupId>javax.faces</groupId>
            <artifactId>javax.faces-api</artifactId>
        </exclusion>
    </exclusions>
</dependency>



That brings me to the final step in the process.

Stage 3: Transfer and update the specification documents, prune the specifications, change the API package name, and release Jakarta EE 9. The final step of the transfer, which is currently in process and set to complete later this year, includes transferring the specification document source code (mostly in AsciiDoc). After the API code, implementation code, and TCK code, this is the final artifact to be transferred. Just like the Javadoc for the APIs, the specification documents will be updated to use the new terminology.

The highest impact item in this stage, however, is changing the package name in all the Java APIs from javax.* to jakarta.*. For instance, javax.faces.context.FacesContext will become jakarta.faces.context.FacesContext. The consequence of this package name change is that the code of existing applications will have to be updated, making this a nontrivial update.
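
In application code, the change is mostly mechanical. As a simple sketch (the combination of imports shown together is illustrative):

// Compiled against Java EE 8 / Jakarta EE 8:
import javax.faces.context.FacesContext;
import javax.inject.Inject;

// The same code compiled against Jakarta EE 9:
import jakarta.faces.context.FacesContext;
import jakarta.inject.Inject;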

Given the large amount of time that has passed since the Java EE 8 release, Jakarta EE 9 will officially require support for JDK 11. However, since JDK 8 is still so important, the APIs remain at JDK 8. Practically, this means the APIs have to be compiled with the JDK 8 source code level, but the implementations must pass the TCK running on JDK 11.

Because JDK 11 removed several specifications that had earlier been moved from Java EE into Java SE, these will now be moved back again. Jakarta Activation enters Jakarta EE as a required specification (specifically because it’s a required dependency of Jakarta Mail), while Jakarta XML Binding, Jakarta XML Web Services, Web Services Metadata, and SOAP with Attachments are added as optional specifications.

Jakarta Enterprise Beans (formerly EJB) will be reduced in size again. After entity beans (including EJB Query Language) and Java API for XML-based RPC (JAX-RPC) endpoints were pruned in EJB 3.2, it’s now time to prune the EJB 2.1 API group (for example, javax.ejb.EJBHome) and the so-called distributed interoperability.

Furthermore, the Deployment specification (JSR 88) and the Management specification (JSR 77) will be pruned as well. JSR 88 was already optional in Java EE 8, and although JSR 77 was once slated for a major update, that update failed to materialize. JAX-RPC, which was long ago superseded by JAX-WS and already optional in Java EE 8, will now finally be pruned as well, together with XML Registries, which was also already optional in Java EE 8.

Table 3 shows the JSF/Jakarta Faces example again, updated for Jakarta EE 9; changes relative to Java EE 8 are in bold. The final row is tentative and still subject to change (though change is unlikely).

Table 3. JSF/Jakarta example updated for Jakarta EE 9 (view larger image)

Conclusion

After Jakarta EE 9 has been released, and presumably all specification documents have been transferred and updated, the transfer of Java EE 8 will be considered done. At that point, everything related to Java EE will have been moved to the Eclipse Foundation and updated to the new branding.

Functionally speaking, Jakarta EE 9 is still essentially the same as Java EE 8, so from a purely functional perspective, neither Jakarta EE 8 nor Jakarta EE 9 is particularly enticing for users to update to. The purpose of those releases is, however, to give the community and the ecosystem (for example, tooling and library vendors) the opportunity to prepare their applications and products. Jakarta EE 10 will be the first version in which new functionality will appear. Table 4 shows the releases and release dates (tentative dates are denoted by *).

Table 4. Releases and release dates

Monday, November 20, 2023

Using JSON Relational Duality Views with Micronaut Framework

Oracle JSON Relational Duality delivers a capability that provides the benefits of both relational tables and JSON documents, without the trade-offs of either approach. The new feature in Oracle Database 23c that enables this capability is referred to as a JSON Relational Duality View.

Using Duality Views, data is still stored in relational tables in a highly efficient normalized format but is accessed by applications in the form of JSON documents. Developers can thus think in terms of JSON documents for data access while using highly efficient relational data storage, without having to compromise on simplicity. In addition, Duality Views hide all the complexities of database-level concurrency control from the developer, providing document-level serializability.

In this blog post, we provide an example of using the Micronaut Framework to create and interact with a JSON Relational Duality View.

The source for the example is available on GitHub, and we'll look at particular snippets to demonstrate how to use Micronaut Data with Duality Views.

1. The Example Application


Our example is a simple relational database application that represents a student course schedule. A student has a course with a name, a time, a location, and a teacher. A simple example like this uses data stored in multiple normalized relational tables: a student table, a teacher table, a course table, and a table mapping students to their courses. But it is not always straightforward for developers, even in a simple example like this, to build the course schedule for one student, say, "Jill". The developer has to retrieve data from all four tables to assemble Jill's schedule. What the developer really wants is to build Jill's schedule using a single database operation.

What if we could use JSON documents to build this application? That would really simplify database access. JSON is very popular as an access and interchange format because it is so simple.

For example, the course schedule could be represented in a JSON document as a simple hierarchy of key-value pairs. So, Jill's schedule could be as simple as a single JSON document, providing details of each of her courses (name, time, location, and teacher).

However, JSON has limitations as a storage format because of data duplication and consistency. Even in the simple example of student schedules, the course and teacher information is stored redundantly in each student's course schedule document. Duplicate data is inefficient to store, expensive to update, and difficult to keep consistent.

JSON Relational Duality Views combine the benefits of the relational and the document approaches.

A duality view declares the recipe for assembling normalized rows into a JSON document using SQL or GraphQL syntax. The structure of the view mirrors the structure of your desired JSON document. Then you can select from the duality view using SQL, and return Jill's course schedule as a JSON document. You can also update the JSON document that represents Jill's course schedule and the duality view updates the underlying database tables.
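
To make that concrete, here is a hedged sketch of what selecting from such a view could look like over plain JDBC. The view name and the student key match the STUDENT_SCHEDULE view defined later in this post; the connection details and the exact query are assumptions.

// imports: java.sql.Connection, java.sql.DriverManager, java.sql.PreparedStatement, java.sql.ResultSet
String url = "jdbc:oracle:thin:@//localhost:1521/test"; // placeholder connection details

try (Connection conn = DriverManager.getConnection(url, "user", "password");
     PreparedStatement ps = conn.prepareStatement(
             // A duality view exposes a single JSON column named DATA.
             "SELECT s.data FROM student_schedule s WHERE json_value(s.data, '$.student') = ?")) {
    ps.setString(1, "Jill");
    try (ResultSet rs = ps.executeQuery()) {
        while (rs.next()) {
            System.out.println(rs.getString(1)); // Jill's schedule as one JSON document
        }
    }
}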

1.1. Application Configuration

The application is configured in src/main/resources/application.yml, as follows:

micronaut: 
  application:
    name: OracleJsonDemo
  server:
    thread-selection: io
datasources: # <2>
  default:
    schema-generate: none
    packages: com.example.micronaut.entity
    dialect: oracle
test-resources: # <1>
  containers:
    oracle:
      image-name: gvenzl/oracle-free:latest-faststart
      startup-timeout: 360s
      db-name: test
flyway: # <3>
  datasources:
    default:
      enabled: true
      baseline-version: 0
      baseline-on-migrate: true

In addition to the name of the application, the configuration file contains three properties that are required by this example application:

1. Test resources: an Oracle Database container image.
2. Datasources: to indicate the database dialect, and the package(s) to be used.
3. Flyway: to automate the creation of the database schema, including the tables and relational duality view. Micronaut integration with Flyway automatically triggers schema migration before the Micronaut application starts.

1.2 Application Schema

Flyway reads SQL commands in the resources/db/migration/ directory, runs them if necessary, and verifies that the configured data source is consistent with them. The example application contains two files:

  • src/main/resources/db/migration/V1__schema.sql: this creates the COURSE, STUDENT, TEACHER, and STUDENT_COURSE tables, and adds foreign key constraints between them.
  • src/main/resources/db/migration/V2__view.sql: this creates the STUDENT_SCHEDULE relational duality view.

Let's take a closer look at the second of those two files:

CREATE OR REPLACE JSON RELATIONAL DUALITY VIEW "STUDENT_SCHEDULE" AS -- <1>
SELECT JSON{
        'studentId': s."ID", -- <2>
        'student': s."NAME" WITH UPDATE, -- <3>
        'averageGrade': s."AVERAGE_GRADE" WITH UPDATE,
        'schedule': [SELECT JSON{'id': sc."ID", -- <4>
                                 'course': (SELECT JSON{'courseId': c."ID", -- <5>
                                                       'teacher': (SELECT JSON{'teacherId': t."ID", -- <6>
                                                                                'teacher': t."NAME"}
                                                                    FROM "TEACHER" t WITH UPDATE WHERE c."TEACHER_ID" = t."ID"),
                                                       'room': c."ROOM",
                                                       'time': c."TIME",
                                                       'name': c."NAME" WITH UPDATE}
                                           FROM "COURSE" c WITH UPDATE WHERE sc."COURSE_ID" = c."ID")}
                      FROM "STUDENT_COURSE" sc WITH INSERT UPDATE DELETE WHERE s."ID" = sc."STUDENT_ID"]}
FROM "STUDENT" s WITH UPDATE INSERT DELETE;

  1. Create a duality view named STUDENT_SCHEDULE. It maps to the StudentScheduleView class described below.
  2. The ID column of the STUDENT table.
  3. The NAME column of the STUDENT table, which can be updated.
  4. The value of the schedule key is the result of a SELECT SQL operation.
  5. The value of the course key is the result of a SELECT SQL operation. It maps to the CourseView class described below.
  6. The value of the teacher key is the result of a SELECT SQL operation. It maps to the TeacherView class described below.

1.3. Application Domain

The example application consists of domain classes (in the package com.example.micronaut.entity) corresponding to the database tables, all implemented as Java Record types (a sketch of one of them follows the list):

  • Course
  • Student
  • Teacher
  • StudentCourse
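
The post doesn't reproduce the entity sources, but as a rough sketch, a Micronaut Data entity record for the STUDENT table might look like the following. The @MappedEntity, @Id, and @GeneratedValue annotations are standard Micronaut Data; the exact shape of the real class is an assumption.

package com.example.micronaut.entity;

import io.micronaut.core.annotation.Nullable;
import io.micronaut.data.annotation.GeneratedValue;
import io.micronaut.data.annotation.Id;
import io.micronaut.data.annotation.MappedEntity;

@MappedEntity("STUDENT")
public record Student(
        @Id @GeneratedValue @Nullable Long id, // assigned by the database on insert
        String name,
        Double averageGrade) {                 // maps to AVERAGE_GRADE by naming convention

    // Convenience constructor matching the controller's new Student(name, averageGrade) call.
    public Student(String name, Double averageGrade) {
        this(null, name, averageGrade);
    }
}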

It also includes the following view classes (in the com.example.micronaut.entity.view package) corresponding to JSON documents (also implemented as Java Record types):

  • CourseView: provides a JSON document view of a row in the COURSE table. It maps to the value of the course key described above.
  • StudentView: provides a JSON document view of a row in the STUDENT table.
  • TeacherView: provides a JSON document view of a row in the TEACHER table. It maps to the value of the teacher key described above.
  • StudentScheduleView: maps to the STUDENT_SCHEDULE view declared above.

Within the same package, the class Metadata is used to control concurrency; it carries the etag and asof values that appear under _metadata in the JSON output below.

Finally, the application provides a record named CreateStudentDto to represent the data transfer object to create a new student. The implementation is in the com.example.micronaut.dto package.

1.4. Database Operations

The application requires interfaces to define operations to access the database. Micronaut Data implements these interfaces at compile time. In the com.example.micronaut.repository package there is a repository interface corresponding to each table, as follows:

  • CourseRepository
  • StudentRepository
  • TeacherRepository
  • StudentCourseRepository

There is an additional interface in the com.example.micronaut.repository.view package named StudentViewRepository, which provides a repository for instances of StudentView.
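
The post doesn't show the interface itself, but its shape can be inferred from the controller below, which calls findByStudent(), updateAverageGrade(), updateStudentByStudentId(), and findMaxAverageGrade() on it. Here is a sketch; the annotations are assumptions, and Micronaut Data generates the query implementations from the method signatures at compile time.

package com.example.micronaut.repository.view;

import com.example.micronaut.entity.view.StudentView;
import io.micronaut.data.annotation.Id;
import io.micronaut.data.jdbc.annotation.JdbcRepository;
import io.micronaut.data.model.query.builder.sql.Dialect;
import io.micronaut.data.repository.CrudRepository;
import java.util.Optional;

@JdbcRepository(dialect = Dialect.ORACLE) // assumed; matches the dialect in application.yml
public interface StudentViewRepository extends CrudRepository<StudentView, Long> {

    Optional<StudentView> findByStudent(String student);     // GET /students/student/{student}

    void updateStudentByStudentId(@Id Long id, String student); // PUT /students/{id}/student/{student}

    void updateAverageGrade(@Id Long id, Double averageGrade);  // PUT /students/{id}/average_grade/{averageGrade}

    Optional<Double> findMaxAverageGrade();                  // GET /students/max_average_grade
}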

1.5. Application Controller

The application controller, StudentController (defined in src/main/java/com/example/micronaut/controller/StudentController.java), provides the API to the application, as follows:

@Controller("/students") // <1>
public final class StudentController {

    private final CourseRepository courseRepository;
    private final StudentRepository studentRepository;
    private final StudentCourseRepository studentCourseRepository;
    private final StudentViewRepository studentViewRepository;

    public StudentController(CourseRepository courseRepository, StudentRepository studentRepository, StudentCourseRepository studentCourseRepository, StudentViewRepository studentViewRepository) { // <2>
        this.courseRepository = courseRepository;
        this.studentRepository = studentRepository;
        this.studentCourseRepository = studentCourseRepository;
        this.studentViewRepository = studentViewRepository;
    }

    @Get("/") // <3>
    public Iterable<StudentView> findAll() {
        return studentViewRepository.findAll();
    }

    @Get("/student/{student}") // <4>
    public Optional<StudentView> findByStudent(@NonNull String student) {
        return studentViewRepository.findByStudent(student);
    }

    @Get("/{id}") // <5>
    public Optional<StudentView> findById(Long id) {
        return studentViewRepository.findById(id);
    }

    @Put("/{id}/average_grade/{averageGrade}") // <6>
    public Optional<StudentView> updateAverageGrade(Long id, @NonNull Double averageGrade) {
        //Use a duality view operation to update a student's average grade
        return studentViewRepository.findById(id).flatMap(studentView -> {
            studentViewRepository.updateAverageGrade(id, averageGrade);
            return studentViewRepository.findById(id);
        });
    }

    @Put("/{id}/student/{student}") // <7>
    public Optional<StudentView> updateStudent(Long id, @NonNull String student) {
        //Use a duality view operation to update a student's name
        return studentViewRepository.findById(id).flatMap(studentView -> {
            studentViewRepository.updateStudentByStudentId(id, student);
            return studentViewRepository.findById(id);
        });
    }

    @Post("/") // <8>
    @Status(HttpStatus.CREATED) 
    public Optional<StudentView> create(@NonNull @Body CreateStudentDto createDto) {
      // Use a relational operation to insert a new row in the STUDENT table
      Student student = studentRepository.save(new Student(createDto.student(), createDto.averageGrade()));
      // For each of the courses in createDto parameter, insert a row in the STUDENT_COURSE table
      courseRepository.findByNameIn(createDto.courses()).stream()
          .forEach(course -> studentCourseRepository.save(new StudentCourse(student, course)));
      return studentViewRepository.findByStudent(student.name());
    }

    @Delete("/{id}") // <9>
    @Status(HttpStatus.NO_CONTENT)
    void delete(Long id) {
        //Use a duality view operation to delete a student
        studentViewRepository.deleteById(id);
    }

    @Get("/max_average_grade") // <10>
    Optional<Double> findMaxAverageGrade() {
        return studentViewRepository.findMaxAverageGrade();
    }
}

  1. The class is defined as a controller with the @Controller annotation mapped to the path /students.
  2. Use constructor injection to inject beans of types CourseRepository, StudentRepository, StudentCourseRepository, and StudentViewRepository.
  3. The @Get annotation maps a GET request to /students, which attempts to retrieve a list of students, represented as instances of StudentView.
  4. The @Get annotation maps a GET request to /students/student/{student}, which attempts to retrieve a student, represented as an instance of StudentView. This illustrates the use of a URL path variable (student).
  5. The @Get annotation maps a GET request to /students/{id}, which attempts to retrieve a student, represented as an instance of StudentView.
  6. The @Put annotation maps a PUT request to /students/{id}/average_grade/{averageGrade}, which attempts to update a student's average grade.
  7. The @Put annotation maps a PUT request to /students/{id}/student/{student}, which attempts to update a student's name.
  8. The @Post annotation maps a POST request to /students/, which attempts to create a new student. (The method uses relational operations to insert rows into the STUDENT and STUDENT_COURSE tables.)
  9. The @Delete annotation maps a DELETE request to /students/{id}, which attempts to delete a student.
  10. The @Get annotation maps a GET request to /students/max_average_grade, which returns the maximum average grade for all students.

1.6. Main Class

Like all Micronaut applications, the entry point for the example application is the Application class in the package com.example.micronaut. It uses constructor injection to inject beans of type CourseRepository, StudentRepository, TeacherRepository, and StudentCourseRepository. It includes a main() method (which starts the application) and an init() method which populates the database tables using relational operations.
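
The post doesn't list the class, but a minimal sketch along those lines, assuming Micronaut's standard entry point and a StartupEvent listener for the init() method, might look like this:

package com.example.micronaut;

import com.example.micronaut.repository.CourseRepository;
import com.example.micronaut.repository.StudentCourseRepository;
import com.example.micronaut.repository.StudentRepository;
import com.example.micronaut.repository.TeacherRepository;
import io.micronaut.context.event.StartupEvent;
import io.micronaut.runtime.Micronaut;
import io.micronaut.runtime.event.annotation.EventListener;
import jakarta.inject.Singleton;

@Singleton
public class Application {

    private final CourseRepository courseRepository;
    private final StudentRepository studentRepository;
    private final TeacherRepository teacherRepository;
    private final StudentCourseRepository studentCourseRepository;

    public Application(CourseRepository courseRepository, StudentRepository studentRepository,
                       TeacherRepository teacherRepository, StudentCourseRepository studentCourseRepository) {
        this.courseRepository = courseRepository;
        this.studentRepository = studentRepository;
        this.teacherRepository = teacherRepository;
        this.studentCourseRepository = studentCourseRepository;
    }

    public static void main(String[] args) {
        Micronaut.run(Application.class, args);
    }

    @EventListener // runs once the application context has started
    void init(StartupEvent event) {
        // Populate the TEACHER, COURSE, STUDENT, and STUDENT_COURSE tables here
        // using relational operations on the injected repositories (seed data omitted).
    }
}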

2. Run the Application


Run the application using the following command (it will start the application on port 8080):

./gradlew run

Wait until the application has started and created the database schema. Your output should look something like:

Jul 31, 2023 4:55:27 PM org.flywaydb.core.internal.schemahistory.JdbcTableSchemaHistory create
INFO: Creating Schema History table "TEST"."flyway_schema_history" ...
Jul 31, 2023 4:55:28 PM org.flywaydb.core.internal.command.DbMigrate migrateGroup
INFO: Current version of schema "TEST": << Empty Schema >>
Jul 31, 2023 4:55:28 PM org.flywaydb.core.internal.command.DbMigrate doMigrateGroup
INFO: Migrating schema "TEST" to version "1 - schema"
Jul 31, 2023 4:55:31 PM org.flywaydb.core.internal.command.DbMigrate doMigrateGroup
INFO: Migrating schema "TEST" to version "2 - view"
Jul 31, 2023 4:55:31 PM org.flywaydb.core.internal.command.DbMigrate logSummary
INFO: Successfully applied 2 migrations to schema "TEST", now at version v2 (execution time 00:00.772s)
16:55:34.164 [main] INFO  io.micronaut.runtime.Micronaut - Startup completed in 123859ms. Server Running: http://localhost:8080

3. Test the Application


Test the application by using curl to call the API, implemented by the StudentController class. (We recommend using jq to improve the readability of the JSON output.)

1. List all the students and their schedules by running the following command.

curl --silent http://localhost:8080/students | jq '.'

You should see output similar to the following.

[
  {
    "studentId": 1,
    "student": "Denis",
    "averageGrade": 8.5,
    "schedule": [
      {
        "id": 1,
        "course": {
          "courseId": 1,
          "name": "Math",
          "teacher": {
            "teacherId": 2,
            "teacher": "Mr. Graeme"
          },
          "room": "A101",
          "time": "10:00:00"
        }
      },
      {
        "id": 4,
        "course": {
          "courseId": 3,
          "name": "History",
          "teacher": {
            "teacherId": 1,
            "teacher": "Ms. Olya"
          },
          "room": "A103",
          "time": "12:00:00"
        }
      }
    ],
    "_metadata": {
      "etag": "FF95AEFCF102491B75E75DB54EF1385A",
      "asof": "000000000021C4BB"
    }
  },
...
]

2. Retrieve a schedule by student name.

curl --silent http://localhost:8080/students/student/Jill | jq '.'

3. Retrieve a schedule by student id. The output should look similar to the above, for the student named "Devjani".

curl --silent http://localhost:8080/students/3 | jq '.'

4. Create a new student with courses (and view that student's schedule). The output should be familiar.

curl --silent \
    -d '{"student":"Sandro", "averageGrade":8.7, "courses": ["Math", "English"]}' \
    -H "Content-Type: application/json" \
    -X POST http://localhost:8080/students | jq '.'

5. Update a student's average grade (by student id).

curl --silent -X PUT http://localhost:8080/students/1/average_grade/9.8 | jq '.'

6. Retrieve the maximum average grade.

curl http://localhost:8080/students/max_average_grade

7. Update a student's name (by student id), for example, to correct a typo.

curl --silent -X PUT http://localhost:8080/students/1/student/Dennis | jq '.'

8. Delete a student (by student id) and retrieve the new maximum average grade (to confirm deletion).

curl -X DELETE http://localhost:8080/students/1
curl http://localhost:8080/students/max_average_grade

Discussion

We can see from the tests above how the view classes (in the com.example.micronaut.entity.view package) provide the output. Let's look at Jill's schedule in detail. The output is produced by the findByStudent() method; it returns an instance of StudentView, which is rendered as JSON. You should see output similar to the following, which we have annotated. Note that the structure of the output mirrors the structure of the STUDENT_SCHEDULE relational duality view created in src/main/resources/db/migration/V2__view.sql. If you have time, take a look at the view classes to see how they implement the structure below.

{ // Start of StudentView
  "studentId": 2,
  "student": "Jill",
  "averageGrade": 7.2,
  "schedule": [
    { // Start of StudentScheduleView
      "id": 2,
      "course": { // Start of CourseView
        "courseId": 1,
        "name": "Math",
        "teacher": { // Start of TeacherView
          "teacherId": 2,
          "teacher": "Mr. Graeme"
        }, // End of TeacherView
        "room": "A101",
        "time": "10:00:00"
      } // End of CourseView
    }, //End of StudentScheduleView
    { // Start of StudentScheduleView
      "id": 5,
      "course": { //Start of CourseView
        "courseId": 2,
        "name": "English",
        "teacher": { // Start of TeacherView
          "teacherId": 3,
          "teacher": "Prof. Yevhen"
        }, // End of TeacherView
        "room": "A102",
        "time": "11:00:00"
      } // End of CourseView
    } // End of StudentScheduleView
  ],
  "_metadata": {
    "etag": "5C51516688936720969FE3DBBAA3CEF5",
    "asof": "000000000021F3D4"
  }
} // End of StudentView

Source: oracle.com

Friday, November 17, 2023

Unleashing the Power of Java and JVM Development

Introduction


Welcome to the realm of Java and JVM development, where innovation meets efficiency. In this comprehensive guide, we delve into the intricacies of Java and JVM, exploring why they are pivotal in the world of software development. Our aim is not just to provide information but to equip you with insights that go beyond the ordinary.

Understanding Java: A Language of Versatility


Java stands as a stalwart in the programming landscape, renowned for its cross-platform compatibility and robust performance. It's the language that powers a myriad of applications, from mobile devices to large-scale enterprise systems. The versatility of Java lies in its ability to adapt and thrive in diverse environments.

The Java Advantage

Java's object-oriented programming paradigm allows developers to create modular, reusable code, fostering a more efficient development process. Additionally, its strong memory management and multi-threading capabilities contribute to the creation of high-performance applications.

The Heart of Java Development: Java Virtual Machine (JVM)


Decoding JVM

At the core of Java's prowess is the Java Virtual Machine (JVM), a runtime environment that executes Java bytecode. This virtualization enables Java applications to run seamlessly on various platforms without modification, ensuring a consistent user experience.

Efficiency Unleashed

JVM's just-in-time (JIT) compilation optimizes performance by translating bytecode into native machine code at runtime. This dynamic approach enhances execution speed, making Java applications not only powerful but also nimble.

Key Features That Set Java Apart


1. Platform Independence

Java's "Write Once, Run Anywhere" philosophy underscores its platform independence. Code compatibility across different systems reduces development time and costs.

2. Exception Handling

Robust exception handling in Java ensures graceful error management, enhancing the reliability of applications in real-world scenarios.

3. Rich Standard Library

Java's extensive standard library provides a wealth of pre-built functions, saving developers time and effort in coding common tasks.

Java in Action: Real-World Applications


Enterprise Solutions

Java's scalability and reliability make it a preferred choice for developing enterprise-level applications. Its ability to handle complex tasks seamlessly ensures the success of large-scale projects.

Mobile Development

In the mobile realm, Java has left an indelible mark. Android, the world's most popular mobile operating system, relies heavily on Java for app development.

Web Development

Java's compatibility with various web frameworks, such as Spring and JavaServer Faces (JSF), makes it a robust choice for web development, offering scalability and security.

Staying Ahead in the Java Ecosystem


Continuous Learning

The dynamic nature of technology necessitates continuous learning. Staying updated on the latest Java developments, frameworks, and best practices is essential for any Java developer aiming for excellence.

Community Engagement

Being part of the vibrant Java community provides a wealth of resources and support. Forums, conferences, and online communities offer opportunities for collaboration and knowledge-sharing.

Thursday, November 9, 2023

Announcing Java Card 3.2 Release

With the whole Java Card team, I am delighted to announce the new Java Card 3.2 release. It is now live and available on Oracle's portal: Java Card 3.2

This release continues and completes the great Java Card achievements already described and presented on the occasion of the twenty-fifth anniversary of the technology (25 years anniversary and 25 years we sow).


Like any new Java Card release, this one comes with enhancements such as support for the (D)TLS 1.3 protocols and API clarifications that help application developers and significantly increase the level of interoperability across multiple implementations.

Configuration, compliance, certification, and interoperability are keywords for making Secure Element based products. These four requirements have been the leitmotiv of the Java Card 3.2 release, sustaining and moving the technology ahead in sync with industry trends across various security hardware and various markets: Banking, Mobile Payment, Identity, SIM and cellular connectivity (2G to 5G now), Access Control, Strong Authentication, and IoT Security.

Source: oracle.com

Monday, November 6, 2023

Java records: Serialization, marshaling, and bean state validation

Existing frameworks and libraries that access instance variables through getters and setters won’t work with records. Here’s what to do.

Records were first introduced in Java 14 as a preview feature. Recently, there has been a second preview with the arrival of Java 15. Record classes are therefore not yet a regular part of the JDK and they are still subject to change.

In brief, the main goal of record classes is to model plain data aggregates with less ceremony than normal classes. A record class declares a sequence of fields, and may also declare methods. The appropriate constructor, accessor, equals, hashCode, and toString methods are created automatically. The fields are final because the class is intended to serve as a simple data carrier.

A record class declaration consists of a name, a header (which lists the fields of the class, known as its components), and a body. The following is an example of a record declaration:

record RectangleRecord(double length, double width) {
}
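
Even this one-line declaration already carries a lot of generated behavior; a quick snippet shows the derived members in action:

RectangleRecord r1 = new RectangleRecord(2, 3);
RectangleRecord r2 = new RectangleRecord(2, 3);

System.out.println(r1);            // RectangleRecord[length=2.0, width=3.0] (generated toString)
System.out.println(r1.equals(r2)); // true: the generated equals() compares the components
System.out.println(r1.length());   // 2.0: accessor named after the component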

In this article, I will focus on serialization and deserialization, marshaling and unmarshaling, and state validation of records. But first, take a look at the class members of a record using Java’s Reflection API.

Introspection


With the introduction of records to Java, two new methods have been added to java.lang.Class:

  • isRecord(), which is similar to isEnum() except that it returns true if the class was declared as a record
  • getRecordComponents(), which returns an array of java.lang.reflect.RecordComponent objects corresponding to the record components

I’ll use the latter with the record class declared above to get its components:

System.out.println("Record components:");
Arrays.asList(RectangleRecord.class.getRecordComponents())
        .forEach(System.out::println);

Here’s the output:

Record components:
double length
double width

As you can see, the components are the variables (type and name pairs) specified in the header of the record declaration. Now, look at the record fields that are derived from the components:

System.out.println("Record fields:");
Arrays.asList(RectangleRecord.class.getDeclaredFields())
        .forEach(System.out::println);

The following is the output:

Record fields:
private final double record.test.RectangleRecord.length
private final double record.test.RectangleRecord.width

Note that the fields are generated by the compiler with the private and final modifiers. The field accessors and the constructor parameters are also derived from the record components, for example:

System.out.println("Field accessors:");
Arrays.stream(RectangleRecord.class.getDeclaredMethods())
        .filter(m -> Arrays.stream(RectangleRecord.class.getRecordComponents()).map(c -> c.getName()).anyMatch(n -> n.equals(m.getName())))
        .forEach(System.out::println);

System.out.println("Constructor parameters:");
Arrays.asList(RectangleRecord.class.getDeclaredConstructors())
        .forEach(c -> Arrays.asList(c.getParameters())
        .forEach(System.out::println));

Here’s the output:

Field accessors:
public double record.test.RectangleRecord.length()
public double record.test.RectangleRecord.width()
Constructor parameters:
double length
double width

Notice that the names of the field accessors do not start with get and, therefore, do not conform to the JavaBeans conventions.

You’re probably not surprised not to see any methods for setting the contents of a field, because records are supposed to be immutable.
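
Because the fields are final, the only way to "change" a record is to derive a new instance from an existing one, for example:

RectangleRecord original = new RectangleRecord(2, 3);
// There is no setter; create a new record with the changed value instead.
RectangleRecord widened = new RectangleRecord(original.length(), original.width() + 1);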

Record components can also be annotated in the same way you would do for constructor or method parameters. For this purpose, I’ve created a simple annotation such as the following one:

import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

@Retention(RetentionPolicy.RUNTIME)
public @interface MyAnnotation {
}

Be sure to set the retention policy to RUNTIME; otherwise, the annotation is discarded by the compiler and will not be present at runtime. So, this is the modified record declaration with annotated components:

record RectangleRecord(@MyAnnotation double length, @MyAnnotation double width) {
}

The next step is to retrieve the annotation on the record components via reflection, for example:

System.out.println("Record component annotations:");
Arrays.asList(RectangleRecord.class.getRecordComponents())
        .forEach(c -> Arrays.asList(c.getDeclaredAnnotations())
        .forEach(System.out::println));

The following is the output:

Record component annotations:
@record.test.MyAnnotation()
@record.test.MyAnnotation()

As expected, the annotation is present on both components specified in the header of the record.

For records, however, the annotations that you add to the components are also propagated to the derived fields, accessors, and constructor parameters. I will quickly verify this by printing out the annotations of the component-derived artifacts:

Here are annotations on record fields:

System.out.println("Record field annotations:");
Arrays.asList(RectangleRecord.class.getDeclaredFields())
        .forEach(f -> Arrays.asList(f.getDeclaredAnnotations())
        .forEach(System.out::println));

And here is the output:

Record field annotations:
@record.test.MyAnnotation()
@record.test.MyAnnotation()

Here are annotations on field accessors:

System.out.println("Field accessor annotations:");
Arrays.stream(RectangleRecord.class.getDeclaredMethods())
        .filter(m -> Arrays.stream(RectangleRecord.class.getRecordComponents()).map(c -> c.getName()).anyMatch(n -> n.equals(m.getName())))
        .forEach(m -> Arrays.asList(m.getDeclaredAnnotations())
        .forEach(System.out::println));

And here is the output:

Field accessor annotations:
@record.test.MyAnnotation()
@record.test.MyAnnotation()

Finally, here are annotations on record constructor parameters:

System.out.println("Constructor parameter annotations:");
Arrays.asList(RectangleRecord.class.getDeclaredConstructors())
        .forEach(c -> Arrays.asList(c.getParameters())
        .forEach(p -> Arrays.asList(p.getDeclaredAnnotations())
        .forEach(System.out::println)));

And the following is the output:

Constructor parameter annotations:
@record.test.MyAnnotation()
@record.test.MyAnnotation()

As seen above, if you put an annotation on a record component, it will be automatically propagated to the derived artifacts. However, this behavior is not always desirable, because you might want the annotation to be present only on record fields, for instance. That’s why you can change this behavior by specifying the target of an annotation.

For example, if you want an annotation to be present only on the record fields, you would have to add a Target annotation with a parameter of ElementType.FIELD:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Target(ElementType.FIELD)
@Retention(RetentionPolicy.RUNTIME)
public @interface MyAnnotation {
}

Rerunning the above code yields this output:

Record component annotations:
Record field annotations:
@record.test.MyAnnotation()
@record.test.MyAnnotation()
Field accessor annotations:
Constructor parameter annotations:

As you can see, the annotation is now present only on the record fields. In the same way, you can state that the annotation should be present only on the accessors (ElementType.METHOD), or the constructor parameters (ElementType.PARAMETER), or any combination of those two and the record fields.
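
For example, to keep the annotation on both the fields and the accessors, declare both targets:

@Target({ElementType.FIELD, ElementType.METHOD})
@Retention(RetentionPolicy.RUNTIME)
public @interface MyAnnotation {
}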

Be aware that in any of these cases, you must always put the annotation on the record components, because the fields, accessors, and constructor parameters simply don't exist in a record declaration. Those are generated and annotated (according to the element types specified in the annotation declaration) by the compiler and, thus, are present only in the compiled record class.

Serialization and deserialization


Because they are ordinary classes, records can also be serialized and deserialized. The only thing you need to do is have the record implement the java.io.Serializable interface, for example:

record RectangleRecord(double length, double width) implements Serializable {
}

Here’s the code to serialize a record:

private static final List<RectangleRecord> SAMPLE_RECORDS = List.of(
        new RectangleRecord(1, 5),
        new RectangleRecord(2, 4),
        new RectangleRecord(3, 3),
        new RectangleRecord(4, 2),
        new RectangleRecord(5, 1)
);

try (
        var fos = new FileOutputStream("C:/Temp/Records.txt");
        var oos = new ObjectOutputStream(fos)) {
    oos.writeObject(SAMPLE_RECORDS);
}

And the following code can be used to deserialize a record:

try (
        var fis = new FileInputStream("C:/Temp/Records.txt");
        var ois = new ObjectInputStream(fis)) {
    List<RectangleRecord> records = (List<RectangleRecord>) ois.readObject();
    records.forEach(System.out::println);
    assertEquals(SAMPLE_RECORDS, records);
}

This is the output:

RectangleRecord[length=1.0, width=5.0]
RectangleRecord[length=2.0, width=4.0]
RectangleRecord[length=3.0, width=3.0]
RectangleRecord[length=4.0, width=2.0]
RectangleRecord[length=5.0, width=1.0]

However, there’s one major difference compared to ordinary classes: When a record is deserialized, its fields are set, via the record constructor, to the values deserialized from the stream. By contrast, a normal class is instantiated without any of its own constructors being invoked (only the no-argument constructor of its first nonserializable superclass runs), and then its fields are set via reflection to the values deserialized from the stream.

Thus, records are deserialized using their constructor. This behavior allows you to add invariants to the constructor to check the validity of the deserialized data. Since this is not possible with normal classes, there’s always a certain risk of deserializing bad or even hazardous data, which should not be underestimated, especially if the data comes from external sources. Here’s RectangleRecord with such invariants added to its constructor:

import java.io.Serializable;

public record RectangleRecord(double length, double width) implements Serializable {

    public RectangleRecord {
        StringBuilder builder = new StringBuilder();
        if (length <= 0) {
            builder.append("\nLength must be greater than zero: ").append(length);
        }
        if (width <= 0) {
            builder.append("\nWidth must be greater than zero: ").append(width);
        }
        if (builder.length() > 0) {
            throw new IllegalArgumentException(builder.toString());
        }
    }

}

Note that this code uses the record’s compact constructor, so there’s no need to specify the parameters or to set the record fields explicitly. If you now deserialize the previously serialized records, every single instance is guaranteed to have a valid state; otherwise, an IllegalArgumentException is thrown by the record constructor.

You can verify this by modifying the serialized data of just one record in such a way that it doesn’t conform to the validation logic anymore: RectangleRecord[length=0.0, width=-5.0].

If you now execute the deserialization code from above, you’ll get the expected IllegalArgumentException:

java.lang.IllegalArgumentException: 
Length must be greater than zero: 0.0
Width must be greater than zero: -5.0
  at record.test.RectangleRecord.<init>(RectangleRecord.java:18)
  at java.base/java.io.ObjectInputStream.readRecord(ObjectInputStream.java:2320)

If you tried the same process with a normal class, no exception would occur, since the class’s constructor wouldn’t be called. The object would be deserialized with the erroneous data, without anyone noticing.

Look at the following RectangleClass, which is the counterpart of the RectangleRecord:

import java.io.Serializable;
import java.util.Objects;

public class RectangleClass implements Serializable {

    private final double width;
    private final double length;

    public RectangleClass(double width, double length) {
        StringBuilder builder = new StringBuilder();
        if (length <= 0) {
            builder.append("\nLength must be greater than zero: ").append(length);
        }
        if (width <= 0) {
            builder.append("\nWidth must be greater than zero: ").append(width);
        }
        if (builder.length() > 0) {
            throw new IllegalArgumentException(builder.toString());
        }
        this.width = width;
        this.length = length;
    }

    @Override
    public String toString() {
        return "RectangleClass[" + "width=" + width + ", length=" + length + ']';
    }

    @Override
    public int hashCode() {
        return Objects.hash(width, length);
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj) {
            return true;
        }
        if (obj == null) {
            return false;
        }
        if (getClass() != obj.getClass()) {
            return false;
        }
        RectangleClass other = (RectangleClass) obj;
        return Objects.equals(length, other.length) && Objects.equals(width, other.width);
    }

    public double width() {
        return width;
    }

    public double length() {
        return length;
    }

}

Although the constructor of the RectangleClass contains the same validation logic as the constructor of the RectangleRecord, it is not called during the deserialization process and, therefore, cannot prevent the creation of objects with invalid state.
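
For completeness, the classic mitigation for a normal class (not shown in the article) is a custom readObject method that revalidates the state after default deserialization. A sketch of what could be added to RectangleClass:

// Called by the serialization runtime during deserialization, bypassing the constructor.
private void readObject(java.io.ObjectInputStream in)
        throws java.io.IOException, ClassNotFoundException {
    in.defaultReadObject(); // restore the fields from the stream
    if (length <= 0 || width <= 0) {
        throw new java.io.InvalidObjectException(
                "Invalid rectangle: length=" + length + ", width=" + width);
    }
}

With records, that extra hook is unnecessary, because the canonical constructor is always on the deserialization path.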

Marshaling and unmarshaling


Just like normal classes, records can also be unmarshaled from and marshaled to a format of your choice, such as JSON, XML, or CSV. If you’d like to use an existing library to do so, be aware that it has to access the class fields via the Field.set(Object obj, Object value) method and not via the getter and setter methods, because records don’t have those methods.

However, you should know about some restrictions. In JDK 15’s second preview of Java records, a record’s field can no longer be set via the Field.set(Object obj, Object value) method (which was possible in JDK 14).

The reason for this restriction is to ensure the immutability of records by preventing this kind of backdoor manipulation by libraries. However, most of the current libraries aren’t aware of records yet. The libraries therefore treat records as ordinary classes and try to set the field values via the Field.set(Object obj, Object value) method. That’s not going to work.

Here is an example that uses the popular Gson library to demonstrate the above restriction. With this library, marshaling to JSON should work without any problem because Gson reads the record data using the Field.get(Object obj) method:

private static final List<RectangleRecord> SAMPLE_RECORDS = List.of(
        new RectangleRecord(1, 5),
        new RectangleRecord(2, 4),
        new RectangleRecord(3, 3),
        new RectangleRecord(4, 2),
        new RectangleRecord(5, 1)
);

try (Writer writer = new FileWriter("C:/Temp/Records.json")) {
    new Gson().toJson(SAMPLE_RECORDS, writer);
}

And here is the file output:

[{"length":1.0,"width":5.0},{"length":2.0,"width":4.0},{"length":3.0,"width":3.0},{"length":4.0,"width":2.0},{"length":5.0,"width":1.0}]

But a problem will occur during the unmarshaling process in which Gson tries to set the field values using the Field.set(Object obj, Object value) method:

try (Reader reader = new FileReader("C:/Temp/Records.json")) {
    List<RectangleRecord> records = new Gson().fromJson(reader, new TypeToken<List<RectangleRecord>>(){}.getType());
    records.forEach(System.out::println);
}

The output:

java.lang.IllegalAccessException: Can not set final double field record.test.RectangleRecord.length to java.lang.Double
  at java.base/jdk.internal.reflect.UnsafeFieldAccessorImpl.throwFinalFieldIllegalAccessException(UnsafeFieldAccessorImpl.java:76)
  at java.base/jdk.internal.reflect.UnsafeFieldAccessorImpl.throwFinalFieldIllegalAccessException(UnsafeFieldAccessorImpl.java:80)
  at java.base/jdk.internal.reflect.UnsafeQualifiedDoubleFieldAccessorImpl.set(UnsafeQualifiedDoubleFieldAccessorImpl.java:79)
  at java.base/java.lang.reflect.Field.set(Field.java:793)

Note that write access to the RectangleRecord.length field has been prevented by throwing a java.lang.IllegalAccessException. This means that the current libraries will need to be changed to take this restriction into account when dealing with records.

At the present time, the only way to set the field values of a record is by using its constructor. And if the constructor arguments are all immutable themselves (for example, when using primitive data types), it will indeed become very hard to change a record’s state. Fortunately, this restriction also helps ensure consistent state validation of records, as discussed in the earlier section about deserialization.

If you currently have to unmarshal records from JSON or any other format, you’ll probably have to write your own unmarshaler. Most libraries won’t support explicit marshaling or unmarshaling for records until they have become a regular Java feature.
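
With Gson, for example, such an unmarshaler can be registered as a JsonDeserializer that funnels the parsed values through the record's canonical constructor, so the invariants from the earlier sections are enforced again. A sketch:

import com.google.gson.Gson;
import com.google.gson.GsonBuilder;
import com.google.gson.JsonDeserializer;
import com.google.gson.JsonObject;

Gson gson = new GsonBuilder()
        .registerTypeAdapter(RectangleRecord.class,
                (JsonDeserializer<RectangleRecord>) (json, type, context) -> {
                    JsonObject object = json.getAsJsonObject();
                    // Route the values through the canonical constructor,
                    // so the record's validation logic runs during unmarshaling.
                    return new RectangleRecord(
                            object.get("length").getAsDouble(),
                            object.get("width").getAsDouble());
                })
        .create();

A Gson instance built this way can replace the plain new Gson() in the unmarshaling code above; invalid data is then rejected with the familiar IllegalArgumentException.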

As long as records are not yet a regular feature, they are still subject to change. Record field access has been restricted in JDK 15 by no longer allowing the fields to be changed via reflection, something that was still possible in JDK 14 (the first preview of records). That’s a change in behavior that should not be neglected, especially not by library designers, as everyone looks forward to JDK 16.

Bean validation


You may think that records can’t be subject to the bean validation specification (also known as JSR 303) because they do not adhere to the JavaBeans standard. That’s only partly true. A record’s state cannot be validated through its getters or setters, because records don’t have any getters or setters. However, a record’s state can very well be validated via its constructor parameters or its fields.

The Bean Validation API defines a way for expressing and validating constraints using Java annotations. Because these annotations are reusable, they help to avoid code duplication and, thus, contribute to more-concise and less error-prone code. By putting constraint annotations on the components of a record, you can enforce constraint validation and guarantee that a record’s state is always valid. Since records are immutable, you need to validate the constraints only once when you create a record instance. If no constraints are violated, the created instance always meets its invariants.

The following example shows how a record's state can be validated. To do so, I'm using the bean validation reference implementation, Hibernate Validator.

But first, I'll add the necessary dependencies to the Maven pom.xml:

<dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-validator</artifactId>
    <version>6.1.5.Final</version>
</dependency>
<dependency>
    <groupId>org.glassfish</groupId>
    <artifactId>javax.el</artifactId>
    <version>3.0.0</version>
</dependency>

Note that Hibernate Validator also requires an implementation of the Expression Language (hence the second dependency) to evaluate dynamic expressions in constraint violation messages.

Now, I'll add some validation constraints to the RectangleRecord by means of the @javax.validation.constraints.Positive annotation, which checks whether the annotated element is strictly positive (zero values are considered invalid):

import javax.validation.constraints.Positive;

public record RectangleRecord(
    @Positive(message = "Length is ${validatedValue} but must be greater than zero.") double length,
    @Positive(message = "Width is ${validatedValue} but must be greater than zero.") double width
) {}

To be able to validate the state of a record, you need an instance of javax.validation.Validator. But to get a Validator instance, you first have to create a ValidatorFactory, for example:

ValidatorFactory factory = Validation.buildDefaultValidatorFactory();
Validator validator = factory.getValidator();

Now you can validate the state of a record instance as follows:

RectangleRecord rectangle = new RectangleRecord(0, -5);
Set<ConstraintViolation<RectangleRecord>> constraintViolations = validator.validate(rectangle);
constraintViolations.stream().map(ConstraintViolation::getMessage).forEach(System.out::println);

Here’s the output:

Length is 0.0 but must be greater than zero.
Width is -5.0 but must be greater than zero.

The previous example demonstrates that record classes can be validated like normal classes using the Bean Validation API. The only difference is that, because records do not conform to JavaBeans conventions, their state cannot be validated through getters or setters.

Wouldn’t it be better to check the validity of an object’s state during its construction process and, thus, avoid the creation of an instance with incorrect data? Well, this is possible by calling the constraint validation logic in the record’s constructor itself.

In order not to have to add the above validation code to every single record constructor, I am going to implement it using an interface. Records cannot extend any other class to inherit its methods, because every record implicitly extends java.lang.Record. But similar behavior can be achieved by declaring a default method in an interface, for example:

import java.lang.reflect.Constructor;
import java.util.Set;
import java.util.stream.Collectors;
import javax.validation.ConstraintViolation;
import javax.validation.ConstraintViolationException;
import javax.validation.Validator;

public interface Validatable {

    default void validate(Object... args) {
        Validator validator = ValidatorProvider.getValidator();
        // A record declares a single canonical constructor unless additional ones are added explicitly.
        Constructor<?> constructor = getClass().getDeclaredConstructors()[0];
        Set<? extends ConstraintViolation<?>> violations = validator.forExecutables()
                .validateConstructorParameters(constructor, args);
        if (!violations.isEmpty()) {
            String message = violations.stream()
                    .map(ConstraintViolation::getMessage)
                    .collect(Collectors.joining(System.lineSeparator()));
            throw new ConstraintViolationException(message, violations);
        }
    }

}

The following class provides the required Validator instance:

import javax.validation.Validation;
import javax.validation.Validator;
import javax.validation.ValidatorFactory;

public class ValidatorProvider {

    private static final Validator VALIDATOR;

    // Building a ValidatorFactory is expensive, so a single Validator
    // instance is created once and shared.
    static {
        ValidatorFactory factory = Validation.buildDefaultValidatorFactory();
        VALIDATOR = factory.getValidator();
    }

    public static Validator getValidator() {
        return VALIDATOR;
    }

}

Now, everything’s in place to call the interface’s validate method in my record constructor. To do so, I have to specify an explicit constructor, which allows me to call the validate method:

import javax.validation.constraints.Positive;

public record RectangleRecord(double length, double width) implements Validatable {

    public RectangleRecord(
            @Positive(message = "Length is ${validatedValue} but must be greater than zero.") double length,
            @Positive(message = "Width is ${validatedValue} but must be greater than zero.") double width
        ) {
        validate(length, width);
        this.length = length;
        this.width = width;
    }

}

Note that when you provide an explicit constructor, you have to annotate the constructor parameters and not the components of the record. You have previously seen that the annotations added to the components are also propagated to the derived fields, accessors, and constructor parameters. Regarding the constructor parameters, this is true only as long as you do not provide an explicit constructor.
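
As a side note, the compact form of the canonical constructor can reduce this boilerplate: its parameters are declared implicitly, so the component annotations should propagate to them just as they do to the fields and accessors. Here's a sketch of the same record in compact form (my variation, not from the original example):

import javax.validation.constraints.Positive;

public record RectangleRecord(
    @Positive(message = "Length is ${validatedValue} but must be greater than zero.") double length,
    @Positive(message = "Width is ${validatedValue} but must be greater than zero.") double width
) implements Validatable {

    // Compact canonical constructor: parameters and field assignments are implicit.
    public RectangleRecord {
        validate(length, width);
    }

}

Even in this compact form, however, each record still has to implement Validatable and remember to call validate itself.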

Now, I’ll try to create a RectangleRecord instance with an invalid length and width:

RectangleRecord rectangle = new RectangleRecord(0, -5);

Here’s the output:

javax.validation.ConstraintViolationException: 
Length is 0.0 but must be greater than zero.
Width is -5.0 but must be greater than zero.
  at record.test.Validatable.validate(Validatable.java:21)
  at record.test.RectangleRecord.<init>(RectangleRecord.java:11)

So, with the validation logic already being called at instantiation time (in the record constructor), you can prevent the creation of an object with invalid data. In the first bean validation example above, you first had to create an object with invalid state before you could validate it. But that's exactly what you want to avoid: creating records with invalid state.

However, by providing an explicit canonical constructor, you have to spell out all the constructor parameters and set all the record field values manually. Isn't that again quite a lot of the clutter you are trying to avoid by using records? In the following section, I'm going to show how you can omit an explicit constructor declaration altogether and still get the record's data validated during the instantiation process.

Byte Buddy


Byte Buddy is a library for creating and modifying Java classes at runtime, without the need for a compiler. Unlike the code generation utilities included in the JDK (such as the dynamic proxy facility, java.lang.reflect.Proxy), Byte Buddy allows you to create arbitrary classes, and it does not require the implementation of an interface to create runtime proxies.

In addition, it offers a convenient API. Using the API, you can change classes either manually, using a Java agent, or during a build. You can use the library to manipulate existing classes, create new classes on demand, or intercept method calls, for instance. Using Byte Buddy does not require an understanding of Java bytecode or the class file format; however, you can define custom bytecode if needed.
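
To give you a feel for the API, here is a minimal sketch along the lines of Byte Buddy's well-known hello-world example: it creates a subclass of Object at runtime and intercepts toString to return a fixed value:

Class<?> dynamicType = new ByteBuddy()
        .subclass(Object.class)
        .method(ElementMatchers.named("toString"))
        .intercept(FixedValue.value("Hello World!"))
        .make()
        .load(ByteBuddy.class.getClassLoader())
        .getLoaded();

// The generated class behaves like any hand-written class:
System.out.println(dynamicType.getDeclaredConstructor().newInstance());  // prints "Hello World!"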

The API was designed to be nonintrusive, so Byte Buddy does not leave any traces in class files after the code manipulation has taken place. That’s why the generated classes do not require Byte Buddy on the classpath.

Byte Buddy is a lightweight library that depends only on the visitor API of the ASM Java bytecode parser library, so it offers excellent runtime performance.

What I am interested in here is code manipulation at build time, which can be achieved easily by using a dedicated Maven plugin that ships with the Byte Buddy library.

As you probably know, a Maven build lifecycle consists of phases. One of these phases is the compile phase, after which the Byte Buddy plugin hooks in and changes the Java bytecode according to your instructions. Hence, there is no code manipulation at runtime that could affect runtime performance.

I’ll start by adding the required dependencies for the Byte Buddy library:

<dependency>
    <groupId>net.bytebuddy</groupId>
    <artifactId>byte-buddy</artifactId>
    <version>1.10.14</version>
</dependency>

The following XML adds the Byte Buddy Maven plugin to the build lifecycle:

<plugin>
    <groupId>net.bytebuddy</groupId>
    <artifactId>byte-buddy-maven-plugin</artifactId>
    <version>1.10.14</version>
    <executions>
        <execution>
            <goals>
                <goal>transform</goal>
            </goals>
        </execution>
    </executions>
    <configuration>
        <transformations>
            <transformation>
                <plugin>record.test.RecordValidationPlugin</plugin>
            </transformation>
        </transformations>
    </configuration>
</plugin>

The Byte Buddy Maven plugin uses a custom class called RecordValidationPlugin that implements the net.bytebuddy.build.Plugin interface, for example:

import java.io.IOException;
import javax.validation.Constraint;

import static net.bytebuddy.matcher.ElementMatchers.hasAnnotation;
import static net.bytebuddy.matcher.ElementMatchers.annotationType;

import net.bytebuddy.ByteBuddy;
import net.bytebuddy.build.Plugin;
import net.bytebuddy.description.method.MethodDescription;
import net.bytebuddy.description.type.TypeDescription;
import net.bytebuddy.dynamic.ClassFileLocator;
import net.bytebuddy.dynamic.DynamicType.Builder;
import net.bytebuddy.dynamic.scaffold.TypeValidation;
import net.bytebuddy.implementation.MethodDelegation;
import net.bytebuddy.implementation.SuperMethodCall;

public class RecordValidationPlugin implements Plugin {

    @Override
    public boolean matches(TypeDescription target) {
        return target.isRecord() && target.getDeclaredMethods()
                .stream()
                .anyMatch(m -> m.isConstructor() && hasConstrainedParameters(m));
    }

    @Override
    public Builder<?> apply(Builder<?> builder, TypeDescription typeDescription, ClassFileLocator classFileLocator) {
        try {
            builder = new ByteBuddy().with(TypeValidation.DISABLED).rebase(Class.forName(typeDescription.getName()));
        } catch (ClassNotFoundException ex) {
            throw new RuntimeException(ex);
        }
        return builder.constructor(this::hasConstrainedParameters)
                .intercept(SuperMethodCall.INSTANCE.andThen(MethodDelegation.to(RecordValidationInterceptor.class)));
    }

    private boolean hasConstrainedParameters(MethodDescription m) {
        return m.getParameters()
                .asDefined()
                .stream()
                .anyMatch(p -> !p.getDeclaredAnnotations()
                .asTypeList()
                .filter(hasAnnotation(annotationType(Constraint.class)))
                .isEmpty());
    }

    @Override
    public void close() throws IOException {
    }

}

The interface has three methods: matches, apply, and close. I don’t need to implement the last one.

The first method, matches, is used by Byte Buddy to find all the classes whose code I want to change. I need only the record classes that have a constructor with constrained parameters (that is, parameters carrying bean validation annotations). This is where Byte Buddy's TypeDescription.isRecord() method comes into play, mirroring the new Class.isRecord() method.
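
For reference, the language-level counterpart behaves as you'd expect:

System.out.println(RectangleRecord.class.isRecord());  // true
System.out.println(String.class.isRecord());           // false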

The second method, apply, applies the changes to the bytecode generated during the compile phase. To each record constructor that has constrained parameters, it adds a call to a method in a custom class called RecordValidationInterceptor.

Also, note that I have to use a custom Builder instance as follows, because Java records are still a preview feature and, therefore, type validation needs to be disabled:

builder = new ByteBuddy().with(TypeValidation.DISABLED).rebase(Class.forName(typeDescription.getName()));

And here’s the code for the RecordValidationInterceptor:

import java.lang.reflect.Constructor;
import java.util.Set;
import java.util.stream.Collectors;
import javax.validation.ConstraintViolation;
import javax.validation.ConstraintViolationException;
import javax.validation.Validation;
import javax.validation.Validator;
import javax.validation.ValidatorFactory;
import net.bytebuddy.implementation.bind.annotation.AllArguments;
import net.bytebuddy.implementation.bind.annotation.Origin;

public class RecordValidationInterceptor {

    private static final Validator VALIDATOR;

    static {
        ValidatorFactory factory = Validation.buildDefaultValidatorFactory();
        VALIDATOR = factory.getValidator();
    }

    public static <T> void validate(@Origin Constructor<T> constructor, @AllArguments Object[] args) {
        Set<ConstraintViolation<T>> violations = VALIDATOR.forExecutables()
                .validateConstructorParameters(constructor, args);
        if (!violations.isEmpty()) {
            String message = violations.stream()
                    .map(ConstraintViolation::getMessage)
                    .collect(Collectors.joining(System.lineSeparator()));
            throw new ConstraintViolationException(message, violations);
        }
    }

}

As a result of the code manipulation, the validate method is called from the record constructor, receiving the Constructor object along with the corresponding argument values, which it passes to the bean validator instance.

You can give the method any name; Byte Buddy will identify it with the help of its own annotations such as @Origin or @AllArguments.

Now I’ll build the project using the previously declared RectangleRecord with validation constraints added to the components, for example:

import javax.validation.constraints.Positive;

public record RectangleRecord(
    @Positive(message = "Length is ${validatedValue} but must be greater than zero.") double length,
    @Positive(message = "Width is ${validatedValue} but must be greater than zero.") double width
) {}

After the build has completed, you can inspect the resulting bytecode. To do so, execute the following javap command (which disassembles a class file) from the command line:

javap -c RectangleRecord

In the following, I show only the constructor bytecode:

public record.test.RectangleRecord(double, double);
    Code:
       0: aload_0
       1: dload_1
       2: dload_3
       3: aconst_null
       4: invokespecial #75                 // Method "<init>":(DDLrecord/test/RectangleRecord$auxiliary$Vd34tcl4;)V
       7: getstatic     #79                 // Field cachedValue$RxYQQtAf$d63lk91:Ljava/lang/reflect/Constructor;
      10: iconst_2
      11: anewarray     #81                 // class java/lang/Object
      14: dup
      15: iconst_0
      16: dload_1
      17: invokestatic  #87                 // Method java/lang/Double.valueOf:(D)Ljava/lang/Double;
      20: aastore
      21: dup
      22: iconst_1
      23: dload_3
      24: invokestatic  #87                 // Method java/lang/Double.valueOf:(D)Ljava/lang/Double;
      27: aastore
      28: invokestatic  #93                 // Method record/test/RecordValidationInterceptor.validate:(Ljava/lang/reflect/Constructor;[Ljava/lang/Object;)V
      31: return

Notice the last instruction just before the return statement. That’s where the method RecordValidationInterceptor.validate is called.

Now I'll test the code transformed by Byte Buddy:

RectangleRecord rectangle = new RectangleRecord(0, -5);

Here’s the output:

javax.validation.ConstraintViolationException: 
Length is 0.0 but must be greater than zero.
Width is -5.0 but must be greater than zero.
  at record.test.RecordValidationInterceptor.validate(RecordValidationInterceptor.java:32)
  at record.test.RectangleRecord.<init>(RectangleRecord.java)

As you can see, the creation of a RectangleRecord instance with invalid data has been prevented just by using regular bean validation constraints on the record components. The Byte Buddy plugin thus helps you enforce Java record invariants by means of bean validation.

Source: oracle.com