Sunday, May 29, 2022
You don’t need an application server to run Jakarta EE applications
Depending on the requirements, you can do well with Helidon, Piranha, or Hammock.
Jakarta EE (formerly Java EE) and the concept of an application server have been intertwined for so long that it’s generally thought that Jakarta EE implies an application server. This article will look at whether that’s still the case—and, if Jakarta EE isn’t an application server, what is it?
Let’s start with definitions. The various Jakarta EE specifications use the phrase application server but never specifically define it. The phrase is often used interchangeably with terms such as runtime, container, or platform. For instance, the specification documents for the following specs contain statements such as these:
◉ Jakarta authorization: “The Application server must bundle or install the PolicyContext class…”
◉ Jakarta messaging: “A ServerSessionPool object is an object implemented by an application server to provide a pool of ServerSession objects…”
◉ Jakarta connectors: “This method is called by an application server (that is capable of lazy connection association optimization) in order to dissociate a ManagedConnection…”
The Jakarta EE 9 platform specification doesn’t explicitly define an application server either, but section 2.12.1 does say the following:
A Jakarta EE Product Provider is the implementor and supplier of a Jakarta EE product that includes the component containers, Jakarta EE platform APIs, and other features defined in this specification. A Jakarta EE Product Provider is typically an application server vendor, a web server vendor, a database system vendor, or an operating system vendor. A Jakarta EE Product Provider must make available the Jakarta EE APIs to the application components through containers.
The term container is equivalent to engine, and very early J2EE documents speak about the “Servlet Engine.”
Thus, the various specification documents do not really specify what an application server is, and when they do mention it, it’s basically the same as a container or runtime. In practice, when someone speaks about an application server, this means something that includes all of the following:
◉ It is separately installed on the server or virtual machine.
◉ It listens to networking ports after it is started (and typically includes an HTTP server).
◉ It acts as a deployment target for applications (typically in the form of a well-defined archive), which can be both deployed and undeployed.
◉ It runs multiple applications at the same time (typically weakly isolated from each other in some way).
◉ It has facilities for provisioning resources (such as database connections, messaging queues, and identity stores) for the application to consume.
◉ It contains a full-stack API and the API’s implementation for consumption by applications.
In addition, an application server may include the following:
◉ A graphical user interface or command-line interface to administer the application server
◉ Facilities for clustering (so load and data can be distributed over a network)
Pushback against the full-fledged application server
The application server model has specific advantages when shrink-wrapped software needs to be deployed into an organization and integrated with other software running there. In such situations, for example, an application server can eliminate the need for users to authenticate themselves with every application. Instead, the application server might use a central Lightweight Directory Access Protocol (LDAP) service as its identity store for employees, allowing applications running on the application server to share that service.
This model, however, can be less ideal when an organization develops and operates its own public-facing web applications. In that case, the application needs to exert more control. For instance, instead of using the same LDAP service employees use, the organization’s customers would have a separate registration system and login screen implemented by the developers.
Here, an application server can be an obstacle because part of the security would need to be done outside of the application, often by an IT operations (Ops) team, who may not even know the application developers.
For example, an application server that hosts production applications is often shielded from the developer (Dev) team and is touched by only the Ops team. However, some problems on the server really belong in the Dev domain because they apply to in-house application development. This can lead to tickets that bounce between the Dev and Ops teams as the Dev team tries to steer the actions that only the Ops team is allowed to perform. (Note: The DevOps movement is an attempt to solve this long-standing problem.)
Another issue concerns the installed libraries within the application server. The exact versions of these libraries, and the potential need to patch or update them, are often a Dev concern. For instance, the Ops team manages routers, networks, Linux servers, and so on; that team might not be aware of the intricate details of Mojarra 2.3.1 versus Mojarra 2.3.2 or of the need to patch the FacesServlet implementation.
Modern obsolescence: Docker and Kubernetes
The need for an installed application server as the prime mechanism to share physical server resources started to diminish somewhat with the rise of virtual servers. A decade ago, you might see teams deploying a single application to a single dedicated application server running inside a virtual server. This was uncommon enough that in 2013 the well-known Java EE consultant Adam Bien wrote a dedicated blog post about the practice, which received some pushback. One of the arguments against Bien’s idea was that running an entire (virtual) operating system for a single application would waste resources.
Almost exactly at the same time as Bien wrote his post, the Docker container platform was released. Containers run on a single operating system and, therefore, largely take the resource-wasting argument away. While containers themselves had been around since the 1970s, Docker added a deployment model and a central repository for container images (Docker Hub) that exploded in popularity.
With a deployment tool at hand, and the knowledge that fewer resources are wasted, deploying an application server running a single application finally went mainstream. Notably, in this setup the native deployment feature and, above all, the undeployment feature of an application server are not really used, because most Docker deployments are static.
In 2015 the Kubernetes container orchestration system appeared, and it quickly became popular for managing many instances of (mostly) Docker containers. For Java EE application servers, Kubernetes means that Java EE’s native clustering facilities are not really used, because tasks such as load balancing are now managed by Kubernetes.
Around the same time, the serverless microservices execution model became more popular with cloud providers. This meant that the deployment unit didn’t need its own HTTP server. Instead, the deployment unit contained code that is called by the serverless server of the cloud provider. The result was that for such an environment, the built-in HTTP server of a Java EE or Jakarta EE application server is not needed anymore. Obviously, such code needs to provide an interface the cloud provider can call. There’s currently no standard for this, though Oracle is working with the Cloud Native Computing Foundation on a specification for this area.
The Jakarta EE APIs
Without deployments, without running multiple applications, without an HTTP server, and without clustering, a Jakarta EE application server is essentially reduced to the Jakarta EE APIs.
Interestingly, this is how the Servlet API (the first Java EE API) began. In its early versions, servlet containers had no notion of a deployed application archive and they didn’t have a built-in HTTP server. Instead, servlets were functions that were individually added to a servlet container, which was then paired with an existing HTTP server.
Despite some initial resistance from within the Java EE community, the APIs that touched the managed-container application server model started to transition to a life outside the application server. This included HttpServletRequest#login, which began the move away from the strict container-managed security model, and @DataSourceDefinition, which allowed an application to define the same type of data source (and connection pool) that before could be defined only on the application server.
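To make those two APIs concrete, here is a minimal sketch of both in a single servlet; the JNDI name, data source class, and request parameters are illustrative, and the jakarta.* package names assume Jakarta EE 9.

import java.io.IOException;

import jakarta.annotation.sql.DataSourceDefinition;
import jakarta.servlet.ServletException;
import jakarta.servlet.annotation.WebServlet;
import jakarta.servlet.http.HttpServlet;
import jakarta.servlet.http.HttpServletRequest;
import jakarta.servlet.http.HttpServletResponse;

// The application defines its own data source (and connection pool) instead of
// relying on one configured on the application server.
@DataSourceDefinition(
    name = "java:app/jdbc/exampleDS",           // illustrative JNDI name
    className = "org.h2.jdbcx.JdbcDataSource",  // illustrative data source class
    url = "jdbc:h2:mem:example")
@WebServlet("/login")
public class LoginServlet extends HttpServlet {

    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // Programmatic login instead of purely container-managed security
        request.login(request.getParameter("user"), request.getParameter("password"));
        response.getWriter().println("Logged in as " + request.getRemoteUser());
    }
}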
In Java EE 8 (released in 2017), application security received a major overhaul with Jakarta Security, which had an explicit goal of making security fully configurable without any application server specifics.
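As a sketch of that idea (the class, realm name, and hardcoded credentials are purely illustrative, and the Jakarta EE 9 package names are used), an application can declare its own authentication mechanism and identity store with nothing more than annotations on a CDI bean:

import java.util.Set;

import jakarta.enterprise.context.ApplicationScoped;
import jakarta.security.enterprise.authentication.mechanism.http.BasicAuthenticationMechanismDefinition;
import jakarta.security.enterprise.credential.Credential;
import jakarta.security.enterprise.credential.UsernamePasswordCredential;
import jakarta.security.enterprise.identitystore.CredentialValidationResult;
import jakarta.security.enterprise.identitystore.IdentityStore;

// Activates HTTP Basic authentication for the application; no server configuration needed
@BasicAuthenticationMechanismDefinition(realmName = "example")
@ApplicationScoped
public class InMemoryIdentityStore implements IdentityStore {

    @Override
    public CredentialValidationResult validate(Credential credential) {
        if (credential instanceof UsernamePasswordCredential) {
            UsernamePasswordCredential login = (UsernamePasswordCredential) credential;
            if (login.compareTo("jane", "secret")) {   // illustrative hardcoded user
                return new CredentialValidationResult("jane", Set.of("user"));
            }
        }
        return CredentialValidationResult.INVALID_RESULT;
    }
}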
For application servers, several choices are available, such as Oracle WebLogic Server.
What if you’re not looking for a full application server? Because there is tremendous value in the Jakarta EE APIs themselves, several products have sprung up that are not application servers but that do provide Jakarta EE APIs. Among these products are Helidon, Piranha Cloud, and Hammock, which I’ll examine next.
Project Helidon
Project Helidon is an interesting runtime that’s not an application server. Of the three platforms I’ll examine, it’s the only one fully suitable for production use today.
Helidon is from Oracle, which is also known for Oracle WebLogic Server—one of the first application servers out there (going back all the way to the 1990s) and one that best embodies the application server concept.
Helidon is a lightweight set of libraries that doesn’t require an application server. It comes in two variants: Helidon SE and Helidon MP (MicroProfile).
Helidon SE. Helidon SE does not use any of the Servlet APIs but instead uses its own lightweight API, which is heavily inspired by Java functional programming. The following shows a very minimal example:
WebServer.create(
    Routing.builder()
        .get(
            "/hello",
            (request, response) -> response.send("Hi!"))
        .build())
    .start();
Here’s an example that uses a full handler class, which is somewhat like using a servlet.
WebServer.create(
    Routing.builder()
        .register(
            "/hello",
            rules -> rules.get("/", new HelloHandler()))
        .build())
    .start();

public class HelloHandler implements Handler {
    @Override
    public void accept(ServerRequest req, ServerResponse res) {
        res.send("Hi");
    }
}
In the Helidon native API, many types are functional types (one abstract method), so despite a class being used for the Handler in this example to contrast it to a servlet, a lambda could have been used. By default, Helidon SE launches an HTTP server in these examples at a random port, but it can be configured to use a specific port as well.
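For illustration, here is a minimal sketch, assuming the Helidon SE 2.x io.helidon.webserver API, of the same route written as a lambda, with a fixed port configured on the builder:

import io.helidon.webserver.Routing;
import io.helidon.webserver.WebServer;

// Same /hello route as above, with the handler written as a lambda and port 8080 fixed
WebServer.builder(
        Routing.builder()
            .get("/hello", (req, res) -> res.send("Hi!"))
            .build())
    .port(8080)
    .build()
    .start();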
Helidon MP. With Helidon MP, you declare a dependency on Helidon in the pom.xml file and use the Helidon parent POM as your project’s parent. You then write normal MicroProfile/Jakarta EE code (for the Jakarta EE libraries that Helidon MP supports). After the build, you get a runnable JAR file including your code. A minimal example of such a pom.xml file looks as follows:
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

    <modelVersion>4.0.0</modelVersion>

    <parent>
        <groupId>io.helidon.applications</groupId>
        <artifactId>helidon-mp</artifactId>
        <version>2.2.2</version>
        <relativePath />
    </parent>

    <groupId>com.example</groupId>
    <artifactId>example</artifactId>
    <version>1.0</version>

    <dependencies>
        <dependency>
            <groupId>io.helidon.microprofile.bundles</groupId>
            <artifactId>helidon-microprofile</artifactId>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-dependency-plugin</artifactId>
                <executions>
                    <execution>
                        <id>copy-libs</id>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</project>
This runnable JAR file neither contains nor loads the Helidon libraries itself. Instead, the JAR’s META-INF/MANIFEST.MF references those libraries in the libs folder relative to where the executable JAR file sits.
After building, for example with mvn package, you can run the generated JAR file using the following command:
java -jar target/example.jar
This command will start the Helidon server, and this time you do get a default port: 8080. Besides the MicroProfile APIs, which are Helidon MP’s primary focus, the following Jakarta EE APIs are supported (a minimal resource class follows the list):
◉ Jakarta CDI: Weld
◉ Jakarta JSON-P/B: Yasson
◉ Jakarta REST: Jersey
◉ Jakarta WebSockets: Tyrus
◉ Jakarta Persistence: EclipseLink, Hibernate
◉ Jakarta Transactions: Narayana
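As promised, here is a minimal sketch of a Jakarta REST resource that Helidon MP can discover and serve; note that Helidon 2.x still uses the javax namespace, and the class and path names are illustrative.

import javax.enterprise.context.ApplicationScoped;
import javax.ws.rs.GET;
import javax.ws.rs.Path;

// Responds to GET /hello with plain text; no further server configuration is required
@ApplicationScoped
@Path("/hello")
public class HelloResource {

    @GET
    public String hello() {
        return "Hi!";
    }
}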
Piranha Cloud
Piranha Cloud is a relatively new project; although it started as a very low-key project a couple of years ago, it didn’t pick up pace until October 2019, as you can see in Figure 1, which shows the GitHub commit graph.
Hammock
Friday, May 27, 2022
The not-so-hidden gems in Java 18: The JDK Enhancement Proposals
Five of Java 18’s JEPs add new features or major enhancements. See what they do and how they work.
Java 18 was officially released in March, and it included nine implemented JDK Enhancement Proposals (JEPs).
This article covers five of the JEPs in Java 18. The companion article, “The hidden gems in Java 18: The subtle changes,” goes over Java 18’s small improvements and changes that aren’t reflected in those JEPs. It also talks about the four JEPs that are incubators, previews, and deprecations.
Because these articles were researched prior to the March 22 general availability date, I used the Java 18 RC-build 36 jshell tool to demonstrate the code. However, if you would like to test the features, you can follow along with me by downloading the latest release candidate (or the general availability version), firing up a terminal window, checking your version, and running jshell, as follows. Note that you might see a newer version of the build.
[mtaman]:~ java -version
openjdk version "18" 2022-03-22
OpenJDK Runtime Environment (build 18+36-2087)
OpenJDK 64-Bit Server VM (build 18+36-2087, mixed mode, sharing)
[mtaman]:~ jshell --enable-preview
| Welcome to JShell -- Version 18
| For an introduction type: /help intro
jshell>
The nine JEPs in the Java 18 release are as follows:
◉ JEP 400: UTF-8 by default
◉ JEP 408: Simple Web Server
◉ JEP 413: Code snippets in Java API documentation
◉ JEP 416: Reimplement core reflection with method handles
◉ JEP 417: Vector API (third incubator)
◉ JEP 418: Internet address resolution service provider interface
◉ JEP 419: Foreign Function and Memory API (second incubator)
◉ JEP 420: Pattern matching for switch (second preview)
◉ JEP 421: Deprecate finalization for removal
Two of the JEPs are incubator features, one is a preview, and one is a deprecation, that is, a JEP that prepares developers for the removal of a feature. In this article, I’ll walk through the other five JEPs; I will not explore the incubators, the preview, or the deprecation.
JEP 400: UTF-8 by default
UTF-8 is a variable-width character encoding for electronic communication and is considered the web’s standard character set (charset). It can encode all the characters of any human language.
The Java standard charset determines how a string is converted to bytes and vice versa. It is used by many methods of the JDK library classes, mainly when you’re reading and writing text files using, for example, the FileReader, FileWriter, InputStreamReader, OutputStreamWriter, Formatter, and Scanner constructors, as well as the URLEncoder encode() and decode() static methods.
Prior to Java 18, the standard Java charset could vary based on the runtime environment’s language settings and operating systems. Such differences could lead to unpredictable behavior when an application was developed and tested in one environment and then run in another environment where the Java default charset changed.
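If you want to check which default charset your environment uses, a quick sketch such as the following (runnable in jshell) will show it; note that the native.encoding property is available only from Java 17 onward.

import java.nio.charset.Charset;

// Prints the default charset the JDK will use for file I/O when none is specified
System.out.println("Default charset: " + Charset.defaultCharset());

// file.encoding reflects the default charset; native.encoding (Java 17+) shows the
// charset of the underlying operating system environment
System.out.println("file.encoding:   " + System.getProperty("file.encoding"));
System.out.println("native.encoding: " + System.getProperty("native.encoding"));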
Consider a pre-Java 18 example that writes and reads Arabic text. Run the following code on Linux or macOS to write Arabic content (which means “Welcome all with Java JEP 400”) to the Jep400-example.txt file:
private static void writeToFile() {
    try (FileWriter writer = new FileWriter("Jep400-example.txt");
         BufferedWriter bufferedWriter = new BufferedWriter(writer)) {
        bufferedWriter.write("مرحبا بكم في جافا جيب ٤٠٠");
    }
    catch (IOException e) {
        System.err.println(e);
    }
}
Then read the file back using a Windows system and the following code:
private static void readFromFile() {
    try (FileReader reader = new FileReader("Jep400-example.txt");
         BufferedReader bufferedReader = new BufferedReader(reader)) {
        String line = bufferedReader.readLine();
        System.out.println(line);
    }
    catch (IOException e) {
        System.err.println(e);
    }
}
The output will be the following:
٠رØبا ب٠٠٠٠جا٠ا ج٠ب Ù¤Ù Ù
This rubbish output happens because, under Linux and macOS, Java saves the file with the default UTF-8 encoding format, but Windows reads the file using the Windows-1252 encoding format.
This problem gets even worse when you mix APIs in the same program, writing the file using an older facility, FileWriter, and then reading the contents using newer I/O methods such as Files.writeString(), Files.readString(), Files.newBufferedWriter(), and Files.newBufferedReader(), as in the following:
private static void writeReadFromFile() {
    try (FileWriter writer = new FileWriter("Jep400-example.txt");
         BufferedWriter bufferedWriter = new BufferedWriter(writer)) {
        bufferedWriter.write("مرحبا بكم في جافا جيب ٤٠٠");
        bufferedWriter.flush();

        var message = Files.readString(Path.of("Jep400-example.txt"));
        System.out.println(message);
    }
    catch (IOException e) {
        System.err.println(e);
    }
}
Running the program under Linux and macOS will print the Arabic content without any problem, but doing so on Windows will print the following:
????? ??? ?? ???? ??? ???
These question marks are printed because the newer APIs don’t respect the default OS character set and always use UTF-8. In this case, the file is written with FileWriter using the Windows-1252 charset, but when Files.readString(Path.of("Jep400-example.txt")) reads the file back, the method uses UTF-8 regardless of the default charset.
The best solution is to specify the charset when you’re reading or writing files and when calling all methods that convert strings to bytes (and vice versa), as in the following:
var fileWriter = new FileWriter("Jep400-example.txt", StandardCharsets.UTF_8);
var fileReader = new FileReader("Jep400-example.txt", StandardCharsets.UTF_8);
Files.readString(Path.of("Jep400-example.txt"), StandardCharsets.UTF_8);
You might think another solution is to set the default charset with the file.encoding system property. What happens if you try?
[mtaman]:~ java -Dfile.encoding=US-ASCII FileReaderWriterApplication.java
?????????????????????????????????
The result is rubbish because FileWriter respects the file.encoding system property, while Files.readString() ignores it and always uses UTF-8. To make the program behave correctly before Java 18, you would have to run it with -Dfile.encoding=UTF-8.
The goal of JEP 400 is to standardize the Java API default charset to UTF-8, so that APIs that depend on the default charset will behave consistently across all implementations, operating systems, locales, and configurations. Therefore, if you run the previous snippets on Java 18, the code will produce the correct contents.
JEP 408: Simple Web Server
Many development environments let programmers start up a rudimentary HTTP web server to test some functionality for static files. That capability comes to Java 18 through JEP 408.
The simplest way to start the web server is with the jwebserver command. By default, this command listens on localhost on port 8000. The server also provides a file browser for the current directory.
[mtaman]:~ jwebserver
Binding to loopback by default. For all interfaces use "-b 0.0.0.0" or "-b ::".
Serving /Users/mohamed_taman/Hidden Gems in Java 18/code and subdirectories on 127.0.0.1 port 8000
URL http://127.0.0.1:8000/
If you visit http://127.0.0.1:8000/, you will see a directory listing in your web browser, and in the terminal you will see the following two lines:
127.0.0.1 - - [06/Mar/2022:23:27:13 +0100] "GET / HTTP/1.1" 200 -
127.0.0.1 - - [06/Mar/2022:23:27:13 +0100] "GET /favicon.ico HTTP/1.1" 404 -
You can change the jwebserver defaults with several parameters, as follows:
[mtaman]:~ jwebserver -b 127.0.0.200 -p 9999 -d /tmp -o verbose
Serving /tmp and subdirectories on 127.0.0.200 port 9999
URL http://127.0.0.200:9999/
Here are a few of the parameters.
◉ -b specifies the IP address on which the server should listen.
◉ -p changes the port.
◉ -d changes the directory the server should serve.
◉ -o configures the log output.
For a complete list of configuration options, run jwebserver -h.
The web server is simple, as its name implies, and has the following limitations:
◉ Only the HTTP GET and HEAD methods are allowed.
◉ The only supported protocol is HTTP/1.1.
◉ HTTPS is not provided.
The API. Fortunately, you can extend and run the server programmatically from the Java API. That’s because jwebserver itself is not a standalone tool; it is a wrapper that calls java -m jdk.httpserver. This command calls the main() method of the sun.net.httpserver.simpleserver.Main class of the jdk.httpserver module, which, in turn, calls SimpleFileServerImpl.start(). This starter evaluates the command-line parameters and creates the server via SimpleFileServer.createFileServer().
You can start a server via Java code as follows:
// Creating the server
HttpServer server = SimpleFileServer.createFileServer(
    new InetSocketAddress(9999),
    Path.of("/tmp"),
    OutputLevel.INFO);

// Starting the server
server.start();
With the Java API, you can extend the web server. For example, you can make specific file system directories accessible via different HTTP paths, and you can implement your handlers to use your choice of paths and HTTP methods, such as PUT and POST. Check the JEP 408 documentation for more API details.
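For instance, here is a rough sketch, with illustrative paths and port, that serves static files from one directory and answers another path with a hand-written handler:

import java.net.InetSocketAddress;
import java.nio.file.Path;

import com.sun.net.httpserver.HttpServer;
import com.sun.net.httpserver.SimpleFileServer;

public class ExtendedServer {
    public static void main(String[] args) throws Exception {
        var server = HttpServer.create(new InetSocketAddress(9999), 10);

        // Serve static files from /tmp under the /files path
        server.createContext("/files", SimpleFileServer.createFileHandler(Path.of("/tmp")));

        // Answer /api/hello with a custom handler
        server.createContext("/api/hello", exchange -> {
            var body = "Hello from a custom handler".getBytes();
            exchange.getResponseHeaders().add("Content-Type", "text/plain");
            exchange.sendResponseHeaders(200, body.length);
            try (var out = exchange.getResponseBody()) {
                out.write(body);
            }
        });

        server.start();
    }
}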
JEP 413: Code snippets in Java API documentation
Prior to Java 18, if you wanted to integrate multiline code snippets into Java documentation (that is, into Javadoc), you had to do it tediously via <pre> ... </pre> and sometimes with {@code ...}. With these tags, you must pay attention to the following two points:
◉ You can’t put a line break directly after <pre> or directly before </pre> without an extra blank line appearing in the output, which means you may not get the formatting you want.
◉ The code starts directly after the asterisks, so if there are spaces between the asterisks and the code, they also appear in the Javadoc.
Here’s an example that uses <pre> and </pre>.
/**
 * How to read a text file with Java 8:
 *
 * <pre><b>try</b> (var reader = Files.<i>newBufferedReader</i>(path)) {
 *     String line = reader.readLine();
 *     System.out.println(line);
 * }</pre>
 */
This code will appear in the Javadoc as shown in Figure 1.