Monday, January 31, 2022
The best HotSpot JVM options and switches for Java 11 through Java 17
Wednesday, January 26, 2022
Simpler object and data serialization using Java records
Learn how you can leverage the design of Java’s records to improve Java serialization.
Record classes enhance Java’s ability to model plain-data aggregates without a lot of coding verbosity or, in the phrase used in JEP 395, without too much ceremony. A record class declares some immutable state and commits to an API that matches that state. This means that record classes give up a freedom that classes usually enjoy—the ability to decouple their API from their internal representation—but in return, record classes become significantly more concise.
Record classes were a preview feature in Java 14 and Java 15 and became final in Java 16 in JEP 395. Here is a record class declared in the JDK’s jshell tool.
jshell> record Point (int x, int y) { }
| created record Point
The state of Point consists of two components, x and y. These components are immutable and can be accessed only via accessor methods x() and y(), which are automatically added to the Point class during compilation. Also added during compilation is a canonical constructor for initializing the components. For the Point record class, it is equivalent to the following:
public Point(int x, int y) {
    this.x = x;
    this.y = y;
}
Unlike the no-argument default constructor added to normal classes, the canonical constructor of a record class has the same signature as the state. (If an object needs mutable state, or state that is unknown when the object is created, a record class is not the right choice; you should declare a normal class instead.)
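When a record does need to enforce invariants, the canonical constructor can be customized in its compact form, where the parameters are implicit and the field assignments happen automatically at the end. A small illustrative sketch (the Range record is our own example, not from the text above):

```java
public class RangeDemo {
    // An illustrative record whose compact constructor validates its state.
    record Range(int low, int high) {
        Range {   // compact form: parameters are implicit, assignment is automatic
            if (low > high)
                throw new IllegalArgumentException("low > high");
        }
    }

    public static void main(String[] args) {
        System.out.println(new Range(1, 5));   // Range[low=1, high=5]
        try {
            new Range(5, 1);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());   // rejected: low > high
        }
    }
}
```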
Here is Point being instantiated and used. As a matter of terminology, we say that p, the instance of Point, is a record.
jshell> Point p = new Point(5, 10)
p ==> Point[x=5, y=10]
jshell> System.out.println("value of x: " + p.x())
value of x: 5
Taken together, the elements of a record class form a succinct protocol for you to rely on: The elements include a concise description of the state, a canonical constructor to initialize the state, and controlled access to the state. This design has many benefits, such as for object serialization.
What is object serialization?
Serialization is the process of converting an object into a format that can be stored on disk or transmitted over the network (also termed serialized or marshaled) and from which the object can later be reconstituted (deserialized or unmarshaled).
Serialization provides the mechanics for extracting an object’s state and translating it to a persistent format, as well as the means for reconstructing an object with equivalent state from that format. Given their nature as plain data carriers, records are well suited for this use case.
The idea of serialization is powerful, and many frameworks have implemented it, one of them being Java Object Serialization in the JDK, which we’ll refer to simply as Java Serialization.
In Java Serialization, any class that implements the java.io.Serializable interface is serializable. That’s suspiciously simple, right? However, the interface has no members and serves only to mark a class as serializable.
During serialization, the state of all nontransient fields is scraped (even for private fields) and written to the serial byte stream. During deserialization, a superclass no-argument constructor is called to create an object before its fields are populated with the state read from the serial byte stream. The format of the serial byte stream (the serialized form) is chosen by Java Serialization unless you use the special methods writeObject and readObject to specify a custom format.
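As a concrete sketch of such a custom serialized form (the Temperature class and its Kelvin-based format are purely illustrative, not from the article):

```java
import java.io.*;

public class Temperature implements Serializable {
    private transient double celsius;   // excluded from the default serialized form

    public Temperature(double celsius) { this.celsius = celsius; }

    public double celsius() { return celsius; }

    // Custom serialized form: store the temperature as Kelvin.
    private void writeObject(ObjectOutputStream out) throws IOException {
        out.defaultWriteObject();
        out.writeDouble(celsius + 273.15);
    }

    private void readObject(ObjectInputStream in)
            throws IOException, ClassNotFoundException {
        in.defaultReadObject();
        celsius = in.readDouble() - 273.15;
    }

    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(bytes);
        out.writeObject(new Temperature(25.0));
        out.flush();
        Temperature copy = (Temperature) new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray())).readObject();
        System.out.println(copy.celsius());   // prints the round-tripped value, ~25.0
    }
}
```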
Problems with Java Serialization
It’s not news that Java Serialization has flaws, and Brian Goetz’s June 2019 blog post, “Towards better serialization,” provides a summary of the problems.
The core of the problem is that Java Serialization was not designed as part of Java’s object model. This means that Java Serialization works with objects using backdoor techniques such as reflection, rather than relying on the API provided by an object’s class. For example, it is possible to create a new deserialized object without invoking one of its constructors, and data read from the serial byte stream is not validated against constructor invariants.
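This can be observed directly. In the illustrative sketch below (the Counter class is our own example), a static counter shows that deserialization produces a second object without ever invoking the class's constructor again:

```java
import java.io.*;

public class NoCtorDemo {
    static class Counter implements Serializable {
        static int constructorCalls = 0;
        int value;

        Counter(int value) {
            this.value = value;
            constructorCalls++;   // count every constructor invocation
        }
    }

    // Serialize and immediately deserialize an object.
    static Counter roundTrip(Counter c) throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(bytes);
        out.writeObject(c);
        out.flush();
        return (Counter) new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray())).readObject();
    }

    public static void main(String[] args) throws Exception {
        Counter copy = roundTrip(new Counter(42));
        System.out.println(copy.value);                 // 42
        System.out.println(Counter.constructorCalls);   // 1 -- not invoked on deserialization
    }
}
```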
Serialization with records
With Java Serialization, a record class is made serializable just like a normal class, simply by implementing java.io.Serializable.
jshell> record Point (int x, int y) implements Serializable { }
| created record Point
However, under the hood, Java Serialization treats a record (that is, an instance of a record class) very differently than an instance of a normal class. (This July 2020 blog post by Chris Hegarty and Alex Buckley provides a good comparison.) The design aims to keep things as simple as possible and is based on two properties.
◉ The serialization of a record is based solely on its state components.
◉ The deserialization of a record uses only the canonical constructor.
Important note: No customization of the serialization process is allowed for records. That’s by design: The simplicity of this approach is enabled by, and is a logical continuation of, the semantic constraints placed on records.
Because a record is an immutable data carrier, it can only ever have one state: the values of its components. Therefore, there is no need to allow customization of the serialized form.
Similarly, on the deserialization side, the only way to create a record is through the canonical constructor of its record class, whose parameters are known because they are identical to the state description.
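The following sketch (our own example, not from the article) makes this visible by counting constructor invocations in a record: deserialization runs the canonical constructor again, so the count ends at 2 and any invariants the constructor checks are re-enforced on the incoming data:

```java
import java.io.*;

public class RecordCtorDemo {
    record Positive(int value) implements Serializable {
        static int constructorCalls = 0;

        Positive {   // compact canonical constructor
            if (value <= 0) throw new IllegalArgumentException("not positive");
            constructorCalls++;
        }
    }

    // Serialize and immediately deserialize a record instance.
    static Positive roundTrip(Positive p) throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(bytes);
        out.writeObject(p);
        out.flush();
        return (Positive) new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray())).readObject();
    }

    public static void main(String[] args) throws Exception {
        Positive copy = roundTrip(new Positive(7));
        System.out.println(copy);                         // Positive[value=7]
        System.out.println(Positive.constructorCalls);    // 2 -- run again on deserialization
    }
}
```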
Going back to the sample record class Point, the serialization of a Point object using Java Serialization looks as follows:
jshell> var out = new ObjectOutputStream(new FileOutputStream("serial.data"));
out ==> java.io.ObjectOutputStream@5f184fc6
jshell> out.writeObject(new Point(5, 10));
jshell> var in = new ObjectInputStream(new FileInputStream("serial.data"));
in ==> java.io.ObjectInputStream@504bae78
jshell> in.readObject();
$5 ==> Point[x=5, y=10]
Under the hood, a serialization framework can use the x() and y() accessors of Point during serialization to extract the state of the record's components, which is then written to the serial byte stream. During deserialization, the bytes are read from serial.data and the state is passed to the canonical constructor of Point to obtain a new record.
Overall, the design of records naturally fits the demands of serialization. The tight coupling of the state and the API facilitates an implementation that is more secure and easier to maintain. Furthermore, the design allows for some interesting efficiencies of the deserialization of records.
Optimizing record deserialization
For normal classes, Java Serialization relies heavily on reflection to set the private state of a newly deserialized object. However, record classes expose their state and means of reconstruction through a well-specified public API—which Java Serialization leverages.
The constrained nature of record classes drives a re-evaluation of Java Serialization’s strategy of reflection.
As outlined above, the API of a record class describes the state of a record, and this state is immutable. Consequently, the serial byte stream no longer has to be the single source of truth, and the serialization framework no longer needs to be the single interpreter of that truth.
Instead, the record class can take control of its serialized form, which can be derived from the components. Once the serialized form is derived, you can generate a matching instantiator based on that form ahead of time and store it in the class file of the record class.
In this way, control is inverted from Java Serialization (or any other serialization framework) to the record class. The record class now determines its own serialized form, which it can optimize, store, and make available as required.
This control inversion can enhance record deserialization in several ways, with two interesting areas being class evolution and throughput.
More freedom to evolve record classes. The potential for this arises from an existing well-specified feature of record deserialization: default value injection for absent stream fields. When no value is present in the serial byte stream for a particular record component, its default value is passed to the canonical constructor. The following example demonstrates this with an evolved version of the record class Point:
jshell> record Point (int x, int y, int z) implements Serializable { }
| created record Point
After you serialized a Point record in the previous example, the serial.data file contained a representation of a Point with values for x and y only, not for z. For reasons of compatibility, however, you might want to be able to deserialize that original serialized object in the context of the new Point declaration. Thanks to the default value injection for absent field values, this is possible, and deserialization completes successfully.
jshell> var in = new ObjectInputStream(new FileInputStream("serial.data"));
in ==> java.io.ObjectInputStream@421faab1
jshell> in.readObject();
$3 ==> Point[x=5, y=10, z=0]
This feature can be taken advantage of in the context of record serialization. If you inject default values during deserialization, do those default values need to be represented in the serialized form? In this case, a more compact serialized form could still fully capture the state of the record object.
More generally, this feature also helps support record class versioning, and it makes serialization and deserialization overall more resilient to changes in record state across versions. Compared with normal classes, record classes are therefore even more suitable candidates for storing data.
More throughput when processing records. The other interesting area for enhancement is throughput during deserialization. Object creation during deserialization usually requires reflective API calls, which are expensive and hard to get right. These two problems can be addressed by making the reflective calls more efficient and by encapsulating the instantiation mechanics in the record class itself.
For this, you can leverage the power of method handles combined with dynamically computed constants.
The method handle API in java.lang.invoke was introduced in Java 7 and offers a set of low-level operations for finding, adapting, combining, and invoking methods, as well as for getting and setting fields. A method handle is a typed reference that allows transformations of arguments and return types, and it can be faster than traditional reflection (which dates back to Java 1.1) if it is used wisely. In this case, several method handles can be chained together to tailor the creation of records based on the serialized form of their record class.
This method handle chain can be stored as a dynamically computed constant in the class file of the record class, which is lazily computed at first invocation.
Dynamically computed constants are amenable to optimizations by the JVM’s dynamic compiler, so the instantiation code adds only a small overhead to the footprint of the record class. With this, the record class is now in charge of both its serialized form and its instantiation code, and it no longer relies on other intermediaries or frameworks.
This strategy further improves performance and code reuse. It also reduces the burden on the serialization framework, which can now simply use the deserialization strategy provided by the record class, without writing complex and potentially unsafe mapping mechanisms.
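As a minimal taste of the method handle API (our own sketch, not the JDK's actual deserialization code), a single handle can locate and invoke a record's canonical constructor, whose type is known because it matches the record's state description:

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

public class HandleDemo {
    public record Point(int x, int y) { }

    // Locate the canonical constructor: its parameter list matches the
    // record's state description exactly.
    static Point makePoint(int x, int y) throws Throwable {
        MethodHandle ctor = MethodHandles.lookup().findConstructor(
                Point.class,
                MethodType.methodType(void.class, int.class, int.class));
        return (Point) ctor.invokeExact(x, y);
    }

    public static void main(String[] args) throws Throwable {
        System.out.println(makePoint(5, 10));   // Point[x=5, y=10]
    }
}
```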
Source: oracle.com
Monday, January 24, 2022
Java: Why a Set Can Contain Duplicate Elements
In low-latency applications, the creation of unnecessary objects is often avoided by reusing mutable objects to reduce memory pressure and thus the load on the garbage collector. This makes the application run much more deterministically and with much less jitter. However, care must be taken as to how these reused objects are used or else unexpected results might manifest themselves, for example in the form of a Set containing duplicate elements such as [B, B].
HashCode and Equals
Java’s built-in ByteBuffer provides direct access to heap and native memory using 32-bit addressing. Chronicle Bytes is a 64-bit addressing open-source drop-in replacement allowing much larger memory segments to be addressed. Both these types provide a hashCode() and an equals() method that depends on the byte contents of the objects’ underlying memory segment. While this can be useful in many situations, mutable objects like these should not be used in most of Java’s built-in Set types and not as a key in most built-in Map types.
Note: In reality, only 31 and 63 bits, respectively, may be used as an effective address offset, because the int and long offset parameters are signed.
Mutable Keys
Below, a small code example is presented illustrating the problem with reused mutable objects. The code shows the use of Bytes but the very same problem exists for ByteBuffer.
Set<CharSequence> set = new HashSet<>();
Bytes<?> bytes = Bytes.from("A");
set.add(bytes);
// Reuse
bytes.writePosition(0);
// This mutates the existing object already
// in the Set
bytes.write("B");
// Adds the same Bytes object again but now under
// another hashCode()
set.add(bytes);
System.out.println("set = " + set);
The code above will first add an object with “A” as its content, meaning that the set contains [A]. Then the content of that existing object is modified to “B”, which has the side effect of changing the set to contain [B] but leaves the old hash code value and the corresponding hash bucket unchanged (effectively becoming stale). Lastly, the modified object is added to the set again, but now under another hash code, so the previous entry for that very same object remains!
As a result, rather than the perhaps anticipated [A, B], this will produce the following output:
set = [B, B]
ByteBuffer and Bytes Objects as Keys in Maps
When using Java’s ByteBuffer objects or Bytes objects as keys in maps or as elements in sets, one solution is to use an IdentityHashMap or Collections.newSetFromMap(new IdentityHashMap<>()) to protect against the mutable-object peculiarities described above. This makes the hashing of the objects agnostic to the actual byte content; it will instead use System.identityHashCode(), which never changes during the object’s life.
Another alternative is to use a read-only version of the objects (for example by invoking ByteBuffer.asReadOnlyBuffer()) and refrain from holding any reference to the original mutable object that could provide a back-door to modifying the supposedly read-only object’s content.
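The following self-contained sketch (using the JDK's ByteBuffer rather than Chronicle Bytes, so it runs with no extra dependencies) reproduces the duplicate-element problem and shows the identity-based fix:

```java
import java.nio.ByteBuffer;
import java.util.Collections;
import java.util.HashSet;
import java.util.IdentityHashMap;
import java.util.Set;

public class MutableKeyDemo {
    public static void main(String[] args) {
        Set<ByteBuffer> contentSet = new HashSet<>();
        ByteBuffer buf = ByteBuffer.wrap(new byte[] { 'A' });

        contentSet.add(buf);
        buf.put(0, (byte) 'B');       // mutates the element already in the set
        contentSet.add(buf);          // same object, but now a different hashCode()
        System.out.println(contentSet.size());   // 2 -- duplicate element!

        // Identity-based set: hashing ignores the byte content entirely.
        Set<ByteBuffer> identitySet =
                Collections.newSetFromMap(new IdentityHashMap<>());
        identitySet.add(buf);
        buf.put(0, (byte) 'C');
        identitySet.add(buf);         // rejected: same object identity
        System.out.println(identitySet.size());  // 1
    }
}
```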
Chronicle Map and Chronicle Queue
Chronicle Map is an open-source library that works a bit differently from the built-in Java Map implementations: objects are serialized and put in off-heap memory. This opens up ultra-large maps that can be bigger than the RAM allocated to the JVM, and it allows these maps to be persisted to memory-mapped files so that applications can restart much faster.
The serialization process has another, less-known advantage: it actually allows reusable mutable objects as keys, because the content of the object is copied and effectively frozen each time a new association is put into the map. Subsequent modifications of the mutable object will therefore not affect the frozen serialized content, allowing unrestricted object reuse.
Open-source Chronicle Queue works in a similar fashion and can provide queues that can hold terabytes of data persisted to secondary storage and, for the same reason as Chronicle Map, allows object reuse of mutable elements.
Source: javacodegeeks.com
Friday, January 21, 2022
Compile Time Polymorphism in Java
Polymorphism in Java refers to an object’s capacity to take several forms. Polymorphism allows us to perform the same action in multiple ways in Java.
Polymorphism is divided into two types:
1. Compile-time polymorphism
2. Run time polymorphism
Note: Run-time polymorphism is implemented through method overriding, whereas compile-time polymorphism is implemented through method overloading and operator overloading.
In this article, we will see Compile time polymorphism.
Compile-time Polymorphism
Compile-time polymorphism is also known as static polymorphism or early binding. It is polymorphism that is resolved during the compilation process: the compiler selects which overloaded method to call based on the declared types of the arguments. Compile-time polymorphism is achieved by method overloading and operator overloading.
1. Method overloading
We can have two or more methods with the same name that are distinguishable solely by the number, type, or order of their parameters.
Method overloading occurs when a class has several methods with the same name but different parameter lists: a different number of parameters, different parameter types, or both.
Example:
void ojc() { ... }
void ojc(int num1) { ... }
void ojc(float num1) { ... }
void ojc(int num1, float num2) { ... }
(a). Method overloading by changing the number of parameters
In this type, method overloading is done by declaring methods that differ only in the number of parameters they accept.
Example:
show(char a)
show(char a, char b)
In the given example, the first show method has one parameter, and the second show method has two parameters. When a function is called, the compiler looks at the number of arguments and decides how to resolve the method call.
// Java program to demonstrate the working of method
// overloading by changing the number of parameters
public class MethodOverloading {
    // 1 parameter
    void show(int num1)
    {
        System.out.println("number 1 : " + num1);
    }

    // 2 parameters
    void show(int num1, int num2)
    {
        System.out.println("number 1 : " + num1
                + " number 2 : " + num2);
    }

    public static void main(String[] args)
    {
        MethodOverloading obj = new MethodOverloading();
        // 1st show function
        obj.show(3);
        // 2nd show function
        obj.show(4, 5);
    }
}
Output
number 1 : 3
number 1 : 4 number 2 : 5
Wednesday, January 19, 2022
Quiz yourself: Java abstract classes and access modifiers for abstract methods
It’s essential to declare classes properly to ensure methods are accessible.
Your software uses two classes that are part of the Object Relational Mapping (ORM) framework.
package orm.core;

public abstract class Connection {
    abstract void connect(String url);
}

package orm.impl;

import orm.core.Connection;

public abstract class DBConnection extends Connection {
    protected void connect(String url) { /* open connection */ }
}
You have decided to create your own concrete connection class based on the DBConnection class.
package server;

import orm.impl.DBConnection;

public class ServerDBConnection extends DBConnection {
    ...
}
Which statement is correct? Choose one.
A. The Connection class fails to compile.
B. The DBConnection class fails to compile.
C. The ServerDBConnection class cannot be properly implemented.
D. The ServerDBConnection class successfully compiles if you provide the following method inside the class body:
public void connect(String url) { /* */ }
Answer. This question investigates abstract classes and access modifiers for abstract methods.
Option A is incorrect because the Connection class is properly declared: It declares an abstract method, but that’s permitted since it is an abstract class. However, notice that the connect() method has a default accessibility, which means that it’s accessible only inside the orm.core package. This has consequences for how it can be implemented.
As a side note, an abstract method cannot have private accessibility. A private element of a parent class is essentially invisible from the source of a child type. Consequently, a private abstract method could never be implemented, so that combination is prohibited.
Consider option B. The DBConnection class successfully compiles. Although it neither sees nor implements the Connection.connect() method, that does not cause a problem. Why? Because the DBConnection class is marked as abstract, it’s acceptable for it to contain abstract methods, whether from a superclass or declared in its own body. Because the class compiles, option B is incorrect.
Option D is also incorrect: Attempting to add a public connect() method in the ServerDBConnection class cannot provide an implementation for the abstract method in the Connection class because it’s not in the orm.core package.
Unless the ServerDBConnection class is in the package orm.core, the ServerDBConnection class cannot implement the Connection.connect() method. Knowing this fact is at the heart of this question.
Because the code cannot implement all the abstract methods from the ServerDBConnection class’s parentage, it cannot be properly defined as a concrete class. This makes option C correct.
To fix the code, you can add the protected access modifier before the Connection.connect() method. The modifier will make DBConnection.connect() implement the method properly, and the ServerDBConnection class could even compile without providing an implementation of the connect() method.
Alternatively, moving the ServerDBConnection class into the orm.core package would allow a proper implementation of the connect() method in its current form.
Conclusion. The correct answer is option C.
Source: oracle.com
Monday, January 17, 2022
12 handy debugging tips from Cay Horstmann’s Core Java
From using jconsole to monitoring uncaught exceptions, here are a dozen tips that may be worth trying before you launch your favorite IDE’s debugger.
[This article on Java debugging is adapted from Core Java Volume I: Fundamentals, 12th Edition, by Cay S. Horstmann, published by Oracle Press. —Ed.]
Suppose you wrote your Java program and made it bulletproof by catching and properly handling all the exceptions. Then you run it, and it does not work correctly.
Now what? (If you never have this problem, you can skip this article.)
Of course, it is best if you have a convenient and powerful debugger, and debuggers are available as a part of IDEs. That said, here are a dozen tips worth trying before you launch your IDE’s debugger.
Tip 1. You can print or log the value of any variable with code like the following:
System.out.println("x=" + x);
or
Logger.getGlobal().info("x=" + x);
If x is a number, it is converted to its string equivalent. If x is an object, Java calls its toString method. To get the state of the implicit parameter object, print the state of the this object.
Logger.getGlobal().info("this=" + this);
Most of the classes in the Java library are very conscientious about overriding the toString method to give you useful information about the class. This is a real boon for debugging. You should make the same effort in your classes.
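For illustration (the Customer class below is our own example, not from the book), a debugging-friendly toString can follow the same bracketed convention the library classes use:

```java
public class Customer {
    private final String name;
    private final int id;

    public Customer(String name, int id) {
        this.name = name;
        this.id = id;
    }

    // A debugging-friendly toString, following the library convention.
    @Override
    public String toString() {
        return "Customer[name=" + name + ", id=" + id + "]";
    }

    public static void main(String[] args) {
        System.out.println("c=" + new Customer("Alice", 42));
        // prints: c=Customer[name=Alice, id=42]
    }
}
```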
Tip 2. One seemingly little-known but very useful trick is putting a separate main method in each class. Inside it, you can put a unit test stub that lets you test the class in isolation.
public class MyClass
{
    // the methods and fields
    . . .

    public static void main(String[] args)
    {
        // the test code
    }
}
Make a few objects, call all methods, and check that each of them does the right thing. You can leave all these main methods in place and launch the Java Virtual Machine separately on each of the files to run the tests.
When you run an applet, none of these main methods are ever called.
When you run an application, the JVM calls only the main method of the startup class.
Tip 3. If you liked the preceding tip, you should check out JUnit. JUnit is a very popular unit testing framework that makes it easy to organize suites of test cases.
Run the tests whenever you make changes to a class, and add another test case whenever you find a bug.
Tip 4. A logging proxy is an object of a subclass that intercepts method calls, logs them, and then calls the superclass. For example, if you have trouble with the nextDouble method of the Random class, you can create a proxy object as an instance of an anonymous subclass, as follows:
var generator = new Random()
{
    public double nextDouble()
    {
        double result = super.nextDouble();
        Logger.getGlobal().info("nextDouble: " + result);
        return result;
    }
};
Whenever the nextDouble method is called, a log message is generated.
To find out who called the method, generate a stack trace.
Tip 5. You can get a stack trace from any exception object by using the printStackTrace method in the Throwable class. The following code catches any exception, prints the exception object and the stack trace, and rethrows the exception so it can find its intended handler:
try
{
    . . .
}
catch (Throwable t)
{
    t.printStackTrace();
    throw t;
}
You don’t even need to catch an exception to generate a stack trace. Simply insert the following statement anywhere into your code to get a stack trace:
Thread.dumpStack();
Tip 6. Normally, the stack trace is displayed on System.err. If you want to log or display the stack trace, here is how you can capture it into a string.
var out = new StringWriter();
new Throwable().printStackTrace(new PrintWriter(out));
String description = out.toString();
Tip 7. It is often handy to trap program errors in a file. However, errors are sent to System.err, not System.out. Therefore, you cannot simply trap them by running
java MyProgram > errors.txt
Instead, capture the error stream as
java MyProgram 2> errors.txt
To capture both System.err and System.out in the same file, use
java MyProgram 1> errors.txt 2>&1
This works in bash and in the Windows shell.
Tip 8. Having the stack traces of uncaught exceptions show up in System.err is not ideal. These messages are confusing to end users if they happen to see them, and they are not available for diagnostic purposes when you need them.
A better approach is to log the uncaught exceptions to a file. You can change the handler for uncaught exceptions with the static Thread.setDefaultUncaughtExceptionHandler method.
Thread.setDefaultUncaughtExceptionHandler(
    new Thread.UncaughtExceptionHandler()
    {
        public void uncaughtException(Thread t, Throwable e)
        {
            // save information in log file
        }
    });
Tip 9. To watch classes loading, launch the JVM with the -verbose flag. You will get a printout such as in Figure 1.
Wednesday, January 12, 2022
Java Program to Find the Biggest of 3 Numbers
A Simple Java Program To Find Largest Of Three Numbers.
1. Overview
You’ll be learning today how to find the biggest of 3 numbers. This is also a very common interview question, but the interviewer will look for optimized code with fewer lines. We will show you all the possible programs and how most Java developers approach the problem.
For example, given the three numbers 4, 67, and 8, the number 67 is the biggest. To determine this, we need to compare all the numbers.
2. Program 1: To find the biggest of three numbers using if-else
First, here is an example program that reads the three values from the user using the Scanner class and the nextInt() method. Next, it uses if-else conditions to find the largest number. The Scanner should be closed at the end of the method.
If a > b && a > c is true, then a is the largest.
Else, if b > a && b > c is true, then b is the largest.
Otherwise, c is the largest.
package com.oraclejavacertified.engineering.programs;

import java.util.Scanner;

public class BiggestOfThree1 {
    public static void main(String[] args) {
        Scanner scanner = new Scanner(System.in);
        System.out.println("Enter first number : ");
        int a = scanner.nextInt();
        System.out.println("Enter second number : ");
        int b = scanner.nextInt();
        System.out.println("Enter third number : ");
        int c = scanner.nextInt();
        scanner.close();
        if (a > b && a > c) {
            System.out.println(a + " is the largest");
        } else if (b > a && b > c) {
            System.out.println(b + " is the largest");
        } else {
            System.out.println(c + " is the largest");
        }
    }
}
Output:
Enter first number : 10
Enter second number : 30
Enter third number : 20
30 is the largest
3. Program 2: To find the biggest of three numbers using nested if-else
package com.oraclejavacertified.engineering.programs;

public class BiggestOfThree2 {
    public static void main(String[] args) {
        int a = 10;
        int b = 30;
        int c = 20;
        if (a > b) {
            if (a > c) {
                System.out.println(a + " is the largest");
            } else {
                System.out.println(c + " is the largest");
            }
        } else {
            if (b > c) {
                System.out.println(b + " is the largest");
            } else {
                System.out.println(c + " is the largest");
            }
        }
    }
}
This code produces the same output as above, but it is less clear and more difficult to understand.
4. Program 3: To find the biggest of three numbers using if-else with reducing the condition logic
package com.oraclejavacertified.engineering.programs;

public class BiggestOfThree3 {
    public static void main(String[] args) {
        int a = 10;
        int b = 30;
        int c = 20;
        if (a > b && a > c) {
            System.out.println(a + " is the largest");
        } else if (b > c) {
            System.out.println(b + " is the largest");
        } else {
            System.out.println(c + " is the largest");
        }
    }
}
This code is clear and easy to understand. If a > b && a > c is true, then a is the largest; if it is false, then a is not the biggest, which means the biggest must be either b or c. The next check, b > c, returns true if b is the bigger value, and otherwise c is the bigger value.
5. Program 4: To find the biggest of three numbers using the nested ternary operator
The code below is based on the ternary operator, which returns a value. We have wrapped all the conditions into a single line, which is compact but not very readable.
package com.oraclejavacertified.engineering.programs;

public class BiggestOfThree4 {
    public static void main(String[] args) {
        int a = 10;
        int b = 30;
        int c = 20;
        int biggest = (a > b && a > c) ? a : ((b > c) ? b : c);
        System.out.println(biggest + " is the largest");
    }
}
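As an aside not covered in the original post, the same result can be obtained with the built-in Math.max, which avoids hand-written comparisons entirely:

```java
public class BiggestOfThree5 {
    static int biggest(int a, int b, int c) {
        // Nest Math.max calls: first the larger of b and c, then compare with a.
        return Math.max(a, Math.max(b, c));
    }

    public static void main(String[] args) {
        System.out.println(biggest(10, 30, 20) + " is the largest");
        // prints: 30 is the largest
    }
}
```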
Source: javacodegeeks.com