Monday, January 31, 2022

The best HotSpot JVM options and switches for Java 11 through Java 17


In this article, you will learn about some of the systems within the OpenJDK HotSpot Java Virtual Machine (HotSpot JVM) and how they can be tuned to best suit your program and its execution environment.

The HotSpot JVM is an amazing and flexible piece of technology. It is available as a binary release for every major operating system and CPU architecture from the tiny Raspberry Pi Zero all the way up to “big iron” servers containing hundreds of CPU cores and terabytes of RAM. With OpenJDK being an open source project, the HotSpot JVM can be compiled for almost any other system—and it can be fine-tuned using options, switches, and flags.

First, here’s some background. The language of the HotSpot JVM is bytecode. At the time of this writing, there are more than 30 programming languages that can be compiled into HotSpot JVM–compatible bytecode, but by far the most popular, with over 8 million developers worldwide, is, of course, Java.

Java source code is compiled into bytecode (as shown in Figure 1), in the form of class files, using the javac compiler. In modern development this is likely abstracted away by build tools such as Maven, Gradle, or an IDE-based compiler.


Figure 1. Process for compiling bytecode

The bytecode representation of a program is executed by the HotSpot JVM on a virtual stack machine that knows as many as 256 different instructions, and each instruction is identified by an 8-bit numerical opcode; hence, the name bytecode.

The bytecode program is executed by an interpreter that fetches each instruction, pushes its operands onto the stack, and then executes the instruction, removing the operands and leaving the result on the stack, as shown in Figure 2.
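To make this concrete, here is a trivial method along with, in comments, the stack-machine bytecode that javac typically produces for it (as printed by javap -c; the Adder class itself is just an illustration):

```java
public class Adder {
    // Bytecode for add(int, int), as shown by javap -c:
    //   iload_0    // push the first int argument onto the operand stack
    //   iload_1    // push the second int argument
    //   iadd       // pop both operands, push their sum
    //   ireturn    // pop the sum and return it to the caller
    static int add(int a, int b) {
        return a + b;
    }

    public static void main(String[] args) {
        System.out.println(add(2, 3));  // prints 5
    }
}
```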


Figure 2. Results on the stack after the interpreter executes the bytecode

The abstraction of program execution from the underlying environment is what gives Java its “Write once, run anywhere” portability advantage. Class files compiled on one architecture can execute on a HotSpot JVM running on a completely different architecture.

If you’re thinking that this abstraction from the underlying hardware comes at a performance cost, you are correct. That’s often where the switches, options, and flags come in.

Just-in-time compilation


How can programs written in portable, feature-rich, high-level languages such as Java challenge the performance of those compiled to architecture-specific native code from lower-level, less-programmer-friendly languages such as C?

The answer is that the HotSpot JVM contains performance-boosting just-in-time (JIT) compilation technology that profiles your program’s execution and selectively optimizes the parts it decides will benefit the most. These parts are known as your program’s hot spots (hence, the name of the HotSpot JVM), and the JVM optimizes them by compiling them into native code on the fly, using its knowledge of the underlying system architecture.

The HotSpot JVM contains two JIT compilers, known as C1 (the client compiler) and C2 (the server compiler), which offer different optimization trade-offs.

◉ C1 offers fast, simple optimizations.
◉ C2 offers advanced optimizations that require more profiling and are more expensive to apply.

Ever since the release of JDK 8, the default behavior has been to use both compilers together in a mode called tiered compilation, where C1 provides rapid speed boosts while C2 gathers enough profiling information before making its advanced optimizations. The native code produced is stored in a memory region of the HotSpot JVM called the code cache, as shown in Figure 3.


Figure 3. The Java compilation process

Taking out the trash


In addition to JIT technology, the HotSpot JVM also includes productivity- and performance-boosting features such as multithreading and automatic memory management with a choice of garbage collection (GC) strategies.

Objects are allocated in a memory region of the HotSpot JVM called the heap, and once those objects are no longer referenced, they can be cleaned up by the garbage collector and the memory they used is reclaimed.

The ergonomic HotSpot JVM


With so much flexibility and dynamic behavior in the HotSpot JVM, you might worry about how to configure it to best match the requirements of your program. Fortunately, for a lot of use cases, you won’t need to do any manual tuning. The HotSpot JVM contains a process called ergonomics that examines the execution environment at startup and chooses some sensible defaults for the GC strategy, heap size, and JIT compilers based on the number of CPU cores and amount of RAM available. The current defaults are

◉ Garbage collector: G1GC
◉ Initial heap: 1/64th of physical memory
◉ Maximum heap: 1/4th of physical memory
◉ JIT compiler: Tiered compilation using both C1 and C2

You can see all of the ergonomic defaults the HotSpot JVM will choose for your environment by specifying the -XX:+PrintFlagsFinal option and filtering the output with the grep command to search for ergonomic, as follows:


java -XX:+PrintFlagsFinal | grep ergonomic

  intx CICompilerCount             = 4              {product} {ergonomic}
  uint ConcGCThreads               = 2              {product} {ergonomic}
  uint G1ConcRefinementThreads     = 8              {product} {ergonomic}
  size_t G1HeapRegionSize          = 2097152        {product} {ergonomic}
  uintx GCDrainStackTargetSize     = 64             {product} {ergonomic}
  size_t InitialHeapSize           = 526385152      {product} {ergonomic}
  size_t MarkStackSize             = 4194304        {product} {ergonomic}
  size_t MaxHeapSize               = 8403288064     {product} {ergonomic}
  size_t MaxNewSize                = 5041553408     {product} {ergonomic}
  size_t MinHeapDeltaBytes         = 2097152        {product} {ergonomic}
  uintx NonNMethodCodeHeapSize     = 5836300        {pd product} {ergonomic}
  uintx NonProfiledCodeHeapSize    = 122910970      {pd product} {ergonomic}
  uintx ProfiledCodeHeapSize       = 122910970      {pd product} {ergonomic}
  uintx ReservedCodeCacheSize      = 251658240      {pd product} {ergonomic}
  bool SegmentedCodeCache          = true           {product} {ergonomic}
  bool UseCompressedClassPointers  = true           {lp64_product} {ergonomic}
  bool UseCompressedOops           = true           {lp64_product} {ergonomic}
  bool UseG1GC                     = true           {product} {ergonomic}

The output above is from JDK 11 on a machine with 32 GB of RAM, so the initial heap is set to 1/64th of 32 GB (approximately 512 MB) and the maximum heap to 1/4th of 32 GB (8 GB).

Taking control


If you think the default settings chosen by the ergonomics process will not be a good match for your application, you’ll be pleased to know the HotSpot JVM is highly configurable in every area.

There are three main types of options.

◉ Standard: Basic startup options such as -classpath that are common across HotSpot JVM implementations.

◉ -X: Nonstandard options used to configure common properties of the HotSpot JVM such as controlling the maximum heap size (-Xmx); these are not guaranteed to be supported on all HotSpot JVM implementations.

◉ -XX: Advanced options used to configure advanced properties of the HotSpot JVM. According to the documentation, these are subject to change without notice, but the Java team has a well-managed process for removing them.

The -XX options


Many of the -XX options can be further characterized as follows:

Product. These are the most commonly used -XX options.

Experimental. These are options related to experimental features in the HotSpot JVM that may not yet be production-ready. These options allow you to try out new HotSpot JVM features, and they need to be unlocked by specifying the following:

-XX:+UnlockExperimentalVMOptions

For example, the ZGC garbage collector in JDK 11 can be accessed using

java -XX:+UnlockExperimentalVMOptions -XX:+UseZGC

Once an experimental feature becomes production-ready, the options that control it are no longer classed as experimental and do not require unlocking. The ZGC collector became a product option in JDK 15.

Manageable. These options can also be set at runtime via the MXBean API or other JDK tools. For example, to show locks held by java.util.concurrent classes in a HotSpot JVM thread dump use

java -XX:+PrintConcurrentLocks
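Because PrintConcurrentLocks is manageable, it can also be toggled while the HotSpot JVM is running, for example through the HotSpotDiagnosticMXBean. Here is a minimal sketch (the ManageableDemo class name is illustrative):

```java
import java.lang.management.ManagementFactory;

import com.sun.management.HotSpotDiagnosticMXBean;

public class ManageableDemo {
    public static void main(String[] args) {
        HotSpotDiagnosticMXBean bean =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        // Flip a manageable flag at runtime, with no restart required
        bean.setVMOption("PrintConcurrentLocks", "true");
        System.out.println(bean.getVMOption("PrintConcurrentLocks").getValue());  // prints true
    }
}
```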

Diagnostic. These options are related to accessing advanced diagnostic information about the HotSpot JVM. These options require you to use the following before they can be used:

-XX:+UnlockDiagnosticVMOptions

An example diagnostic option is

-XX:+LogCompilation

It instructs the HotSpot JVM to output a log file containing details of all the optimizations made by the JIT compilers. You can inspect this output to understand which parts of your program were optimized and to identify parts of your program that might not have been optimized as you expected.

The LogCompilation output is verbose but can be visualized in a tool such as JITWatch, which can tell you about method inlining, escape analysis, lock elision, and other optimizations that the HotSpot JVM made to your running code.

Developmental. These options allow configuration and debugging of the most-advanced HotSpot JVM settings, and they require a special debug HotSpot JVM build before you can access them.

Added and removed options


The addition and removal of option switches follows the arrival or deprecation of major features in the HotSpot JVM. Here are some notable points.

◉ In JDK 9 many of the -XX:+Print... and -XX:+Trace... logging options were removed and replaced by the -Xlog option for controlling the unified logging subsystem introduced by JEP 158.

◉ The option count peaked in JDK 11 at a whopping 1,504 after the addition of options for the experimental ZGC, Epsilon, and Shenandoah garbage collectors.

◉ There was a large drop in JDK 14 with the removal of the Concurrent Mark Sweep (CMS) garbage collector, as discussed in JEP 363.

Figure 4 shows the number of HotSpot JVM options in each version of OpenJDK.


Figure 4. Total number of options (including product, experimental, diagnostic, and developmental) in each version of OpenJDK

Table 1 shows the HotSpot JVM options that were in OpenJDK 16 that were removed from OpenJDK 17. Table 2 shows the new HotSpot JVM options added to OpenJDK 17.

Table 1. The HotSpot JVM options in OpenJDK 16 that were removed from OpenJDK 17


Table 2. The new HotSpot JVM options added to OpenJDK 17


So long and farewell!


So how does the HotSpot JVM development team manage the removal of options? Since JDK 9, removing a -XX option has followed a three-step process of deprecate, obsolete, and expire, which gives users plenty of warning that their Java command line may soon need to be updated.

Let’s look at how the HotSpot JVM responds to the -XX:+AggressiveOpts option, which was deprecated in JDK 11, obsoleted in JDK 12, and finally expired in JDK 13.

Deprecated options. These options are supported, but a warning is printed to let you know that support may be removed in the future, for example

./jdk11/bin/java -XX:+AggressiveOpts
OpenJDK 64-Bit Server VM warning: Option AggressiveOpts was deprecated in version 11.0 and will likely be removed in a future release.

Obsolete options. These options have been removed but they are still accepted on the command line. A warning is printed to let you know that these options might not be accepted in the future, for example

./jdk12/bin/java -XX:+AggressiveOpts
OpenJDK 64-Bit Server VM warning: Ignoring option AggressiveOpts; support was removed in 12.0

Expired options. These are deprecated or obsolete options that have an accept_until version less than or equal to the current JDK version. A warning is printed when these options are used in the JDK version in which they expired, for example

./jdk13/bin/java -XX:+AggressiveOpts
OpenJDK 64-Bit Server VM warning: Ignoring option AggressiveOpts; support was removed in 12.0

Total failure. Once you go past the JDK version in which an option was expired, the HotSpot JVM will fail to start when the option is passed and a warning is printed, for example

./jdk14/bin/java -XX:+AggressiveOpts
Unrecognized VM option 'AggressiveOpts'
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.

Sadly, not every option is retired in this orderly manner.

For example, JDK 9 dropped support for a large number of options when it introduced unified logging and the powerful -Xlog option, which is covered in detail in Nicolai Parlog’s blog. There is also a page on the Java documentation website for converting deprecated logging options to Xlog.

Migrating to a later JDK


So, how can you prepare for migrating your Java command line to a later JDK? Perhaps you’ve inherited a command line full of options you’ve never heard of and you are afraid to touch them for fear of destabilizing your application.

I have created a tool to help you: JaCoLine, the Java Command Line Inspector. You can paste in your command line, choose your target platform, and get an analysis of how your options will work. See Figure 5.


Figure 5. Analyzing command-line options with JaCoLine

These are a few of my favorite things!


Admit it: You came here seeking the magic turbo button that will supercharge your applications. Well, while there is no one-size-fits-all advice when it comes to HotSpot JVM tuning, there are certainly options I believe will help you better understand the execution of your program and make sensible configuration choices.

The following options are available in JDK 11 and later. I’m choosing these switches because many developers have not moved to later versions of Java. And remember, these are all optional; the HotSpot JVM’s defaults are very good.

First, understand memory usage. Allocating memory in the HotSpot JVM is cheap. Garbage collection cost is the tax that falls due sometime later in the form of execution pauses while the HotSpot JVM cleans up the no-longer-needed objects in the heap.

Understanding the heap allocations made by your code and the resulting GC behavior could be the lowest hanging fruit when it comes to improving application performance and stability, because a mismatch between the heap and GC configuration and your application’s allocation behavior can lead to excessive pauses that interrupt the progress of your application.

The JaCoLine Statistics web page confirms that configuring the heap and GC logging are the most popular options across all the command lines JaCoLine examined.

To configure the heap, consider the answers to the following questions:

◉ What is the maximum expected heap usage under normal conditions?
   ◉ -Xmx sets the maximum heap size, for example, -Xmx8g.
   ◉ -XX:MaxRAMPercentage=n sets the maximum heap as a percentage of total RAM.
◉ How quickly do you expect the heap to reach its maximum size?
   ◉ -Xms sets the initial heap size, for example, -Xms256m.
   ◉ -XX:InitialRAMPercentage=n sets the initial heap as a percentage of total RAM.
   ◉ If you expect the heap to grow rapidly, you can set the initial heap closer to the maximum heap value.
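To sanity-check what these settings (or the ergonomic defaults) resolve to at runtime, you can query the Runtime API from inside your program. A small sketch (the class name is illustrative; the exact figures depend on your machine):

```java
public class HeapInfo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // maxMemory() reflects -Xmx or -XX:MaxRAMPercentage;
        // totalMemory() is the currently committed heap, which starts
        // near -Xms and can grow toward the maximum
        System.out.printf("max heap:       %d MB%n", rt.maxMemory() / (1024 * 1024));
        System.out.printf("committed heap: %d MB%n", rt.totalMemory() / (1024 * 1024));
        System.out.printf("free in heap:   %d MB%n", rt.freeMemory() / (1024 * 1024));
    }
}
```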

To deal with OutOfMemoryError, consider how the HotSpot JVM should behave if your application runs out of memory.

◉ -XX:+ExitOnOutOfMemoryError tells the HotSpot JVM to exit on the first OutOfMemoryError. This can be useful if the HotSpot JVM will be automatically restarted.
◉ -XX:+HeapDumpOnOutOfMemoryError will help you diagnose memory leaks by dumping the contents of the heap to a file (java_pid<pid>.hprof in the working directory).
◉ -XX:HeapDumpPath defines the path for the heap dump.

Second, choose a garbage collector. The G1GC collector will be selected by default by the JDK 11 ergonomics process on most hardware, but it is not the only choice in JDK 11 and later.

Other garbage collectors available are

◉ -XX:+UseSerialGC selects the serial collector, which performs all GC work on a single thread.
◉ -XX:+UseParallelGC selects the parallel (throughput) collector, which can perform compaction using multiple threads.
◉ -XX:+UseConcMarkSweepGC selects the CMS collector. Note that the CMS collector was deprecated in JDK 9 and was removed in JDK 14.
◉ -XX:+UnlockExperimentalVMOptions -XX:+UseZGC selects the ZGC collector (experimental in JDK 11; ZGC became a product feature in JDK 15, so on later versions you won’t need the unlock switch).

Advice on selecting a collector for your application can be found in the HotSpot Virtual Machine Garbage Collection Tuning Guide. That’s the version of the document for JDK 11; if you are on a later version of Java, search for the updated documentation.

To avoid premature promotion, consider whether your application creates short-lived objects at a high allocation rate. That can lead to the premature promotion of short-lived objects to the old-generation heap space, where they will accumulate until a full garbage collection is needed.

◉ -XX:NewSize=n defines the initial size for the young generation.
◉ -XX:MaxNewSize=n defines the maximum size for the young generation.
◉ -XX:MaxTenuringThreshold=n is the maximum number of young-generation collections an object can survive before it is promoted to the old generation.

To log memory usage and GC activity, do the following:

◉ Get a full breakdown of the HotSpot JVM’s memory usage upon exit by using
-XX:+UnlockDiagnosticVMOptions -XX:NativeMemoryTracking=summary -XX:+PrintNMTStatistics.
◉ Enable GC logging with the following:
   ◉ -Xlog:gc provides basic GC logging.
   ◉ -Xlog:gc* provides verbose GC logging.

Finally, understand how the JIT compilers optimized your code. Once you are happy that your application’s GC pauses are at an acceptable level, you can check that the HotSpot JVM’s JIT compilers are optimizing the parts of your program you think are important for performance.

Enable simple compilation logging, as follows:

◉ -XX:+PrintCompilation prints basic information about each JIT compilation to the console.
◉ -XX:+UnlockDiagnosticVMOptions -XX:+PrintCompilation -XX:+PrintInlining adds information about method inlining.

Example output:

java -XX:+PrintCompilation
  77    1    3    java.lang.StringLatin1::hashCode (42 bytes)
  78    2    3    java.util.concurrent.ConcurrentHashMap::tabAt (22 bytes)
  78    3    3    jdk.internal.misc.Unsafe::getObjectAcquire (7 bytes)
  80    4    3    java.lang.Object::<init> (1 bytes)
  80    5    3    java.lang.String::isLatin1 (19 bytes)
  80    6    3    java.lang.String::hashCode (49 bytes)

The items in the output are (from left to right) as follows:

◉ The timestamp, in milliseconds since the HotSpot JVM started
◉ The compilation ID
◉ The tiered compilation level (level 3, shown here, means compiled by C1 with full profiling)
◉ The name of the method being compiled
◉ The size of the method’s bytecode

Source: oracle.com

Wednesday, January 26, 2022

Simpler object and data serialization using Java records


Learn how you can leverage the design of Java’s records to improve Java serialization.

Record classes enhance Java’s ability to model plain-data aggregates without a lot of coding verbosity or, in the phrase used in JEP 395, without too much ceremony. A record class declares some immutable state and commits to an API that matches that state. This means that record classes give up a freedom that classes usually enjoy—the ability to decouple their API from their internal representation—but in return, record classes become significantly more concise.

Record classes were a preview feature in Java 14 and Java 15 and became final in Java 16 in JEP 395. Here is a record class declared in the JDK’s jshell tool.

jshell> record Point (int x, int y) { }

| created record Point

The state of Point consists of two components, x and y. These components are immutable and can be accessed only via accessor methods x() and y(), which are automatically added to the Point class during compilation. Also added during compilation is a canonical constructor for initializing the components. For the Point record class, it is equivalent to the following:

public Point(int x, int y) {

 this.x = x;

 this.y = y;

}

Unlike the no-argument default constructor added to normal classes, the canonical constructor of a record class has the same signature as the state. (If an object needs mutable state, or state that is unknown when the object is created, a record class is not the right choice; you should declare a normal class instead.)
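Although the canonical constructor’s signature is fixed, a record class can still enforce invariants with a compact canonical constructor, which runs before the components are stored. A sketch (the Range record is a made-up example):

```java
public class RangeDemo {
    record Range(int lo, int hi) {
        // Compact canonical constructor: validates the components
        // before they are assigned to the record's fields
        Range {
            if (lo > hi) {
                throw new IllegalArgumentException("lo > hi");
            }
        }
    }

    public static void main(String[] args) {
        System.out.println(new Range(1, 5));  // prints Range[lo=1, hi=5]
        try {
            new Range(5, 1);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());  // prints rejected: lo > hi
        }
    }
}
```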

Here is Point being instantiated and used. In terms of terminology, we say that p, an instance of Point, is a record.

jshell> Point p = new Point(5, 10)

p ==> Point[x=5, y=10]

jshell> System.out.println("value of x: " + p.x())

value of x: 5

Taken together, the elements of a record class form a succinct protocol for you to rely on: The elements include a concise description of the state, a canonical constructor to initialize the state, and controlled access to the state. This design has many benefits, such as for object serialization.

What is object serialization?

Serialization is the process of converting an object into a format that can be stored on disk or transmitted over the network (also termed serialized or marshaled) and from which the object can later be reconstituted (deserialized or unmarshaled).

Serialization provides the mechanics for extracting an object’s state and translating it to a persistent format, as well as the means for reconstructing an object with equivalent state from that format. Given their nature as plain data carriers, records are well suited for this use case.

The idea of serialization is powerful, and many frameworks have implemented it, one of them being Java Object Serialization in the JDK, which we’ll refer to simply as Java Serialization.

In Java Serialization, any class that implements the java.io.Serializable interface is serializable. That’s suspiciously simple, right? However, the interface has no members and serves only to mark a class as serializable.

During serialization, the state of all nontransient fields is scraped (even for private fields) and written to the serial byte stream. During deserialization, a superclass no-argument constructor is called to create an object before its fields are populated with the state read from the serial byte stream. The format of the serial byte stream (the serialized form) is chosen by Java Serialization unless you use the special methods writeObject and readObject to specify a custom format.

Problems with Java Serialization

It’s not news that Java Serialization has flaws, and Brian Goetz’s June 2019 blog post, “Towards better serialization,” provides a summary of the problems.

The core of the problem is that Java Serialization was not designed as part of Java’s object model. This means that Java Serialization works with objects using backdoor techniques such as reflection, rather than relying on the API provided by an object’s class. For example, it is possible to create a new deserialized object without invoking one of its constructors, and data read from the serial byte stream is not validated against constructor invariants.
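A small demonstration of this behavior (class names are illustrative): the constructor of a serializable class is not invoked when an instance is deserialized, so any validation or side effects it contains are skipped.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class NoCtorDemo {
    static class Audited implements Serializable {
        static int ctorCalls = 0;
        int value;

        Audited(int value) {
            ctorCalls++;  // counts explicit constructor invocations
            this.value = value;
        }
    }

    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
            out.writeObject(new Audited(42));  // ctorCalls is now 1
        }
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()))) {
            Audited copy = (Audited) in.readObject();
            // The constructor was NOT called for the deserialized copy
            System.out.println(copy.value + " / constructor calls: " + Audited.ctorCalls);
            // prints 42 / constructor calls: 1
        }
    }
}
```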

Serialization with records

With Java Serialization, a record class is made serializable just like a normal class, simply by implementing java.io.Serializable.

jshell> record Point (int x, int y) implements Serializable { }

|  created record Point

However, under the hood, Java Serialization treats a record (that is, an instance of a record class) very differently than an instance of a normal class. (This July 2020 blog post by Chris Hegarty and Alex Buckley provides a good comparison.) The design aims to keep things as simple as possible and is based on two properties.

◉ The serialization of a record is based solely on its state components.

◉ The deserialization of a record uses only the canonical constructor.

Important note: No customization of the serialization process is allowed for records. That’s by design: The simplicity of this approach is enabled by, and is a logical continuation of, the semantic constraints placed on records.

Because a record is an immutable data carrier, a record can only ever have one state, which is the value of its components. Therefore, there is no need to allow customization of the serialized form.

Similarly, on the deserialization side, the only way to create a record is through the canonical constructor of its record class, whose parameters are known because they are identical to the state description.

Going back to the sample record class Point, the serialization of a Point object using Java Serialization looks as follows:

jshell> var out = new ObjectOutputStream(new FileOutputStream("serial.data"));

out ==> java.io.ObjectOutputStream@5f184fc6

jshell> out.writeObject(new Point(5, 10));

jshell> var in = new ObjectInputStream(new FileInputStream("serial.data"));

in ==> java.io.ObjectInputStream@504bae78

jshell> in.readObject();

$5 ==> Point[x=5, y=10]

Under the hood, a serialization framework can use the x() and y() accessors of Point during serialization to extract the state of p’s components, which are then written to the serial byte stream. During deserialization, the bytes are read from serial.data and the state is passed to the canonical constructor of Point to obtain a new record.

Overall, the design of records naturally fits the demands of serialization. The tight coupling of the state and the API facilitates an implementation that is more secure and easier to maintain. Furthermore, the design allows for some interesting efficiencies of the deserialization of records.

Optimizing record deserialization

For normal classes, Java Serialization relies heavily on reflection to set the private state of a newly deserialized object. However, record classes expose their state and means of reconstruction through a well-specified public API—which Java Serialization leverages.

The constrained nature of record classes drives a re-evaluation of Java Serialization’s strategy of reflection.

If, as outlined above, the API of a record class describes the state of a record, and since this state is immutable, the serial byte stream no longer has to be the single source of truth and the serialization framework doesn’t need to be the single interpreter of that truth.

Instead, the record class can take control of its serialized form, which can be derived from the components. Once the serialized form is derived, you can generate a matching instantiator based on that form ahead of time and store it in the class file of the record class.

In this way, control is inverted from Java Serialization (or any other serialization framework) to the record class. The record class now determines its own serialized form, which it can optimize, store, and make available as required.

This control inversion can enhance record deserialization in several ways, with two interesting areas being class evolution and throughput.

More freedom to evolve record classes. The potential for this arises from an existing well-specified feature of record deserialization: default value injection for absent stream fields. When no value is present in the serial byte stream for a particular record component, its default value is passed to the canonical constructor. The following example demonstrates this with an evolved version of the record class Point:

jshell> record Point (int x, int y, int z) implements Serializable { }

|  created record Point

After you serialized a Point record in the previous example, the serial.data file contained a representation of a Point with values for x and y only, not for z. For reasons of compatibility, however, you might want to be able to deserialize that original serialized object in the context of the new Point declaration. Thanks to the default value injection for absent field values, this is possible, and deserialization completes successfully.

jshell> var in = new ObjectInputStream(new FileInputStream("serial.data"));

in ==> java.io.ObjectInputStream@421faab1

jshell> in.readObject();

$3 ==> Point[x=5, y=10, z=0]

This feature can be taken advantage of in the context of record serialization. If you inject default values during deserialization, do those default values need to be represented in the serialized form? In this case, a more compact serialized form could still fully capture the state of the record object.

More generally, this feature also helps support record class versioning, and it makes serialization and deserialization overall more resilient to changes in record state across versions. Compared with normal classes, record classes are therefore even more suitable candidates for storing data.

More throughput when processing records. The other interesting area for enhancement is throughput during deserialization. Object creation during deserialization usually requires reflective API calls, which are expensive and hard to get right. These two problems can be addressed by making the reflective calls more efficient and by encapsulating the instantiation mechanics in the record class itself.

For this, you can leverage the power of method handles combined with dynamically computed constants.

The method handle API in java.lang.invoke was introduced in Java 7 and offers a set of low-level operations for finding, adapting, combining, and invoking methods and for setting fields. A method handle is a typed reference that allows transformations of arguments and return types, and if used wisely it can be faster than the traditional reflection API that dates back to Java 1.1. In this case, several method handles can be chained together to tailor the creation of records based on the serialized form of their record class.
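As a minimal sketch of the java.lang.invoke API (this is a hand-written illustration, not the code the JDK actually generates for records), a method handle can locate and invoke a record’s canonical constructor:

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

public class RecordHandleDemo {
    record Point(int x, int y) { }

    public static void main(String[] args) throws Throwable {
        MethodHandles.Lookup lookup = MethodHandles.lookup();
        // The canonical constructor's signature mirrors the record components
        MethodHandle ctor = lookup.findConstructor(
                Point.class, MethodType.methodType(void.class, int.class, int.class));
        Point p = (Point) ctor.invokeExact(5, 10);
        System.out.println(p);  // prints Point[x=5, y=10]
    }
}
```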

This method handle chain can be stored as a dynamically computed constant in the class file of the record class, which is lazily computed at first invocation.

Dynamically computed constants are amenable to optimizations by the JVM’s dynamic compiler, so the instantiation code adds only a small overhead to the footprint of the record class. With this, the record class is now in charge of both its serialized form and its instantiation code, and it no longer relies on other intermediaries or frameworks.

This strategy further improves performance and code reuse. It also reduces the burden on the serialization framework, which can now simply use the deserialization strategy provided by the record class, without writing complex and potentially unsafe mapping mechanisms.

Source: oracle.com

Monday, January 24, 2022

Java: Why a Set Can Contain Duplicate Elements


In low-latency applications, the creation of unnecessary objects is often avoided by reusing mutable objects to reduce memory pressure and thus the load on the garbage collector. This makes the application run much more deterministically and with much less jitter. However, care must be taken as to how these reused objects are used or else unexpected results might manifest themselves, for example in the form of a Set containing duplicate elements such as [B, B].

HashCode and Equals

Java’s built-in ByteBuffer provides direct access to heap and native memory using 32-bit addressing. Chronicle Bytes is a 64-bit addressing open-source drop-in replacement allowing much larger memory segments to be addressed. Both these types provide a hashCode() and an equals() method that depends on the byte contents of the objects’ underlying memory segment. While this can be useful in many situations, mutable objects like these should not be used in most of Java’s built-in Set types and not as a key in most built-in Map types.

Note: In reality, only 31 and 63 bits, respectively, may be used as an effective address offset, since the int and long offset parameters are signed.
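The content dependence of hashCode() can be seen with the JDK's ByteBuffer alone (a small illustrative snippet, not from the original article):

```java
import java.nio.ByteBuffer;

public class ContentHashDemo {
    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.wrap(new byte[]{'A'});
        int before = buf.hashCode();
        buf.put(0, (byte) 'B');   // absolute put: mutates content, not position
        int after = buf.hashCode();
        // The hash code follows the bytes, so mutation changes it
        System.out.println("hash changed: " + (before != after));
    }
}
```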

Mutable Keys

Below, a small code example is presented illustrating the problem with reused mutable objects. The code shows the use of Bytes but the very same problem exists for ByteBuffer.

Set<CharSequence> set = new HashSet<>();
Bytes<?> bytes = Bytes.from("A");
set.add(bytes);

// Reuse
bytes.writePosition(0);

// This mutates the existing object already
// in the Set
bytes.write("B");

// Adds the same Bytes object again but now under
// another hashCode()
set.add(bytes);

System.out.println("set = " + set);

The code above first adds an object with “A” as its content, meaning that the set contains [A]. The content of that existing object is then modified to “B”, which has the side effect of changing the set to contain [B] while leaving the old hash code value and the corresponding hash bucket unchanged (they effectively become stale). Lastly, the modified object is added to the set again, but now under another hash code, so the previous entry for that very same object remains!

As a result, rather than the perhaps anticipated [A, B], this will produce the following output:

set = [B, B]
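The same surprise can be reproduced with only the JDK's ByteBuffer, without any third-party library (an illustrative sketch):

```java
import java.nio.ByteBuffer;
import java.util.HashSet;
import java.util.Set;

public class DuplicateSetDemo {
    public static void main(String[] args) {
        Set<ByteBuffer> set = new HashSet<>();
        ByteBuffer buf = ByteBuffer.wrap(new byte[]{'A'});
        set.add(buf);             // stored under the hash of content "A"
        buf.put(0, (byte) 'B');   // mutates the element already in the set
        set.add(buf);             // re-added under the hash of content "B"
        // The very same object now occupies two slots
        System.out.println("size = " + set.size());
    }
}
```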

ByteBuffer and Bytes Objects as Keys in Maps

When using Java’s ByteBuffer objects or Bytes objects as keys in maps or as elements in sets, one solution is to use an IdentityHashMap or Collections.newSetFromMap(new IdentityHashMap<>()) to protect against the mutable-object peculiarities described above. This makes the hashing of the objects agnostic to the actual byte content; it will instead use System.identityHashCode(), which never changes during the object’s lifetime.

Another alternative is to use a read-only version of the objects (for example by invoking ByteBuffer.asReadOnlyBuffer()) and refrain from holding any reference to the original mutable object that could provide a back-door to modifying the supposedly read-only object’s content.
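Both remedies can be sketched using only the JDK: the identity-based set ignores content when hashing, and the read-only view rejects writes.

```java
import java.nio.ByteBuffer;
import java.nio.ReadOnlyBufferException;
import java.util.Collections;
import java.util.IdentityHashMap;
import java.util.Set;

public class SafeKeysDemo {
    public static void main(String[] args) {
        // Identity-based set: hashing ignores content, so mutation is harmless
        Set<ByteBuffer> set = Collections.newSetFromMap(new IdentityHashMap<>());
        ByteBuffer buf = ByteBuffer.wrap(new byte[]{'A'});
        set.add(buf);
        buf.put(0, (byte) 'B');
        set.add(buf);               // same object identity: not added twice
        System.out.println("size = " + set.size());

        // Read-only view: writes through the view are rejected
        ByteBuffer ro = buf.asReadOnlyBuffer();
        try {
            ro.put(0, (byte) 'C');
        } catch (ReadOnlyBufferException e) {
            System.out.println("read-only view rejected the write");
        }
    }
}
```

Note that the read-only view only helps if no reference to the original mutable buffer is retained, as the text above points out.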

Chronicle Map and Chronicle Queue

Chronicle Map is an open-source library that works a bit differently from the built-in Java Map implementations: objects are serialized and stored in off-heap memory. This opens up for ultra-large maps that can exceed the RAM allocated to the JVM, and it allows these maps to be persisted to memory-mapped files so that applications can restart much faster.

The serialization process has another lesser-known advantage: it allows reusable mutable objects as keys, because the content of the object is copied and effectively frozen each time a new association is put into the map. Subsequent modifications of the mutable object will therefore not affect the frozen serialized content, allowing unrestricted object reuse.

Open-source Chronicle Queue works in a similar fashion and can provide queues that can hold terabytes of data persisted to secondary storage and, for the same reason as Chronicle Map, allows object reuse of mutable elements.

Source: javacodegeeks.com

Friday, January 21, 2022

Compile Time Polymorphism in Java


Polymorphism in Java refers to an object’s capacity to take several forms. Polymorphism allows us to perform the same action in multiple ways in Java.

Polymorphism is divided into two types:

1. Compile-time polymorphism

2. Run time polymorphism

Note: Run-time polymorphism is implemented through method overriding, whereas compile-time polymorphism is implemented through method overloading and operator overloading.

In this article, we will see Compile time polymorphism.

Compile-time Polymorphism

Compile-time polymorphism is also known as static polymorphism or early binding. It is polymorphism that is resolved during the compilation process: the compiler selects which overloaded method to call based on the reference type and the argument types at the call site. Compile-time polymorphism is achieved by method overloading and operator overloading.

1. Method overloading

We can have two or more methods with the same name that are distinguishable solely by the number, type, or order of their parameters.

Method overloading occurs when a class has several methods with the same name but different parameter lists: a different number of parameters, different data types, or both.

Example: 

void ojc() { ... }
void ojc(int num1) { ... }
void ojc(float num1) { ... }
void ojc(int num1, float num2) { ... }

(a). Method overloading by changing the number of parameters 

In this type, method overloading is done by declaring methods with a varied number of parameters.

Example:

show( char a )
show( char a, char b )

In the given example, the first show method has one parameter and the second show method has two parameters. When a function is called, the compiler looks at the number of arguments and decides how to resolve the method call.

// Java program to demonstrate the working of method
// overloading by changing the number of parameters

public class MethodOverloading {

    // 1 parameter
    void show(int num1)
    {
        System.out.println("number 1 : " + num1);
    }

    // 2 parameters
    void show(int num1, int num2)
    {
        System.out.println("number 1 : " + num1
                + " number 2 : " + num2);
    }

    public static void main(String[] args)
    {
        MethodOverloading obj = new MethodOverloading();
        // 1st show function
        obj.show(3);
        // 2nd show function
        obj.show(4, 5);
    }
}

Output

number 1 : 3
number 1 : 4 number 2 : 5

In the above example, we implement method overloading by changing the number of parameters. We created two methods, show(int num1) and show(int num1, int num2): the first displays one number and the second displays two numbers.

(b). Method overloading by changing Datatype of parameter

In this type, method overloading is done by declaring methods with different parameter types.

Example:

show( float a, float b )
show( int a, int b ) 

In the above example, the first show method has two float parameters, and the second show method has two int parameters. When a function is called, the compiler looks at the data type of input parameters and decides how to resolve the method call.

Program:

// Java program to demonstrate the working of method
// overloading by changing the Datatype of parameter

public class MethodOverloading {

// arguments of this function are of integer type
static void show(int a, int b)
{
System.out.println("This is integer function ");
}

// arguments of this function are of double type
static void show(double a, double b)
{
System.out.println("This is double function ");
}

public static void main(String[] args)
{
// 1st show function
show(1, 2);
// 2nd show function
show(1.2, 2.4);
}
}

Output

This is integer function 
This is double function 

In the above example, we changed the data type of the parameters of both functions. In the first show() function the parameter type is int; given integer inputs, the output is ‘This is integer function.’ In the second show() function the parameter type is double; given double inputs, the output is ‘This is double function.’

(c). By changing the sequence of parameters 

In this type, overloading is dependent on the sequence of the parameters 

Example:

show( int a, float b ) 
show( float a, int b )

In this example, the parameters int and float are used in the first declaration. The second declaration uses the same parameter types, but their order in the parameter list is different.

// Java program to demonstrate the working of method
// overloading by changing the sequence of parameters

public class MethodOverloading {

// arguments of this function are of int and char type
static void show(int a, char ch)
{
System.out.println("integer : " + a
+ " and character : " + ch);
}

// argument of this function are of char and int type
static void show(char ch, int a)
{
System.out.println("character : " + ch
+ " and integer : " + a);
}

public static void main(String[] args)
{
// 1st show function
show(6, 'O');

// 2nd show function
show('O', 7);
}
}

Output

integer : 6 and character : O
character : O and integer : 7

In the above example, in the first show function the parameters are int and char, and in the second show function the parameters are char and int; only the sequence of the data types changed.

Invalid cases of method overloading

Method overloading does not allow changing only the return type of a method, as that causes ambiguity.

Examples

int sum(int, int);
String sum(int, int);

Because the parameter lists match, the code above will not compile: both methods have the same number of parameters and the same sequence of data types, differing only in return type.

2. Operator Overloading 

An operator is said to be overloaded if it can be used to perform more than one function. Operator overloading is an overloading method in which an existing operator is given a new meaning. In Java, the + operator is overloaded. Java, on the other hand, does not allow for user-defined operator overloading. To add integers, the + operator can be employed as an arithmetic addition operator. It can also be used to join strings together.

// Java program to demonstrate the
// working of operator overloading

public class OJC {

// function for adding two integers
void add(int a, int b)
{
int sum = a + b;
System.out.println(" Addition of two integer :"
+ sum);
}

// function for concatenating two strings
void add(String s1, String s2)
{
String con_str = s1 + s2;
System.out.println("Concatenated strings :"
+ con_str);
}

public static void main(String args[])
{
OJC obj = new OJC();
// addition of two numbers
obj.add(10, 10);
// concatenation of two string
obj.add("Operator ", " overloading ");
}
}

Output

Addition of two integer :20
Concatenated strings :Operator  overloading 

In the above example, The ‘+’ operator has been overloaded. When we send two numbers to the overloaded method, we get a sum of two integers, and when we pass two Strings, we get the concatenated text.

Advantages of Compile-time Polymorphism:

1. It improves code clarity and allows for the use of a single name for similar procedures.
2. It has a faster execution time since the method binding is resolved early, during compilation.

The only disadvantage of compile-time polymorphism is that it doesn’t involve inheritance: the choice of method is not based on the runtime type of an object.

Source: geeksforgeeks.org

Wednesday, January 19, 2022

Quiz yourself: Java abstract classes and access modifiers for abstract methods


It’s essential to declare classes properly to ensure methods are accessible.


Your software uses two classes that are part of the Object Relational Mapping (ORM) framework.

package orm.core;

public abstract class Connection {
  abstract void connect(String url);
}

package orm.impl;

import orm.core.Connection;

public abstract class DBConnection extends Connection {
  protected void connect(String url) { /* open connection */ }
}

You have decided to create your own concrete connection class based on the DBConnection class.

package server;

import orm.impl.DBConnection;

public class ServerDBConnection extends DBConnection {
  ...
}

Which statement is correct? Choose one.

A. The Connection class fails to compile.

B. The DBConnection class fails to compile.

C. The ServerDBConnection class cannot be properly implemented.

D. The ServerDBConnection class successfully compiles if you provide the following method inside the class body:

public void connect(String url) { /* */ }

Answer. This question investigates abstract classes and access modifiers for abstract methods.

Option A is incorrect because the Connection class is properly declared: It declares an abstract method, but that’s permitted since it is an abstract class. However, notice that the connect() method has a default accessibility, which means that it’s accessible only inside the orm.core package. This has consequences for how it can be implemented.

As a side note, an abstract method cannot have private accessibility. A private element of a parent class is essentially invisible from the source of a child type. Consequently a private abstract method could never be implemented, so that combination is prohibited.

Consider option B. The DBConnection class successfully compiles. Although it neither sees nor implements the Connection.connect() method, that does not cause a problem. Why? Because the DBConnection class is marked as abstract, it’s acceptable for it to contain abstract methods, whether from a superclass or declared in its own body. Because the class compiles, option B is incorrect.

Option D is also incorrect: Attempting to add a public connect() method in the ServerDBConnection class cannot provide an implementation for the abstract method in the Connection class because it’s not in the orm.core package.

Unless the ServerDBConnection class is in the package orm.core, the ServerDBConnection class cannot implement the Connection.connect() method. Knowing this fact is at the heart of this question.

Because the code cannot implement all the abstract methods from the ServerDBConnection class’s parentage, it cannot be properly defined as a concrete class. This makes option C correct.

To fix the code, you can add the protected access modifier before the Connection.connect() method. The modifier will make DBConnection.connect() implement the method properly, and the ServerDBConnection class could even compile without providing an implementation of the connect() method.

Alternatively, moving the ServerDBConnection class into the orm.core package would allow a proper implementation of the connect() method in its current form.
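The protected-modifier fix can be sketched as follows. (For brevity, and so the sketch compiles as a single file, the three packages are collapsed into one here; in the original scenario the classes live in orm.core, orm.impl, and server, and the fix works because protected members are inherited across packages.)

```java
// Sketch of the proposed fix: the abstract method is protected
// instead of package-private.
abstract class Connection {
    protected abstract void connect(String url);
}

abstract class DBConnection extends Connection {
    @Override
    protected void connect(String url) {
        System.out.println("connected to " + url);
    }
}

public class ServerDBConnection extends DBConnection {
    public static void main(String[] args) {
        // DBConnection.connect() now properly implements the abstract method,
        // so ServerDBConnection compiles without adding anything
        new ServerDBConnection().connect("jdbc:example");
    }
}
```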

Conclusion. The correct answer is option C.

Source: oracle.com

Monday, January 17, 2022

12 handy debugging tips from Cay Horstmann’s Core Java

From using jconsole to monitoring uncaught exceptions, here are a dozen tips that may be worth trying before you launch your favorite IDE’s debugger.


[This article on Java debugging is adapted from Core Java Volume I: Fundamentals, 12th Edition, by Cay S. Horstmann, published by Oracle Press. —Ed.]

Suppose you wrote your Java program and made it bulletproof by catching and properly handling all the exceptions. Then you run it, and it does not work correctly.

Now what? (If you never have this problem, you can skip this article.)

Of course, it is best if you have a convenient and powerful debugger, and debuggers are available as a part of IDEs. That said, here are a dozen tips worth trying before you launch your IDE’s debugger.

Tip 1. You can print or log the value of any variable with code like the following:

System.out.println("x=" + x);

or

Logger.getGlobal().info("x=" + x);

If x is a number, it is converted to its string equivalent. If x is an object, Java calls its toString method. To get the state of the implicit parameter object, print the state of the this object.

Logger.getGlobal().info("this=" + this);

Most of the classes in the Java library are very conscientious about overriding the toString method to give you useful information about the class. This is a real boon for debugging. You should make the same effort in your classes.
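For instance, a class with a descriptive toString override produces readable debug output (the Point class here is just an illustration):

```java
public class ToStringDemo {
    static class Point {
        private final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
        @Override
        public String toString() {
            return "Point[x=" + x + ", y=" + y + "]";
        }
    }

    public static void main(String[] args) {
        Point p = new Point(3, 4);
        // Without the override this would print something like Point@1b6d3586
        System.out.println("p=" + p);
    }
}
```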

Tip 2. One seemingly little-known but very useful trick is putting a separate main method in each class. Inside it, you can put a unit test stub that lets you test the class in isolation.

public class MyClass
{
   // the methods and fields
   . . .

   public static void main(String[] args)
   {
      // the test code
   }
}

Make a few objects, call all methods, and check that each of them does the right thing. You can leave all these main methods in place and launch the Java Virtual Machine separately on each of the files to run the tests.

When you run an applet, none of these main methods are ever called.

When you run an application, the JVM calls only the main method of the startup class.

Tip 3. If you liked the preceding tip, you should check out JUnit. JUnit is a very popular unit testing framework that makes it easy to organize suites of test cases.

Run the tests whenever you make changes to a class, and add another test case whenever you find a bug.

Tip 4. A logging proxy is an object of a subclass that intercepts method calls, logs them, and then calls the superclass. For example, if you have trouble with the nextDouble method of the Random class, you can create a proxy object as an instance of an anonymous subclass, as follows:

var generator = new Random()
   {
      public double nextDouble()
      {
         double result = super.nextDouble();
         Logger.getGlobal().info("nextDouble: " + result);
         return result;
      }
   };

Whenever the nextDouble method is called, a log message is generated.

To find out who called the method, generate a stack trace.

Tip 5. You can get a stack trace from any exception object by using the printStackTrace method in the Throwable class. The following code catches any exception, prints the exception object and the stack trace, and rethrows the exception so it can find its intended handler:

try
{
   . . .
}
catch (Throwable t)
{
   t.printStackTrace();
   throw t;
}

You don’t even need to catch an exception to generate a stack trace. Simply insert the following statement anywhere into your code to get a stack trace:

Thread.dumpStack();

Tip 6. Normally, the stack trace is displayed on System.err. If you want to log or display the stack trace, here is how you can capture it into a string.

var out = new StringWriter();
new Throwable().printStackTrace(new PrintWriter(out));
String description = out.toString();

Tip 7. It is often handy to trap program errors in a file. However, errors are sent to System.err, not System.out. Therefore, you cannot simply trap them by running

java MyProgram > errors.txt

Instead, capture the error stream as

java MyProgram 2> errors.txt

To capture both System.err and System.out in the same file, use

java MyProgram 1> errors.txt 2>&1

This works in bash and in the Windows shell.

Tip 8. Having the stack traces of uncaught exceptions show up in System.err is not ideal. These messages are confusing to end users if they happen to see them, and they are not available for diagnostic purposes when you need them.

A better approach is to log the uncaught exceptions to a file. You can change the handler for uncaught exceptions with the static Thread.setDefaultUncaughtExceptionHandler method.

Thread.setDefaultUncaughtExceptionHandler(
   new Thread.UncaughtExceptionHandler()
   {
      public void uncaughtException(Thread t, Throwable e)
      {
         // save information in log file
      }
   });
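Since Thread.UncaughtExceptionHandler is a functional interface, the handler can also be written as a lambda. This small demo (logging to the console rather than a file, purely for illustration) shows the handler firing for a worker thread:

```java
public class UncaughtDemo {
    public static void main(String[] args) throws InterruptedException {
        // Install a default handler for all threads
        Thread.setDefaultUncaughtExceptionHandler(
                (t, e) -> System.out.println(
                        "uncaught in " + t.getName() + ": " + e.getMessage()));

        Thread worker = new Thread(
                () -> { throw new IllegalStateException("boom"); }, "worker");
        worker.start();
        worker.join();   // the handler runs on the dying thread before join returns
    }
}
```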

Tip 9. To watch classes loading, launch the JVM with the -verbose flag. You will get a printout such as in Figure 1.

Figure 1. What you see after using the -verbose flag.

This report can occasionally be helpful to diagnose classpath problems.

Tip 10. The -Xlint option tells the compiler to spot common code problems. For example, if you compile with the command

javac -Xlint sourceFiles

the compiler will report missing break statements in switch statements. (The word lint originally described a tool for locating potential problems in C programs but is now generically applied to any tools that flag constructs that are questionable but not illegal.)

You will get messages such as “warning: [fallthrough] possible fall-through into case.”

The string in square brackets identifies the warning category. You can enable and disable each category. Since most of them are quite useful, it seems best to leave them all in place and disable only those that you don’t care about, as follows:

javac -Xlint:all,-fallthrough,-serial sourceFiles

You can see a list of all available warnings by using this command.

javac --help -X

Tip 11. The JVM supports the monitoring and management of Java applications by allowing the installation of agents in the virtual machine that track memory consumption, thread usage, class loading, and so on. These features are particularly important for large and long-running Java programs, such as application servers.

As a demonstration of these capabilities, the JDK ships with a graphical tool called jconsole that displays statistics about the performance of a virtual machine (see Figure 2). Start your program, and then start jconsole and pick your program from the list of running Java programs.

Figure 2. jconsole gives you a wealth of information about your running program.

Tip 12. Java Mission Control is a professional-level profiling and diagnostics tool, available for download.

Like jconsole, Java Mission Control can attach to a running virtual machine. It can also analyze the output from Oracle Java Flight Recorder, a tool that collects diagnostic and profiling data from a running Java application.

Source: oracle.com

Wednesday, January 12, 2022

Java Program to Find the Biggest of 3 Numbers


A Simple Java Program To Find Largest Of Three Numbers.

1. Overview

Today you’ll learn how to find the biggest of three numbers. This is also a very common interview question, but the interviewer will look for optimized code with fewer lines. We will show you all the common variants and how most Java developers approach the problem.

For example, given the three numbers 4, 67, and 8, the biggest is 67. To find it, we need to compare all the numbers.

2. Program 1: To find the biggest of three numbers using if-else 

First, an example program reads the three values from the user using the Scanner class and the nextInt() method. Then an if-else condition finds the largest number.

The Scanner should be closed at the end of the program.

If a > b && a > c is true, then a is the largest.

If b > a && b > c is true, then b is the largest.

Otherwise, c is the largest.

package com.oraclejavacertified.engineering.programs;

import java.util.Scanner;

public class BiggestOfThree1 {

    public static void main(String[] args) {
        Scanner scanner = new Scanner(System.in);
        System.out.println("Enter first number : ");
        int a = scanner.nextInt();
        System.out.println("Enter second number : ");
        int b = scanner.nextInt();
        System.out.println("Enter third number : ");
        int c = scanner.nextInt();

        if (a > b && a > c) {
            System.out.println(a + " is the largest");
        } else if (b > a && b > c) {
            System.out.println(b + " is the largest");
        } else {
            System.out.println(c + " is the largest");
        }
        scanner.close();
    }
}

Output:

Enter first number : 10
Enter second number : 30
Enter third number : 20
30 is the largest

3. Program 2: To find the biggest of three numbers using nested if-else

package com.oraclejavacertified.engineering.programs;

public class BiggestOfThree2 {

    public static void main(String[] args) {
        int a = 10;
        int b = 30;
        int c = 20;

        if (a > b) {
            if (a > c) {
                System.out.println(a + " is the largest");
            } else {
                System.out.println(c + " is the largest");
            }
        } else {
            if (b > c) {
                System.out.println(b + " is the largest");
            } else {
                System.out.println(c + " is the largest");
            }
        }
    }
}

This code produces the same output as above, but it is less clear and harder to understand.

4. Program 3: To find the biggest of three numbers using if-else with reducing the condition logic

package com.oraclejavacertified.engineering.programs;

public class BiggestOfThree3 {

    public static void main(String[] args) {
        int a = 10;
        int b = 30;
        int c = 20;

        if (a > b && a > c) {
            System.out.println(a + " is the largest");
        } else if (b > c) {
            System.out.println(b + " is the largest");
        } else {
            System.out.println(c + " is the largest");
        }
    }
}

This code is clear and easy to understand. If a > b && a > c is true, then a is the largest; if it is false, a is not the biggest, so the biggest must be either b or c. The next check, b > c, determines whether b or c is bigger.

5. Program 4: To find the biggest of three numbers using the nested ternary operator

The code below is based on the ternary operator, which returns a value. We have wrapped all the conditions into a single line, which is compact but not very readable.

package com.oraclejavacertified.engineering.programs;

public class BiggestOfThree4 {

    public static void main(String[] args) {
        int a = 10;
        int b = 30;
        int c = 20;

        int biggest = (a > b && a > c) ? a : ((b > c) ? b : c);
        System.out.println(biggest + " is the largest");
    }
}
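One more variant, not shown in the original article, uses the built-in Math.max to express the comparison directly (package declaration omitted for brevity):

```java
public class BiggestOfThree5 {
    public static void main(String[] args) {
        int a = 10;
        int b = 30;
        int c = 20;

        // Math.max of the inner pair, then of the result and a
        int biggest = Math.max(a, Math.max(b, c));
        System.out.println(biggest + " is the largest");
    }
}
```

This avoids writing the boolean conditions by hand, at the cost of not telling you which variable held the maximum.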

Source: javacodegeeks.com