Sunday, January 22, 2023

Oracle 1Z0-811 Certification: The Way to Get Powerful Achievement


Qualifying for the Oracle Java Foundations (1Z0-811) exam leads the candidates to earn the Java Foundations Certified Junior Associate credentials. The process provides the candidate with the fundamentals of Java programming, enabling them to showcase their conceptual understanding and abilities.

This Oracle 1Z0-811 certification validates the candidate’s capabilities to a future employer, demonstrating their potential to become an increasingly valuable asset to any organization as they progress to the OCA level early in their employment and later to the OCP level.

The Java Foundations certification is also called the Oracle Certified Foundations Associate exam. Earning the associated certification means that you are competent with the fundamentals of Java programming, enabling you to demonstrate both conceptual knowledge and practical skills.

Some Tips on How to Get the Oracle 1Z0-811 Certification

The Oracle 1Z0-811 certification exam is one of the most well-known credentials that help professionals advance their careers. Acquiring this certificate can give your career a new perspective and direction, and it connects you with high-paying opportunities in a variety of sectors around the world.

1. Creating Your Study Plan

Consider two main factors when developing your study plan: Budget and Time.

How much you intend to invest in your Oracle 1Z0-811 exam preparation is essential to your overall study plan. Knowing what you are prepared to spend will help you decide how much material you can tackle.

Setting a precise budget for resources, training, courses, simulators, etc., will help you create an accurate schedule of the material you are going through for your preparation. Start with setting up a specific budget, and then research the best resources you can rely on for studying.

2. Familiarize Yourself with the Evaluation Information

Before moving any further, it is wise to gather crucial information about the actual exam, figure out the eligibility criteria, and understand its format. This knowledge is required to create a proper study plan and test success strategy.

3. Join the Brigade

Aspiring 1Z0-811 candidates have an added advantage over others regarding community support. More than 1,000,000 people have already earned this prestigious credential and are ready to offer a helping hand to those who have just embarked on, or are planning to commence, the 1Z0-811 certification journey. In particular, the Oracle website and the LinkedIn group are two highly dependable places when you want real-world, verified, and practical exam prep advice.

4. Add the Online 1Z0-811 Exam Simulators to Your Prep Strategy

There is an adage that says practice makes perfect. The Oracle 1Z0-811 exam is challenging, so as part of your preparation you should work through several sample questions.

Taking practice exams is an integral part of preparing for any exam, and it is all the more crucial for the 1Z0-811 certification. So, online 1Z0-811 simulators will enable you to assess your preparation level while increasing your confidence and diligence to tackle exam pressure.

5. Flashcards for the Win

With this incredibly compact and cost-effective strategy, you are prepared and fired up to comprehend more complex ideas in an enjoyable manner that stimulates your mind. Making your own Oracle 1Z0-811 exam flashcards can also greatly assist when you are studying. However, electronic ones also function nicely if time is of the essence.

6. Community Learning

Discussion boards and study groups are excellent tools for improving retention. You can assist others, get your 1Z0-811 exam questions answered, and pick up time-saving tips and tricks by actively participating in them.

7. Take the 1Z0-811 Exam and Believe in Yourself

You will pass with flying colors as long as you are confident with the fundamentals the exam covers: Java syntax, data types, operators, decision and looping constructs, arrays, methods, and basic object-oriented concepts.

Beyond the exam itself, pairing your Java Foundations credential with solid communication and teamwork skills is a great way to stand apart and get noticed early in your career.

Final Say

No certification journey is easy, and the strict and tedious structure of the 1Z0-811 certification test makes it a tough nut to crack. However, as mentioned earlier, the right attitude, shaped by the pointers above, can crack it. So, follow the above tips and win over all difficulties.

Best of luck!

Friday, January 20, 2023

Minborg’s Java Pot

Did you know you can allocate memory segments that are larger than the physical size of your machine’s RAM and indeed larger than the size of your entire file system? Read this article and learn how to make use of mapped memory segments that may or may not be “sparse” and how to allocate 64 terabytes of sparse data on a laptop.

Mapped Memory


Mapped memory is virtual memory that has been assigned a one-to-one mapping to a portion of a file. The term “file” is quite broad here: it may be a regular file, a device, shared memory, or anything else the operating system can refer to via a file descriptor.

Accessing files via mapped memory is often much faster than accessing a file via the standard read and write file operations. Because mapped memory is operated on directly, some interesting solutions can also be built on atomic memory operations such as compare-and-set, allowing very efficient inter-thread and inter-process communication channels.

Because not all parts of the mapped virtual memory must reside in real memory at the same time, a mapped memory segment might be much larger than the physical RAM in the machine it is running in. If a portion of the mapped memory is not available when accessed, the operating system will temporarily suspend the current thread and load the missing page after which operation may resume again.

Mapped files have other advantages: they can be shared across processes running in different JVMs, and the files remain persistent and can be inspected using any file tool, such as hexdump.
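As a sketch of the idea (the class name and file handling here are illustrative, not from the article), the long-standing java.nio MappedByteBuffer API, available since Java 1.4, already lets you operate on file-backed memory directly:

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import static java.nio.file.StandardOpenOption.CREATE;
import static java.nio.file.StandardOpenOption.READ;
import static java.nio.file.StandardOpenOption.WRITE;

public class MappedDemo {
    // Writes an int directly into mapped memory and reads it back.
    static int roundTrip() throws IOException {
        Path file = Files.createTempFile("mapped", ".bin");
        try (FileChannel fc = FileChannel.open(file, CREATE, READ, WRITE)) {
            // Map the first 4 KiB of the file into virtual memory.
            MappedByteBuffer buf = fc.map(FileChannel.MapMode.READ_WRITE, 0, 4096);
            buf.putInt(0, 42);   // a plain memory store, not a write() syscall
            buf.force();         // flush the dirty page back to the file
            return buf.getInt(0);
        } finally {
            Files.deleteIfExists(file);
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(roundTrip());
    }
}
```

The store and load go through the page cache rather than through explicit I/O system calls, which is where the speed advantage comes from.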

Setting up a Mapped Memory Segment


The new Foreign Function & Memory API, which is in its second preview in Java 20, allows large memory segments to be mapped to a file. Here is how you can create a 4 GiB memory segment backed by a file:

Set<OpenOption> opts = Set.of(CREATE, READ, WRITE);
try (FileChannel fc = FileChannel.open(Path.of("myFile"), opts);
     Arena arena = Arena.openConfined()) {

    MemorySegment mapped =
            fc.map(READ_WRITE, 0, 1L << 32, arena.scope());
    use(mapped);
} // Resources allocated by "mapped" are released here via try-with-resources

Sparse Files


A sparse file is a file whose information can be stored efficiently when not all portions of the file are actually used. A file with large unused “holes” is an example of such a file: only the used sections are actually stored in the underlying physical file. In reality, however, the unused holes also consume some resources, albeit much less than their used counterparts.

Figure 1. A logical sparse file where only actual data elements are stored in the physical file.

As long as the sparse file is not filled with too much data, it is possible to allocate a sparse file that is much larger than the available physical disk space. For example, it is possible to allocate an empty 10 TB memory segment backed by a sparse file on a filesystem with very little available capacity. 

It should be noted that not all platforms support sparse files.
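Even without the preview FFM API, the sparse effect can be observed with the standard java.nio.file API. The following is a minimal sketch (the class name and file name are illustrative); on platforms without sparse support the logical size reported is the same, but physical blocks are actually allocated:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import static java.nio.file.StandardOpenOption.CREATE_NEW;
import static java.nio.file.StandardOpenOption.SPARSE;
import static java.nio.file.StandardOpenOption.WRITE;

public class SparseDemo {
    static final long HOLE = 1L << 24; // 16 MiB hole before the single data byte

    // Creates a sparse file with one byte written after a 16 MiB hole
    // and returns the file's logical size.
    static long logicalSize(Path p) throws IOException {
        Files.deleteIfExists(p);
        try (FileChannel fc = FileChannel.open(p, CREATE_NEW, SPARSE, WRITE)) {
            fc.write(ByteBuffer.wrap(new byte[] {1}), HOLE);
            return fc.size(); // logical size: hole + 1 byte
        }
    }

    public static void main(String[] args) throws IOException {
        Path p = Path.of("sparse-demo.bin");
        System.out.println(logicalSize(p));
        Files.deleteIfExists(p);
    }
}
```

On a sparse-capable file system, du on the resulting file reports far less than its logical 16 MiB size.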

Setting up a Sparsely Mapped Memory Segment 


Here is an example of how to create and access the contents of a file via a memory-mapped MemorySegment where the contents are sparse; the real underlying data in the file expands automatically as needed:

Set<OpenOption> opts = Set.of(CREATE_NEW, SPARSE, READ, WRITE);
try (var fc = FileChannel.open(Path.of("sparse"), opts);
     var arena = Arena.openConfined()) {

    MemorySegment mapped =
            fc.map(READ_WRITE, 0, 1L << 32, arena.scope());
    use(mapped);
} // Resources allocated by "mapped" are released here via try-with-resources

Note: The file will appear to consist of 4 GiB of data, but in reality it occupies no file-system space at all:

pminborg@pminborg-mac ntive % ll sparse

-rw-r--r--  1 pminborg  staff  4294967296 Nov 14 16:12 sparse

pminborg@pminborg-mac ntive % du -h sparse

  0B sparse

Going Colossal


The implementation of sparse files varies across the many platforms supported by Java, so sparse-file properties will differ depending on where an application is deployed.

I am using a Mac M1 under macOS Monterey (12.6.1) with 32 GiB RAM and 1 TiB storage (of which 900 GiB are available).

I was able to map a single sparse file of up to 64 TiB using a single mapped memory segment on my machine (using its standard settings):

  4 GiB -> ok as demonstrated above

  1 TiB -> ok

 32 TiB -> ok

 64 TiB -> ok

128 TiB -> failed with OutOfMemoryError

It is possible to increase the amount of mappable memory but this is out of the scope for this article. In real applications, it is better to have smaller portions of a sparse file mapped into memory rather than mapping the entire sparse file in one chunk. These smaller mappings will then act as “windows” into the larger underlying file.

 Anyhow, this looks pretty colossal:

-rw-r--r--   1 pminborg  staff  70368744177664 Nov 22 13:34 sparse

Creating the empty 64 TiB sparse file took about 200 ms on my machine.

Unrelated Observations on Thread Confinement


As can be seen above, it is possible to access the same underlying physical memory from different threads (and indeed even different processes) with file mapping despite being viewed through several distinct thread-confined MemorySegment instances.

Source: javacodegeeks.com

Wednesday, January 18, 2023

Quiz yourself: Multithreading and the Java keyword synchronized

The goal is to obtain consistent results and avoid unwanted effects.


Imagine that you are working with multiple instances of the following SyncMe class, and the instances are used by multiple Java threads:


public class SyncMe {
    protected static synchronized void hi() {
        System.out.print("hi ");
        System.out.print("there! ");
    }
    public synchronized void bye() {
        System.out.print("bye ");
        System.out.print("there! ");
    }
    public synchronized void meet() {
        hi();
        bye();
    }
}

What statements are true about the class? Choose two.

A. Concurrent calls to the hi() method can sometimes print hi hi.

B. Concurrent calls to the bye() method can sometimes print bye bye.

C. Concurrent calls to the meet() method always print hi there! bye there!.

D. Concurrent calls to the meet() method can print bye bye.

E. Concurrent calls to the meet() method can print hi hi.

Answer. This question investigates the meaning and effect of the keyword synchronized and the possible behavior of code that uses it in a multithreaded environment.

One fundamental aspect of the keyword synchronized is that it behaves rather like a door.

◉ When a thread encounters such a door, it cannot execute past that point unless that thread carries, or can obtain, the right key to open the door.

◉ When the thread enters the region behind the door (the synchronized block), it keeps the key until it exits that region.

◉ When the thread exits the synchronized block, the thread is supposed to put the key back on the hook, meaning that another thread could potentially take the key and pass through the door.

Upon simple analysis, this behavior prevents any other thread from passing through that door into the synchronized block while the first thread is executing behind the door.

(This discussion won’t go into what happens if the key were already held by the thread at the point when it reached the door. Although that’s important to understand in the big scheme, it’s not necessary for this question because it does not happen in this example. Frankly, we’re also ignoring quite a bit of additional complexity that can arise in situations more complex than this question presents.)

In the real world, of course, it’s possible that several doors might require the same key or they might require different keys. The same is true in Java code, and for this question you must understand the different keys and the doors those keys open. Then you must think through how the code might behave when it’s run in a multithreaded environment.

The general form of the keyword synchronized is that it takes an object as a parameter, such as the following:

void doSyncStuff() {
  synchronized(this.rv) {
    // inside
  }
}

In this situation, the key required to open the door and enter the synchronized block is associated with the object referred to by the this.rv field. When a thread reaches the door, and assuming it doesn’t already have the key, it tries to take that key from the hook, which is that object. If the key is not on that hook, the thread waits until after the key is returned to that hook.

In the code for this question, it is crucial to realize that if there are two instances of the enclosing object and a different thread is executing on each of those instances, it’s likely there are two different keys: one for the door that’s encountered by one thread and another for the door encountered by the other thread. This is potentially confusing since it’s the same line of code, but the key required to open the door depends on the object referred to by this.rv.

Of course, the code for this question does not have a parameter after the keyword synchronized. Instead, synchronized is used as a modifier on the method. This is effectively a shortcut.

To explain, if the method is a static method, such as this

synchronized static void dSS() {
  // method body
}

and the enclosing class is MySyncClass, then the code is equivalent to this

static void dSS() {
  synchronized (MySyncClass.class) {
    // method body
  }
}

Notice that in this case, all the static synchronized methods in a single class will use the same key.

However, if the method is a synchronized instance method, like this

synchronized void dIS() {
  // method body
}

then it is equivalent to this

void dIS() {
  synchronized(this) {
    // method body
  }
}

It’s critical to notice that if you have two threads executing this same method on different object instances, different keys are needed to open the doors.

Given this discussion and noting that the hi() method is static but the other two are instance methods, and also that the question states that multiple objects exist, recognize that only one thread at a time can be executing the hi() method, but more than one thread might be executing the other two methods.

That tells you that whenever hi has been printed, another hi cannot be printed until after the printing of there!. You might see any of the output from invocations of the bye() method between hi and there!, but you’ll never see hi hi printed. From that you know that option A is incorrect.

Using the same logic as above, concurrent calls to meet() cannot result in hi hi being printed either, since that output is impossible no matter how the hi() method is invoked. That means that option E must also be incorrect.

By contrast, concurrent calls to the bye() method can execute concurrently if they are invoked on different instances of the class. In such a situation the output of the two invocations can become interleaved, and you might in fact see bye bye printed. That makes option B correct, and at the same time and for the same reason, it makes D correct, because concurrent calls to meet() can result in concurrent calls to the bye() method.

Option C must be incorrect, because it contradicts the notion that you can ever see bye bye printed.
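To make the locking behavior concrete, here is a small experiment (the class names are illustrative, not from the quiz itself): two threads hammer a static synchronized hi() method, and because both must acquire the same class-level lock, the captured output can never contain hi hi. (Interleaving of bye() calls on different instances is possible but not guaranteed on any particular run, so it is not checked here.)

```java
import java.io.ByteArrayOutputStream;
import java.io.PrintStream;

public class SyncDemo {
    static class SyncMe {
        static synchronized void hi() { System.out.print("hi "); System.out.print("there! "); }
        synchronized void bye()       { System.out.print("bye "); System.out.print("there! "); }
    }

    // Runs two threads that each call the static hi() 1,000 times and
    // returns everything the threads printed.
    static String race() throws InterruptedException {
        ByteArrayOutputStream captured = new ByteArrayOutputStream();
        PrintStream original = System.out;
        System.setOut(new PrintStream(captured, true));
        try {
            Runnable task = () -> { for (int i = 0; i < 1_000; i++) SyncMe.hi(); };
            Thread t1 = new Thread(task);
            Thread t2 = new Thread(task);
            t1.start(); t2.start();
            t1.join(); t2.join();
        } finally {
            System.setOut(original);
        }
        return captured.toString();
    }

    public static void main(String[] args) throws InterruptedException {
        // hi() locks SyncMe.class, so its two prints can never interleave:
        System.out.println(race().contains("hi hi")); // prints false
    }
}
```

Replacing the static hi() with an instance method called on two different instances removes the shared lock, and the interleaved output then becomes possible.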

Conclusion. The correct answers are options B and D.

Source: oracle.com

Friday, January 13, 2023

Quiz yourself: Understanding the syntax of Java’s increment and decrement operators


Do Java expressions such as ii[++i] = 0, ii[i++]++, or i = +(i--) give you a headache?


Given the following Calc class

class Calc {
    Integer i;
    final int[] ii = {0};
    {
        ii[++i] = 0; // line 1
        i--;         // line 2
        ii[i++]++;   // line 3
        (i--)--;     // line 4
        i = +(i--);  // line 5
    }
}

Which statement is correct? Choose one.

A. Compilation fails at line 1 only.

B. Compilation fails at line 2 only.

C. Compilation fails at line 3 only.

D. Compilation fails at line 4 only.

E. Compilation fails at line 5 only.

F. Compilation fails at more than one line.

G. All lines compile successfully.

Answer. This question tests your knowledge of syntax for a variety of expressions.

First, notice there are two object field variables.

◉ The first is an uninitialized field of Integer type, called i. Because this is an uninitialized object field, it defaults to null when the object is created. However, the compiler does not care about this; from the compiler's perspective, it’s simply a variable, and the lack of explicit initialization is irrelevant.

◉ The second field is an int array called ii. This field is marked final and is initialized with an array literal containing a single element having a value of zero. It’s important to remember that marking the field as final merely means the field can never be modified to refer to any other array. It does not prevent the elements of the array from being changed (though it’s also true that you can never grow or shrink an array in Java).
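The point about final arrays can be shown in a few lines (the class name is illustrative):

```java
public class FinalArrayDemo {
    public static void main(String[] args) {
        final int[] ii = {0};
        ii[0] = 42;                // legal: final protects the reference, not the elements
        System.out.println(ii[0]); // prints 42
        // ii = new int[] {1};     // would not compile: cannot assign to a final variable
    }
}
```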

Next, consider the increment and decrement operator usage. It might cause some concern that these are applied to the variable i, which is of Integer type. After all, Integer objects are immutable. However, this is not a problem. All that happens is that the contents of the variable are unboxed, the resulting int is incremented or decremented, and then a new Integer is created with that new value. Finally, the reference to the new Integer is assigned to the variable.
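A small sketch makes the unbox-increment-rebox sequence visible (the class name is illustrative; the value 1000 is chosen to stay outside the default Integer cache):

```java
public class BoxedIncrementDemo {
    public static void main(String[] args) {
        Integer i = 1000;        // outside the default Integer cache range (-128..127)
        Integer before = i;
        i++;                     // unbox, increment the int, box a *new* Integer
        System.out.println(i);           // prints 1001
        System.out.println(before == i); // prints false: i now refers to a new object
    }
}
```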

Look at line 1 considering the information above. The effect would be to assign zero to an element of the array at a subscript one greater than the int value of i before executing the line. A second effect would be that the int value in the object referred to by i would now be one greater than before. This is all valid syntax, even though the code couldn’t run correctly, because it would fail with a NullPointerException. However, the question doesn’t ask about running the code, only about compiling it. Also note that under some conditions, code of this kind could fail at runtime with an ArrayIndexOutOfBoundsException. Again, this is not relevant in this question. From this, you can see that line 1 compiles correctly.

A similar analysis of line 2 reveals that if i were not null, this line would reassign the variable i to refer to a new Integer object with an int value one less than the Integer to which it previously referred. Even though the code of line 2 would not execute, it would compile correctly.

In line 3, the code would increment the array element at the index value indicated by the current value of i, and then reassign i to have an int value one greater than before. This would fail with a NullPointerException, and if that were rectified, the code might still fail if the subscript indicated by i were invalid. However, the code of line 3 is syntactically valid and would compile without error.

Line 4 would fail to compile. One of the requirements for using the increment and decrement operators is that the target of such an operator must be in storage that can be updated. Such a value is sometimes referred to as an l-value, meaning an expression that can be on the left side of an assignment. In line 4, the expression is (i--)-- and the problem is that while i-- is valid in itself, the resulting expression is simply the numeric value that is contained in the object to which i now refers. And, in the same way that you cannot write 3++ (where would you store the result?), you cannot increment or decrement such a simple expression. The parenthetical (i--) cannot store the result. Consequently line 4 is syntactically invalid and would not compile.

Line 5 is syntactically valid. If i were not null, that line would reassign i to an Integer representing one less than the previous object it referred to, then apply the (no-effect) unary plus operator, and then assign that same result to the variable i. The latter two operations have no meaningful effect, but they are syntactically valid, and line 5 would compile without problems.

In light of the foregoing discussions, you can see that option D is correct, and options A, B, C, E, F, and G are all incorrect.

Conclusion. The correct answer is option D.

Source: oracle.com

Wednesday, January 11, 2023

Hidden gems in Java 19, Part 2: The real hidden stuff


Compared to previous Java releases, the scope of changes in Java 19 has decreased significantly by targeting only seven implemented JEPs—most of which are new or improved incubator or preview features. As far as affecting your production code, you can take a little breather. However, Java 19 contains thousands of performance, security, and stability updates that aren’t in a JEP—and they are worthy of being adopted.


Even if you don’t use any of the preview or incubator features in the JEPs (read all about them in “Hidden gems in Java 19, Part 1: The not-so-hidden JEPs”), you should consider moving to Java 19.

This article outlines the updates buried deep in the release notes that you and your team should be aware of. For example, some new features include support for Unicode 14.0, additional date-time formats, new Transport Layer Security (TLS) signature schemes, and defense against Return Oriented Programming (ROP) attacks via PAC-RET protection on AArch64 systems.

I am using the Java 19.0.1 jshell tool to demonstrate the code in this article. If you want to test the features, download JDK 19, fire up your terminal, check your version, and run jshell, as follows. Note that you might see a newer dot-release version of the JDK, but nothing else should change.

[mtaman]:~ java -version
 java version "19.0.1" 2022-10-18
 Java(TM) SE Runtime Environment (build 19.0.1+10-21)
 Java HotSpot(TM) 64-Bit Server VM (build 19.0.1+10-21, mixed mode, sharing)

[mtaman]:~ jshell --enable-preview
|  Welcome to JShell -- Version 19.0.1
|  For an introduction type: /help intro

jshell>

The enhancements


This section describes some additions and enhancements in Java 19.

Support for Unicode 14.0. Java 19 provides a small but significant addition for internationalization; it provides upgrades to Unicode 14.0. The java.lang.Character class now supports Level 14 of the Unicode Character Database (UCD), which adds 838 new characters, 5 new scripts, and 37 new emoji characters.

New system properties for System.out and System.err. If you run an existing application with Java 19, you may see question marks on the console instead of special characters. This is because, as of Java 19, the operating system’s default encoding is used for printing to System.out and System.err.

For example, cp1252 encoding is the default on Windows. If that’s not what you want and you’d prefer to see output in UTF-8, add the following JVM options when calling the application:

-Dstdout.encoding=utf8 -Dstderr.encoding=utf8

Your platform determines what these system properties’ default settings are. When the platform doesn’t have console streams, the values default to the native.encoding property’s value. When necessary, the launcher’s command-line option -D can override the properties and set them to UTF-8.

If you don’t want to do this each time the software launches, you can also define the following environment variable (it starts with an underscore) to set these parameters globally:

_JAVA_OPTIONS="-Dstdout.encoding=utf8 -Dstderr.encoding=utf8"

New methods to create preallocated hash maps and hash sets. You might wonder why you would need new methods to create preallocated hash maps and hash sets. Here’s an example to clarify: if you want to create an ArrayList of 180 elements, you could write the following code:

List<String> list = new ArrayList<>(180);

The underlying array of the ArrayList is allocated directly for 180 elements and does not have to be enlarged several times as you insert the 180 elements.

Similarly, you might try to create a HashMap with 180 preallocated mappings as follows:

Map<String, Integer> map = new HashMap<>(180);

Intuitively, you would think that this new HashMap offers space for 180 mappings. However, it does not! This happens because the HashMap has a default load factor of 0.75 when it is initialized. This indicates that the HashMap gets rebuilt (also called rehashed) with double the size as soon as it is 75% filled. Thus, the new HashMap is initialized with a capacity of 180 and can hold only 135 (180 × 0.75) mappings without being rehashed.

Therefore, to create a HashMap for 180 mappings, calculate the capacity by dividing the number of mappings by the load factor: 180 ÷ 0.75 = 240. So, a HashMap for 180 mappings would be created as follows:

// for 180 mappings: 180 / 0.75 = 240
Map<String, Integer> map = new HashMap<>(240);

Java 19 makes it easier to create a HashMap that has the required mappings without fiddling with load factors by using the new static factory method newHashMap(int).

Map<String, Integer> map = HashMap.newHashMap(180);

Look at the source code to see how it works.

public static <K, V> HashMap<K, V> newHashMap(int numMappings) {
    return new HashMap<>(calculateHashMapCapacity(numMappings));
}

static final float DEFAULT_LOAD_FACTOR = 0.75f;

static int calculateHashMapCapacity(int numMappings) {
    return (int) Math.ceil(numMappings / (double) DEFAULT_LOAD_FACTOR);
}

Similar labor-saving static factory methods have been created in Java 19. Here’s the complete set.

◉ HashMap.newHashMap
◉ LinkedHashMap.newLinkedHashMap
◉ WeakHashMap.newWeakHashMap
◉ HashSet.newHashSet
◉ LinkedHashSet.newLinkedHashSet
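A usage sketch of the first factory method, assuming you are running on Java 19 or later (the class name and keys are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

public class NewHashMapDemo {
    public static void main(String[] args) {
        // Java 19+: capacity is sized so that 180 mappings fit without rehashing.
        Map<String, Integer> map = HashMap.newHashMap(180);
        for (int i = 0; i < 180; i++) {
            map.put("key-" + i, i);
        }
        System.out.println(map.size()); // prints 180
    }
}
```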

TLS signature schemes. Applications now can alter the signature schemes used in specific TLS or Datagram Transport Layer Security (DTLS) connections using two new Java SE methods, setSignatureSchemes() and getSignatureSchemes(), which are found in the class javax.net.ssl.SSLParameters.

The underlying provider may set the default signature schemes for each TLS or DTLS connection. Applications can also alter the provider-specific default signature schemes by using the jdk.tls.server.SignatureSchemes and jdk.tls.client.SignatureSchemes system attributes. The setSignatureSchemes() method overrides the default signature schemes for the specified TLS or DTLS connections if the signature schemes parameter is not null.

It is recommended that when third-party vendors add support for Java 19 or later releases, they also add support for these methods. The JDK SunJSSE provider supports this technique. However, you should be aware that a provider might not have received an update to support the new APIs, in which case the provider might disregard the established signature schemes.
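A minimal sketch of the new API, assuming Java 19 or later (the class name is illustrative; the scheme names are standard identifiers from the TLS 1.3 signature-scheme registry):

```java
import javax.net.ssl.SSLParameters;
import java.util.Arrays;

public class SignatureSchemesDemo {
    public static void main(String[] args) {
        // Java 19+: restrict connections that use these parameters to two
        // specific signature schemes instead of the provider defaults.
        SSLParameters params = new SSLParameters();
        params.setSignatureSchemes(
                new String[] {"ecdsa_secp384r1_sha384", "rsa_pss_rsae_sha384"});
        System.out.println(Arrays.toString(params.getSignatureSchemes()));
    }
}
```

The parameters object would then be applied to an SSLEngine or SSLSocket before the handshake.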

Support for PAC-RET protection on Linux/AArch64. To defend against Return Oriented Programming (ROP) attacks (documentation here), OpenJDK uses hardware features from the ARM v8.3 Pointer Authentication Code (PAC) extension but only when they are enabled.

To use this functionality, OpenJDK must first be compiled using GCC 9.1.0+ or LLVM 10+ with the configuration flag --enable-branch-protection. Then, if the system supports it and the Java binary was compiled with branch protection enabled, the runtime flag -XX:UseBranchProtection=standard will enable PAC-RET protection; otherwise, the flag is quietly ignored. A warning will be printed to the console if the system does not support PAC-RET protection or if the Java binary was not built with branch protection enabled. As an alternative, -XX:UseBranchProtection=pac-ret also enables PAC-RET protection.

Additional date-time formats. Java 19 brings new formats to the java.time.format.DateTimeFormatter and DateTimeFormatterBuilder classes. In prior releases, only four predefined styles were available: FormatStyle.FULL, FormatStyle.LONG, FormatStyle.MEDIUM, and FormatStyle.SHORT. Now you can specify a flexible style with the new DateTimeFormatter.ofLocalizedPattern(String requestedTemplate) method.

For example, the following creates a formatter that may format a date according to a locale, for example, “Feb 2022” in the US locale and “2022年2月” in the Japanese locale:

DateTimeFormatter.ofLocalizedPattern("yMMM")

There’s also a new supporting function: DateTimeFormatterBuilder.appendLocalized(String requestedTemplate).
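Here is a sketch of the locale-dependent behavior described above, assuming Java 19 or later (the class name is illustrative):

```java
import java.time.YearMonth;
import java.time.format.DateTimeFormatter;
import java.util.Locale;

public class LocalizedPatternDemo {
    public static void main(String[] args) {
        // Java 19+: "yMMM" is a pattern *request*; each locale decides how
        // the year and abbreviated month are actually arranged.
        DateTimeFormatter f = DateTimeFormatter.ofLocalizedPattern("yMMM");
        YearMonth feb2022 = YearMonth.of(2022, 2);
        System.out.println(f.withLocale(Locale.US).format(feb2022));
        System.out.println(f.withLocale(Locale.JAPAN).format(feb2022));
    }
}
```

The exact strings depend on the CLDR locale data shipped with the JDK, which is why no fixed output is shown here.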

Automatic generation of the class data sharing archive. With Java 19, the JVM option -XX:+AutoCreateSharedArchive automatically creates or updates an application’s class data sharing (CDS) archive, for example

java -XX:+AutoCreateSharedArchive -XX:SharedArchiveFile=app.jsa -cp application.jar App

The specified CDS archive will be written if it does not exist or if a different version of the JDK generated it.

Javadoc search enhancements. Java 19 can create a standalone search page for the API documentation produced by Javadoc, and the search syntax has been improved to support multiple search terms.

Highlighting of deprecated elements, variables, and keywords. The Java Shell tool (jshell) now marks deprecated elements and highlights deprecated variables and keywords in the console.

Specified stack size no longer rounded up. Historically, the actual Java thread stack size might differ from the value provided by the -Xss command-line option; it might be rounded up to a multiple of the system page size when that’s required by the operating system. That’s been fixed in Java 19, so the stack size specified is what you get.

Larger default key sizes for cryptographic algorithms. What happens if the caller does not specify a key size when using a KeyPairGenerator or KeyGenerator object to generate a key pair or secret key? In such cases, JDK providers use provider-specific default values.

Java 19 increases the default key sizes for various cryptographic algorithms as follows:

◉ Elliptic Curve Cryptography (ECC): increased from 256 to 384 bits
◉ Rivest-Shamir-Adleman (RSA), RSASSA-PSS, and Diffie-Hellman (DH): increased from 2,048 to 3,072 bits
◉ Advanced Encryption Standard (AES): increased from 128 to 256 bits, if permitted by the cryptographic policy; otherwise, it falls back to 128
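You can observe the provider default yourself by generating a key pair without calling initialize(); a minimal sketch (the printed size depends on the JDK release you run it on):

```java
import java.security.KeyPairGenerator;
import java.security.interfaces.RSAPublicKey;

public class DefaultKeySizeDemo {
    public static void main(String[] args) throws Exception {
        // No initialize() call, so the provider's default key size applies:
        // 3,072 bits as of Java 19 (2,048 bits on earlier releases).
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        RSAPublicKey pub = (RSAPublicKey) kpg.generateKeyPair().getPublic();
        System.out.println("default RSA modulus bits: " + pub.getModulus().bitLength());
    }
}
```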

In addition, the default digest algorithm used by the jarsigner tool has changed from SHA-256 to SHA-384, and the tool’s default signature algorithm has been updated accordingly: SHA-384 is now used instead of SHA-256, except for longer key sizes whose security strength matches SHA-512.

Note that jarsigner will keep using SHA256withDSA as the default signature algorithm for Digital Signature Algorithm (DSA) keys to help with interoperability with earlier Java editions.

Linux cpu.shares argument no longer misinterpreted. The Linux cgroups argument cpu.shares was improperly interpreted by earlier JDK editions. When the JVM was run inside a container, this could result in the JVM using fewer CPUs than were available, underutilizing CPU resources.

With Java 19, the JVM will no longer, by default, take cpu.shares into account when determining how many threads to allocate to the various thread pools.

To return to the old behavior, use the command-line option -XX:+UseContainerCpuShares, but be aware that this option is deprecated and might be eliminated in a subsequent JDK release.

Upgraded support for locale data. Locale data based on the Unicode Common Locale Data Repository (CLDR) has been upgraded to version 41. Refer to the Unicode Consortium’s CLDR release notes for the list of changes.

Bug fixes and changes


This section describes some of the bug fixes and changes in Java 19.

Source- and binary-incompatible changes to java.lang.Thread. With the preview introduction of virtual threads in Java 19’s JEP 425, some source and binary changes have been made to the class java.lang.Thread that may impact your code if you extend the class. More details are in the documentation. Be aware of the following changes:

◉ Three new final methods have been added: Thread.isVirtual(), Thread.threadId(), and Thread.join(Duration). Suppose there is existing compiled code that extends Thread, and the subclass declares a method with the same name, parameters, and return type as any of these methods. In such a case, IncompatibleClassChangeError will be thrown at runtime if the subclass is loaded.
◉ The Thread class defines several new methods. If one of your source code files extends Thread and a method in the subclass conflicts with any of the new Thread methods, the file will not compile without being changed.
◉ Thread.Builder is added as a nested interface. If one of your source code files extends Thread and imports a class named Builder, and the code in the subclass references Builder as a simple name, the file will not compile without being changed.

Indify string concatenation changes to the order of operations. In Java 19, the process of concatenating strings now evaluates each parameter and eagerly creates a string from left to right. This fixes a bug in Java 9’s JEP 280, which introduced string concatenation techniques based on invokedynamic. (By the way, the word indify is short for using invokedynamic.)

For example, the following code now prints zoozoobar not zoobarzoobar:

StringBuilder builder = new StringBuilder("zoo");
System.out.println("" + builder + builder.append("bar"));

Lambda deserialization for Object method references on interfaces. Serialized method references to Object methods that use an interface as the type on which the method is invoked can now be deserialized again.

Keep in mind that the class files must be recompiled to support deserialization.

POSIX file access attributes copied to the target on a foreign file system. When the source and target belong to different file system providers, such as when you copy a file from the default file system to a zip file system, the Java 19 implementation of java.nio.file.Files.copy(Path, Path) copies Portable Operating System Interface (POSIX) file attributes from the source file to the destination file.

The POSIX file attribute view must be supported by both the source and target file systems. The owner and group owner of the file are not copied; the POSIX attributes copied are restricted to the file access rights.
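The cross-provider copy itself looks like the following sketch, which copies a file from the default file system into a freshly created zip file system (note that POSIX attribute copying additionally requires both providers to support the POSIX view, which the zip provider only does when configured for it):

```java
import java.nio.file.FileSystem;
import java.nio.file.FileSystems;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Map;

public class CrossProviderCopyDemo {
    public static void main(String[] args) throws Exception {
        Path source = Files.writeString(Files.createTempFile("demo", ".txt"), "hello");
        Path zip = Files.createTempFile("demo", ".zip");
        Files.delete(zip); // let the zip provider create the archive itself
        try (FileSystem zipFs = FileSystems.newFileSystem(zip, Map.of("create", "true"))) {
            // A cross-provider copy: default file system -> zip file system
            Files.copy(source, zipFs.getPath("/demo.txt"));
            System.out.println(Files.size(zipFs.getPath("/demo.txt"))); // 5
        }
    }
}
```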

Methods of InputStream and FilterInputStream no longer synchronized. The mark and reset functions of the java.io.InputStream and java.io.FilterInputStream classes no longer use the keyword synchronized. Since the other methods in these classes do not synchronize, this keyword is useless and has been removed in Java 19.
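The observable mark/reset behavior is unchanged; only the modifier was dropped. A quick reminder of what these methods do:

```java
import java.io.BufferedInputStream;
import java.io.ByteArrayInputStream;
import java.io.InputStream;

public class MarkResetDemo {
    public static void main(String[] args) throws Exception {
        InputStream in = new BufferedInputStream(new ByteArrayInputStream("abc".getBytes()));
        in.mark(16);           // remember the current position
        int first = in.read(); // reads 'a'
        in.reset();            // rewind to the mark
        int again = in.read(); // reads 'a' again
        System.out.println((char) first + "" + (char) again); // aa
    }
}
```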

Some returned strings slightly different. In Java 19, the specification of the Double.toString(double) and Float.toString(float) methods is now tighter than in earlier releases, and the new implementation fully adheres to the specification.

The result of this change is that some returned strings are now shorter than when earlier Java releases are used, and inputs at the extremes of the subnormal ranges near zero might look different. However, the number of cases where there’s a difference in output is relatively small compared to the sheer number of possible double and float inputs.

For example, in the double subnormal range near zero, Double.toString(1e-323) now returns 9.9E-324, as mandated by the new specification. Another example: Double.toString(2e23) now returns 2.0E23; in earlier releases, it returned 1.9999999999999998E23.
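You can compare the behavior across JDK releases with a two-liner (no exact output is asserted here, since the strings differ between pre-19 and post-19 JDKs):

```java
public class DoubleToStringDemo {
    public static void main(String[] args) {
        // On Java 19 and later these print the shorter strings mandated by
        // the tightened specification; earlier JDKs may print longer forms.
        System.out.println(Double.toString(1e-323));
        System.out.println(Double.toString(2e23));
    }
}
```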

User’s home directory set to $HOME if invalid. On Linux and macOS systems, the user.home system property is set to the home directory specified by the operating system. If that value is invalid (empty or only one character long), the value of the $HOME environment variable is used instead.

Typically, $HOME has a valid value that matches the operating system’s directory name, so the fallback to $HOME is unusual; it mainly occurs under init systems such as systemd on Linux or when running in a container such as Docker.

Java 19 was changed to use the correct user home directory.

Deprecation


This section describes the features, options, and APIs deprecated in Java 19.

Deprecation of Locale class constructors. In Java 19, the public constructors of the Locale class were marked as deprecated. You should use the new static factory method Locale.of() to ensure only one instance per Locale configuration.

The following example shows the use of the factory method compared to the old constructor:

Locale japanese = new Locale("ja"); // deprecated
Locale japan    = new Locale("ja", "JP"); // deprecated

Locale japanese1 = Locale.of("ja");
Locale japan1    = Locale.of("ja", "JP");

System.out.println("japanese  == Locale.JAPANESE = " + (japanese  == Locale.JAPANESE));
System.out.println("japan     == Locale.JAPAN    = " + (japan     == Locale.JAPAN));
System.out.println("japanese1 == Locale.JAPANESE = " + (japanese1 == Locale.JAPANESE));
System.out.println("japan1    == Locale.JAPAN    = " + (japan1    == Locale.JAPAN));

When you run this code, you will see that the objects supplied via the factory method are identical to the Locale constants, whereas those created with the constructors are not.

Several java.lang.ThreadGroup methods degraded. In Java 14 and Java 16, many Thread and ThreadGroup methods were marked as deprecated for removal. In Java 19, the following ThreadGroup methods have been degraded:

◉ ThreadGroup.destroy() invocations will be ignored.
◉ ThreadGroup.isDestroyed() always returns false.
◉ ThreadGroup.setDaemon() sets the daemon flag, but this has no effect.
◉ ThreadGroup.getDaemon() returns the value of the unused daemon flag.
◉ ThreadGroup.suspend(), resume(), and stop() throw an UnsupportedOperationException.

Removed items


This section describes the old features, options, and APIs removed in Java 19.

TLS cipher suites using 3DES removed from the default enabled list. The default list of allowed cipher suites no longer includes the following TLS cipher suites that employ the outdated Triple Data Encryption Standard (3DES) algorithm:

◉ TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA
◉ TLS_ECDH_RSA_WITH_3DES_EDE_CBC_SHA
◉ TLS_ECDHE_ECDSA_WITH_3DES_EDE_CBC_SHA
◉ TLS_ECDH_ECDSA_WITH_3DES_EDE_CBC_SHA
◉ SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA
◉ SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA
◉ SSL_RSA_WITH_3DES_EDE_CBC_SHA

Note that cipher suites using 3DES are already disabled by default in the jdk.tls.disabledAlgorithms security property. To turn them back on, you can remove 3DES_EDE_CBC from the jdk.tls.disabledAlgorithms security property and re-enable the suites using the setEnabledCipherSuites() method of the SSLSocket, SSLServerSocket, or SSLEngine classes. While you are free to use these suites, you do so at your own risk; they were removed for a reason!

Alternatively, the https.cipherSuites system property can be used to re-enable the suites if an application is using the HttpsURLConnection class.
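As a sketch of the property-editing approach (at your own risk, and it must run before any TLS code touches the property), the following strips the 3DES_EDE_CBC entry from the security property at startup:

```java
import java.security.Security;
import java.util.Arrays;
import java.util.stream.Collectors;

public class ReEnable3DesSketch {
    public static void main(String[] args) {
        String before = Security.getProperty("jdk.tls.disabledAlgorithms");
        // Remove the 3DES_EDE_CBC entry from the comma-separated list
        String after = Arrays.stream(before.split(","))
                .map(String::trim)
                .filter(entry -> !entry.equals("3DES_EDE_CBC"))
                .collect(Collectors.joining(", "));
        Security.setProperty("jdk.tls.disabledAlgorithms", after);
        System.out.println(Security.getProperty("jdk.tls.disabledAlgorithms"));
    }
}
```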

Removal of the GCParallelVerificationEnabled diagnostic flag. The GCParallelVerificationEnabled diagnostic flag has only ever been used with its default value, because there are no known benefits to disabling parallel heap verification; multithreaded verification has been the default for a very long time with no problems. Therefore, the flag was removed.

SSLSocketImpl finalizer implementation removed. Because the Socket implementation now handles the release of the underlying native resources, the finalizer implementation of SSLSocket has been removed. With Java 19, if an SSLSocket is not explicitly closed, TLS close_notify messages won’t be sent.

If you fail to close sockets correctly, you might see errors at runtime. Applications should never rely on garbage collection for cleanup and should always explicitly close their sockets.

Alternate ThreadLocal implementation of the Subject::current and Subject::callAs APIs removed. The jdk.security.auth.subject.useTL system property and the alternate ThreadLocal implementation of the Subject::current and Subject::callAs APIs have been removed. The default implementation of these APIs is still supported.

Source: oracle.com

Friday, January 6, 2023

Hidden gems in Java 19, Part 1: The not-so-hidden JEPs

Core Java, Oracle Java, Java Prep, Java Preparation, Java Tutorial and Materials, Java Skills, Java Jobs


Java 19 has seven main JEPs, which is a lower count than the nine JEPs in Java 18, the 14 JEPs in Java 17, the 17 JEPs in Java 16, the 14 JEPs in Java 15, and the 16 JEPs in Java 14. However, focusing on quantity doesn’t tell the story of Java 19, which contains extremely important JEPs for the future-looking Panama, Amber, and Loom projects, as well as porting the JDK to the Linux/RISC-V instruction set.

I am using the Java 19.0.1 jshell tool to demonstrate the code in this article. If you want to test the features, download JDK 19, fire up your terminal, check your version, and run jshell, as follows. Note that you might see a newer dot-release version of the JDK, but nothing else should change.

[mtaman]:~ java -version
 java version "19.0.1" 2022-10-18
 Java(TM) SE Runtime Environment (build 19.0.1+10-21)
 Java HotSpot(TM) 64-Bit Server VM (build 19.0.1+10-21, mixed mode, sharing)

[mtaman]:~ jshell --enable-preview
|  Welcome to JShell -- Version 19.0.1
|  For an introduction type: /help intro

jshell>

Be aware of two important notes.

◉ Two of the JEPs covered in this article are published as incubator modules to solicit developer feedback. An incubator module’s API could be altered or disappear entirely, so don’t count on it being in a future Java release. You should play with incubator modules but not use them in production code. To use incubator modules, use the --add-modules JVM switch.

◉ If a JEP is a preview feature, it is fully specified and implemented but is not finalized. Thus, it should not be used in production code. Use the switch --enable-preview to use those features.

The following JEPs are in Java 19:

Project Loom

◉ JEP 425: Virtual threads (first preview)
◉ JEP 428: Structured concurrency (first incubator)

Project Amber

◉ JEP 405: Record patterns (first preview)
◉ JEP 427: Pattern matching for switch (third preview)

Project Panama

◉ JEP 424: Foreign Function and Memory API (first preview)
◉ JEP 426: Vector API (fourth incubator)

In addition, there’s the following hardware port JEP:

◉ JEP 422: Linux/RISC-V port

Project Loom JEPs


Project Loom is designed to deliver new JVM features and APIs to support easy-to-use, high-throughput, lightweight concurrency as well as a new programming model, which is called structured concurrency.

Virtual threads. In JEP 425, Java 19 introduces virtual threads to the Java platform as a first preview. It is one of the most significant updates to Java in a very long time, but it is also a change that is hardly noticeable. Even though there are many excellent articles regarding virtual threads, such as Nicolai Parlog’s “Coming to Java 19: Virtual threads and platform threads,” I cannot discuss other impending features without first giving a brief overview of virtual threads.

Virtual threads fundamentally redefine the interaction between the Java runtime and the underlying operating system, removing significant barriers to scalability. Still, they don’t dramatically change how you create and maintain concurrent programs. Virtual threads behave almost identically to the threads you are familiar with, and there is barely any additional API.

Let’s look at them from a different view by asking the following question: Why do developers need virtual threads?

Anyone who has ever worked on a back-end application under high load is aware that threads are frequently the bottleneck. A thread is required for each incoming request to be processed. One Java thread corresponds to one operating system thread, which consumes significant resources, so you can start at most a few hundred threads before the entire system’s stability is jeopardized.

However, in real life, more than a few hundred threads are often required, especially if processing a request takes longer due to the need to wait for blocking data structures such as queues, locks, or external services such as databases, microservices, or cloud APIs.

For example, if a request takes two seconds and the thread pool is limited to 100 threads, the application could serve up to 50 requests per second. Even if several threads are served per CPU core, the CPU would be underutilized because it would spend most of its time waiting for responses from external services. So, you really need thousands of threads—or maybe tens of thousands. However, you’re not going to get that from your hardware.

One solution has been to use the reactive programming model with frameworks such as Project Reactor and RxJava.

Sadly, reactive code is often more complex than sequential code, and it can be hard to maintain. Here’s an example.

public DeferredResult<ResponseEntity<?>> createOrder(
    CreateOrderRequest createOrderRequest, Long sessionId, HttpServletRequest context) {
  
  DeferredResult<ResponseEntity<?>> deferredResult = new DeferredResult<>();

  Observable.just(createOrderRequest)
      .doOnNext(this::validateRequest)
      .flatMap(
          request ->
              sessionService
                  .getSessionContainer(request.getClientId(), sessionId)
                  .toObservable()
                  .map(ResponseEntity::getBody))
      .map(
          sessionContainer ->
              enrichCreateOrderRequest(createOrderRequest, sessionContainer, context))
      .flatMap(
          enrichedRequest ->
              orderPersistenceService.persistOrder(enrichedRequest).toObservable())
      .subscribeOn(Schedulers.io())
      .subscribe(
          success -> deferredResult.setResult(ResponseEntity.noContent()),
          error -> deferredResult.setErrorResult(error));

  return deferredResult;
}

In the reactive world, all the above code merely defines the reactive flow but doesn’t execute it; the code is executed only after the call to subscribe() (at the end of the method) in a separate thread pool. For this reason, it doesn’t make any sense to set a breakpoint at any line of the code above. Therefore, this code is hardly readable and is also tough to debug.

Additionally, the database and external services drivers’ maintainer must support the reactive model, and you’re not going to see that very often.

Virtual threads are a better solution because they allow you to write code that is quickly readable and maintainable without having to jump through hoops. That’s because virtual threads are like normal threads from a Java code perspective, but they are not mapped 1:1 to operating system threads.

Instead, there is a pool of so-called carrier threads onto which a virtual thread is temporarily mapped. The carrier thread can execute another virtual thread (a new thread or a previously blocked thread). As soon as the virtual thread encounters a blocking operation, the virtual thread is removed from the carrier threads.

Thus, blocking operations no longer block the executing thread; this lets the JVM process many requests in parallel with a small pool of carrier threads, allowing you to reimplement the reactive example above quite simply as the following:

public void createOrder(
    CreateOrderRequest createOrderRequest, Long sessionId, HttpServletRequest context) {
  
  validateRequest(createOrderRequest);

  SessionContainer sessionContainer =
      sessionService
          .getSessionContainer(createOrderRequest.getClientId(), sessionId)
          .execute()
          .getBody();

  EnrichedCreateOrderRequest enrichedCreateOrderRequest =
      enrichCreateOrderRequest(createOrderRequest, sessionContainer, context);

  orderPersistenceService.persistOrder(enrichedCreateOrderRequest);
}

As you can see, such code is easier to read and write, just as any sequential code is, and it’s also easier to debug by conventional means.

I believe that once you start using virtual threads, you will never switch back to reactive programming. Even better, you can continue to use your code unchanged with virtual threads because (thanks to this new JEP) it is part of the JDK. Well, it’s there as a preview; in a future Java version, virtual threads will be a standard feature.
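To make this concrete, here is a minimal sketch of the virtual thread API (on Java 19 it must be compiled and run with --enable-preview; the API became final in Java 21):

```java
import java.util.concurrent.Executors;

public class VirtualThreadsDemo {
    public static void main(String[] args) throws Exception {
        // A virtual thread is created and used like an ordinary thread.
        Thread vt = Thread.ofVirtual().start(() ->
                System.out.println("virtual: " + Thread.currentThread().isVirtual()));
        vt.join();

        // Thousands of blocking tasks, one cheap virtual thread each.
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 1_000; i++) {
                executor.submit(() -> {
                    Thread.sleep(10); // blocking call: the carrier thread is freed
                    return null;
                });
            }
        } // close() waits for submitted tasks to finish
        System.out.println("done");
    }
}
```

Note how the blocking Thread.sleep() calls do not pin 1,000 operating system threads; the JVM multiplexes the virtual threads over a small carrier pool.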

Structured concurrency. In JEP 428, structured concurrency, which is an incubator module in Java 19, helps to simplify error management and subtask cancellation. Structured concurrency treats concurrent tasks operating in distinct threads as a single unit of work, improving observability and dependability.

Suppose a function contains several invoice-creating subtasks that need to be done in parallel, such as getting data from a database with getOrderBy(orderId), calling a remote API with getCustomerBy(customerId), and loading and reading data from a file with getTemplateFor(language). You could use the Java executor framework, for example, as in the following:

private final ExecutorService executor = Executors.newCachedThreadPool();

public Invoice createInvoice(int orderId, int customerId, String language) 
    throws InterruptedException, ExecutionException {
  
    Future<Customer> customerFuture =
        executor.submit(() -> customerService.getCustomerBy(customerId));

    Future<Order> orderFuture =
        executor.submit(() -> orderService.getOrderBy(orderId));

    Future<String> invoiceTemplateFuture =
        executor.submit(() -> invoiceTemplateService.getTemplateFor(language));

    
    Customer customer = customerFuture.get();
    Order order = orderFuture.get();
    String template = invoiceTemplateFuture.get();

    return invoice.generate(customer, order, template);
}

You can pass the three subtasks to the executor and wait for the partial results. It is easy to implement the basic task quickly, but consider these possible issues.

◉ How can you cancel other subtasks if an error occurs in one subtask?
◉ How can you cancel the subtasks if the invoice is no longer needed?
◉ How can you handle and recover from exceptions?

All these possible issues can be addressed, but the solution would require complex and difficult-to-maintain code.

And, more importantly, what if you want to debug this code? You can generate a thread dump, but it would give you a bunch of threads named pool-X-thread-Y, and you wouldn’t know which pool thread belongs to which calling thread, since all calling threads share the executor’s thread pool.

The new structured concurrency API improves the implementation, readability, and maintainability of code for requirements of this type. Using the StructuredTaskScope class, you can rewrite the previous code as follows:

Invoice createInvoice(int orderId, int customerId, String language)
    throws ExecutionException, InterruptedException {
  
    try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {

        Future<Customer> customerFuture = 
          scope.fork(() -> customerService.getCustomerBy(customerId));

        Future<Order> orderFuture = 
          scope.fork(() -> orderService.getOrderBy(orderId));


        Future<String> invoiceTemplateFuture = 
          scope.fork(() -> invoiceTemplateService.getTemplateFor(language));

        
        scope.join();              // Join all forks
        scope.throwIfFailed();     // ... and propagate errors


        Customer customer = customerFuture.resultNow();
        Order order = orderFuture.resultNow();
        String template = invoiceTemplateFuture.resultNow();


        // Here, all three forks have succeeded, so compose their results
        return invoice.generate(customer, order, template);
    }
}

There’s no need for an ExecutorService at class scope, so I replaced it with a StructuredTaskScope local to the method. Similarly, I replaced executor.submit() with scope.fork().

By using the scope.join() method, you can wait for all tasks to be completed—or for at least one to fail or be canceled. In the latter two cases, the subsequent throwIfFailed() throws an ExecutionException or a CancellationException.

The new approach brings several improvements over the old one.

◉ When you run the task, the subtasks form a self-contained unit of work in the code; you no longer need ExecutorService in a higher scope. The threads do not come from a thread pool; each subtask is executed in a new virtual thread.
◉ As soon as an error occurs in one of the subtasks, all other subtasks get canceled.
◉ When the calling thread is canceled, the subtasks are also canceled.
◉ The call hierarchy between the calling thread and the subtask-executing threads is visible in the thread dump.

To try the example yourself, you must explicitly add the incubator module to the module path and also enable preview features in Java 19. For example, if you have saved the code in a file named JDK19StructuredConcurrency.java, you can compile and run it as follows:

$ javac --enable-preview -source 19 --add-modules jdk.incubator.concurrent JDK19StructuredConcurrency.java

$ java --enable-preview --add-modules jdk.incubator.concurrent JDK19StructuredConcurrency

Project Amber JEPs


JEP 405 and JEP 427 are part of Project Amber, which focuses on smaller Java language features that can improve developers’ everyday productivity.

Pattern matching for switch. This is a feature that has already gone through two rounds of previews. First appearing in Java 17, pattern matching for switch allows you to write code like the following:

switch (obj) {
  case String s && s.length() > 8 -> System.out.println(s.toUpperCase());
  case String s                   -> System.out.println(s.toLowerCase());

  case Integer i                  -> System.out.println(i * i);

  default -> {}
}

You can use pattern matching to check if an object within a switch statement is an instance of a particular type and if it has additional characteristics. In the Java 17–compatible example above, the goal is to find strings longer than eight characters.

To improve the readability of this feature, Java 19 changed the syntax. In Java 17 and Java 18, the syntax was to write String s && s.length() > 8; now, in Java 19, instead of &&, you use the easier-to-read keyword when.

Therefore, the previous example would be written in Java 19 as the following:

switch (obj) {
  case String s when s.length() > 8 -> System.out.println(s.toUpperCase());
  case String s                     -> System.out.println(s.toLowerCase());

  case Integer i                    -> System.out.println(i * i);

  default -> {}
}

What’s also new is that the keyword when is a contextual keyword; therefore, it has a meaning only within a case label. If you have variables or methods with the name when in your code, you don’t need to change them. This change won’t break any of your other code.
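The following sketch shows both points at once: a guarded pattern using when, alongside a method and parameter that are themselves named when (on Java 19 this needs --enable-preview; pattern matching for switch became final in Java 21):

```java
public class WhenDemo {
    // "when" is still a perfectly legal identifier outside case labels.
    static int when(int when) { return when * 2; }

    public static void main(String[] args) {
        Object obj = "hello world";
        String result = switch (obj) {
            case String s when s.length() > 8 -> s.toUpperCase();
            case String s -> s.toLowerCase();
            default -> "?";
        };
        System.out.println(result);   // HELLO WORLD (length 11 > 8)
        System.out.println(when(21)); // 42
    }
}
```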

Record patterns. I am still discussing the topic of pattern matching here because JEP 405 is related to it. If the subject of records is new to you, “Records come to Java” by Ben Evans should help.

A record pattern comprises three components.

◉ A type
◉ A list of record component pattern matches
◉ An optional identifier

Record and type patterns can be nested to allow for robust, declarative, and modular data processing. It is better to explain this with an example, so let me clarify what a record pattern is. Assume you have defined the following Point record:

public record Point(int x, int y) {}

You also have a print() method that can print any object, including points.

private void print(Object object) {
  
  if (object instanceof Point point) {
    System.out.println("object is a point, x = " + point.x() 
                                      + ", y = " + point.y());
  }
  // else ...
}

You might have seen this notation before; it was introduced in Java 16 as pattern matching for instanceof.

Record pattern for instanceof. As of Java 19, JEP 405 allows you to use a new feature called a record pattern. This new addition allows you to write the previous code as follows:

private void print(Object object) {
  if (object instanceof Point(int x, int y)) {
    System.out.println("object is a point, x = " + x + ", y = " + y);
  } 
  // else ...
}

Instead of matching on Point point and accessing the fields through the point variable, as in the previous code, you can now match on Point(int x, int y) and access x and y directly.

Record pattern with switch. Previously with Java 17, you could also write the original example as a switch statement.

private void print(Object object) {
  switch (object) {
    case Point point
        -> System.out.println("object is a point, x = " + point.x() 
                                             + ", y = " + point.y());
    // other cases ...
  }
}

You can now also use a record pattern in the switch statement.

private void print(Object object) {
  switch (object) {
    case Point(int x, int y) 
        -> System.out.println("object is a point, x = " + x + ", y = " + y);

    // other cases ...
  }
}

Nested record patterns. It is now possible to match nested records. Here’s another example that defines a second record, Line, with a start point and a destination point, as follows:

public record Line(Point from, Point to) {}

The print() method can now use a record pattern to easily print the line’s x and y coordinates.

private void print(Object object) {
  if (object instanceof Line(Point(int x1, int y1), Point(int x2, int y2))) {
    System.out.println("object is a Line, x1 = " + x1 + ", y1 = " + y1 
                                     + ", x2 = " + x2 + ", y2 = " + y2);
  }
  // else ...
}

Alternatively, you can write the code as a switch statement.

private void print(Object object) {
  switch (object) {
    case Line(Point(int x1, int y1), Point(int x2, int y2))
        -> System.out.println("object is a Line, x1 = " + x1 + ", y1 = " + y1 
                                            + ", x2 = " + x2 + ", y2 = " + y2);
    // other cases ...
  }
}

Thus, record patterns provide an elegant way to access a record’s elements after a type check.

Project Panama JEPs


The Project Panama initiative, which includes JEP 424 and JEP 426, focuses on interoperability between the JVM and well-defined foreign (non-Java) APIs. These APIs often include interfaces that are used in C libraries.

Foreign functions and foreign memory. In Project Panama, a replacement for the error-prone, cumbersome, and slow Java Native Interface (JNI) has been in the works for a long time.

The Foreign Linker API and the Foreign Memory Access API were already introduced in Java 14 and Java 16, respectively, as incubator modules. In Java 17, these APIs were combined to form the single Foreign Function and Memory API, which remained in the incubator stage in Java 18.

Java 19’s JEP 424 has promoted the new API from incubator to preview stage, which means that only minor changes and bug fixes will be made. So, it’s time to introduce the new API.

The Foreign Function and Memory API enables access to native memory (that is, memory outside the Java heap) and access to native code (usually C libraries) directly from Java.

The following example stores a string in off-heap memory and then calls the C standard library’s strlen function to return the string’s length.

import java.lang.foreign.FunctionDescriptor;
import java.lang.foreign.Linker;
import java.lang.foreign.MemorySegment;
import java.lang.foreign.SymbolLookup;
import java.lang.invoke.MethodHandle;
import static java.lang.foreign.SegmentAllocator.implicitAllocator;
import static java.lang.foreign.ValueLayout.ADDRESS;
import static java.lang.foreign.ValueLayout.JAVA_LONG;

public class ForeignFunctionAndMemoryTest {
  public static void main(String[] args) throws Throwable {
    // 1. Get a lookup object for commonly used libraries
    SymbolLookup stdlib = Linker.nativeLinker().defaultLookup();

    // 2. Get a handle on the strlen function in the C standard library
    MethodHandle strlen = Linker.nativeLinker().downcallHandle(
        stdlib.lookup("strlen").orElseThrow(), 
        FunctionDescriptor.of(JAVA_LONG, ADDRESS));

    // 3. Convert Java String to C string and store it in off-heap memory
    MemorySegment str = implicitAllocator().allocateUtf8String("Happy Coding!");

    // 4. Invoke the foreign function
    long len = (long) strlen.invoke(str);

    System.out.println("len = " + len);
  }
}

The FunctionDescriptor expects the foreign function’s return type as the first parameter, with the function’s arguments coming in as extra parameters. The FunctionDescriptor handles accurate conversion of all Java types to C types and vice versa.

Since the Foreign Function and Memory API is still in the preview stage, you must specify a few parameters to compile and run the code.

$ javac --enable-preview -source 19 ForeignFunctionAndMemoryTest.java

$ java --enable-preview ForeignFunctionAndMemoryTest

As a developer who has worked with JNI—and remembers how much Java and C boilerplate code I had to write and keep in sync—I am delighted that the effort required to call the native function has been reduced by orders of magnitude.

Vector math. I’ll start by dispelling a possible point of confusion: The new Vector API has nothing to do with the java.util.Vector class. Instead, this is a new API for mathematical vector computation and mapping to modern Single-Instruction-Multiple-Data (SIMD) CPUs.

The Vector API defines vector computations that reliably compile at runtime to optimal vector instructions on supported CPU architectures, outperforming equivalent scalar computations. Java 19 contains the fourth incubation of this API.

The API’s user model makes vectorization far more reliable and predictable than relying on the HotSpot JVM’s autovectorizer alone, letting developers design sophisticated vector algorithms in Java.

The Vector API has been part of the JDK since Java 16 as an incubator module, and it underwent significant development in Java 17 and Java 18. In JDK 19, the API adds support for loading and storing vectors to and from the memory segments defined by the Foreign Function and Memory API preview.

Java 19's JEP 426 adds the cross-lane vector operations compress and expand, along with a complementary vector mask compress operation. The compress operation maps the lanes of a source vector, as selected by a mask, to a destination vector in lane order; it is useful for filtering query results. The expand operation does the inverse.

The API also expands its bitwise integral lanewise operations, adding operations such as counting the number of one bits, reversing the order of bits, and compressing and expanding bits.

The API’s objectives include being unambiguous, being platform-neutral, and having dependable runtime and compilation performance on the x64 and AArch64 architectures.

The hardware port

RISC-V is a free, open source instruction set architecture that’s becoming increasingly popular, and now there’s a Linux JDK for that architecture in JEP 422. A wide range of language toolchains already supports this hardware instruction set.

Initially, the Linux/RISC-V port supports only one general-purpose 64-bit instruction set architecture with vector instructions: an RV64GV configuration of RISC-V. More may be supported in the future.

The HotSpot JVM subsystems supported by this new Java 19 port are

◉ C1 (client) just-in-time (JIT) compiler
◉ C2 (server) JIT compiler
◉ Template interpreter
◉ All mainline garbage collectors, including ZGC and Shenandoah

Source: oracle.com

Wednesday, January 4, 2023

Quiz yourself: The three-argument overload of the Stream API’s reduce method

Oracle Java, Oracle Java Exam, Oracle Java Tutorial and Materials, Oracle Java Certification, Oracle Java Guides

There are three reduce overloads; you should know what they do.

Imagine you have the following Person record:

record Person(String name, Integer experience) {}

Your colleague wrote the following code to calculate the total experience of all the people in the stream:

public static Integer calculateTotalExperience(Stream&lt;Person&gt; stream) {
  return stream.reduce(Integer.valueOf(0),
    (sum, p) -> sum += p.experience, // line n1
    (v1, v2) -> v1 * v2);            // line n2
}

To test the code, your colleague used the following test case, which produced a total experience of 15 years:

Person p1 = new Person("P1", 3);
Person p2 = new Person("P2", 3);
Person p3 = new Person("P3", 4);
Person p4 = new Person("P4", 5);

List&lt;Person&gt; list = List.of(p1, p2, p3, p4);
Integer totalAge = calculateTotalExperience(list.stream());

Which statement is correct? Choose one.

A. Line n1 contains an error.

B. Line n2 contains an error.

C. Both lines n1 and n2 contain errors.

D. The code is properly constructed for calculating the total experience.

Answer. This question investigates the three-argument overload of the reduce method in the Stream API.

In the Stream API, the reduce method creates a single result by taking the elements of the stream one at a time and updating an intermediate result. When all the stream data has been used, that intermediate result is considered final.

There are three reduce overloads, and it’s helpful to discuss all of them, since they introduce the key concepts sequentially.

The first overload takes a single argument that’s a BinaryOperator. That operator combines pairs of items of the stream data type into one item of the same type. Then the next stream item is combined with that intermediate result, and this is done repeatedly until all the stream data has been used. If the stream is empty, there can’t be a result in the normal way. Because of that, this overload returns an Optional that either contains the result of a nonempty stream or is itself empty to indicate no result.
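As a quick illustration (the values here are arbitrary, not taken from the quiz), the single-argument overload returns an Optional that is empty exactly when the stream is:

```java
import java.util.Optional;
import java.util.stream.Stream;

public class ReduceOneArgDemo {
    public static void main(String[] args) {
        // Non-empty stream: the operator folds the elements pairwise.
        Optional<Integer> sum = Stream.of(3, 3, 4, 5).reduce(Integer::sum);
        System.out.println(sum.get());        // prints 15

        // Empty stream: there is nothing to combine, so the Optional is empty.
        Optional<Integer> none = Stream.<Integer>empty().reduce(Integer::sum);
        System.out.println(none.isPresent()); // prints false
    }
}
```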

The two-argument overload of reduce also takes a value of the result type. This is called the identity value and must have a couple of properties. First, it represents the result value if the stream is empty. Second, it should be possible to incorporate this value into the binary operator’s calculations any number of times without changing the final result. So, for simple addition, this identity value would be zero. For multiplication, it would be one. Because the identity value is provided, this overload does not need to return an Optional, and instead it returns a value of the stream type under all normal circumstances.
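A minimal sketch of the two-argument overload (again with arbitrary values): because the identity is supplied, the result is a plain value rather than an Optional, and an empty stream simply yields the identity.

```java
import java.util.stream.Stream;

public class ReduceIdentityDemo {
    public static void main(String[] args) {
        // Zero is the identity for addition: folding it in never changes a sum.
        int total = Stream.of(3, 3, 4, 5).reduce(0, Integer::sum);
        System.out.println(total); // prints 15

        // An empty stream returns the identity value itself, not an Optional.
        int empty = Stream.<Integer>empty().reduce(0, Integer::sum);
        System.out.println(empty); // prints 0
    }
}
```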

The third overload, which takes three arguments, is the topic of this question. This overload is used when the result is not of the same type as the stream data. In other APIs, an equivalent method might go by another name, perhaps involving the word fold or aggregate.

The operation of this three-argument reduction takes an identity value of the result type, rather than the stream type. It also takes a BiFunction operation that combines a value of the result type with a value of the stream type and produces a new value of the result type. This works well but has a problem in a parallel configuration of the stream.

In parallel mode, each of the separate threads that work on the reduction produces a partial result derived from just some of the stream’s data. To get to a final result, these partial results must be combined. This is the purpose of the third argument, which is a BinaryOperator of the result type. The signature of this method is as follows:

&lt;U&gt; U reduce(U identity,
             BiFunction&lt;U, ? super T, U&gt; accumulator,
             BinaryOperator&lt;U&gt; combiner);

In the general case of a stream running in sequential mode, there won’t be multiple partial results across multiple threads. Consequently, the combiner operation won’t be needed in a stream running in sequential mode. This fact turns out to be important to answering this question.

In the code presented here, identity is an Integer containing zero. This is the correct value for the identity value of an addition operation.

The accumulator operation on line n1 is provided by the following lambda:

(sum, p) -> sum += p.experience

This code uses the += assignment operator to add the current stream item’s experience field to the sum so far. This might look suspect, for two reasons.

◉ First, sum is an Integer object, and that type is immutable. However, in Java, method and lambda formal parameters are mutable unless declared final, and the expression simply rebinds sum to refer to a newly created Integer object. So this concern is unfounded.

◉ The second potential concern is that the lambda must implement a BiFunction that returns an Integer object, yet this lambda has no explicit return. However, in Java an assignment is an expression with a value: the value of the expression sum += p.experience is the value assigned to sum. That is the correct value, so this lambda is correct both syntactically and semantically. Therefore, there’s no error on line n1, and options A and C are both incorrect.
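The point about assignment expressions can be seen in isolation with a small hypothetical snippet: a compound assignment such as += is an expression whose value is the value stored into the variable, so it can itself be assigned or returned.

```java
public class AssignmentValueDemo {
    public static void main(String[] args) {
        int sum = 10;
        // The compound assignment stores 15 into sum, and the expression
        // itself evaluates to that stored value, so result is also 15.
        int result = (sum += 5);
        System.out.println(sum + " " + result); // prints 15 15
    }
}
```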

Next, consider the combiner provided on line n2. This has the job of adding up the intermediate sums that might be created in separate threads if the stream were executed in parallel mode. However, it should be calculating a sum, not a multiplication, so that’s clearly a logical error. This tells you that option B is correct and, consequently, that option D is incorrect. Even though the code produces the right answer, it is not correctly written.
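One way to repair the code, sketched here on the assumption that simple addition is intended (the class name is illustrative), is to make the combiner add the partial sums. The corrected version then produces the same total whether the stream runs sequentially or in parallel:

```java
import java.util.List;
import java.util.stream.Stream;

public class CorrectedReduceDemo {
    record Person(String name, Integer experience) {}

    static Integer calculateTotalExperience(Stream<Person> stream) {
        return stream.reduce(0,
            (sum, p) -> sum + p.experience(), // accumulator: result + element
            Integer::sum);                    // combiner now adds, not multiplies
    }

    public static void main(String[] args) {
        List<Person> people = List.of(
            new Person("P1", 3), new Person("P2", 3),
            new Person("P3", 4), new Person("P4", 5));

        // Both modes now agree on the total of 15.
        System.out.println(calculateTotalExperience(people.stream()));
        System.out.println(calculateTotalExperience(people.parallelStream()));
    }
}
```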

As a side note, the Java documentation for the Collector interface mentions a similar situation to the one described in this quiz.

A sequential implementation of a reduction using a collector would create a single result container using the supplier function and invoke the accumulator function once for each input element. A parallel implementation would partition the input, create a result container for each partition, accumulate the contents of each partition into a sub-result for that partition, and then use the combiner function to merge the subresults into a combined result.

There doesn’t seem to be an equivalent statement for the reduce operation, but clearly the expectation is that the combiners will typically not be invoked when a stream runs sequentially. This also explains how the code generated the correct result when your colleague ran the test.

Of course, although there’s no obvious reason why it would be useful, nothing in the specification guarantees that the combiner won’t be invoked in sequential mode, so a developer must not assume the combiner will go unused. And simply switching the stream in this example to parallel mode should be expected to produce incorrect results.

Conclusion. The correct answer is option B.

Source: oracle.com