Saturday, April 30, 2022

Advantages and Disadvantages of Java Sockets


A socket is an endpoint of a two-way communication link between two programs running on a network. For example, suppose two people in different places want to communicate, and each has a mobile phone. One of them starts the communication by dialing the other; when the other person answers, a connection is established between the two. Once the connection is established, they can communicate by sending and receiving messages. A simple phone call is a good model of end-to-end communication: each mobile phone acts as an endpoint, just as a socket does.

How is communication done?

To communicate, both devices need a socket at their end. In Java, these endpoints are represented by socket objects (instances of java.net.Socket). To establish a connection and to send and receive messages and data, there must be a socket at the sender’s end and another at the receiver’s end.


How are sockets used to send and receive data?


We are going to use input and output streams:

◉ Socket input stream: to read data.
◉ Socket output stream: to write data.

Let’s say we want to send data from socket 1 to socket 2. In that case, we write to socket 1’s output stream and read from socket 2’s input stream; to send data back from socket 2 to socket 1, we write to socket 2’s output stream and read from socket 1’s input stream, as in the sketch below.
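Here is a minimal client-side sketch of that idea, assuming a server is already listening on the chosen host and port (both are placeholders):

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

public class SimpleClient {
  public static void main(String[] args) throws IOException {
    // Connect to a server socket; host and port are placeholders.
    try (Socket socket = new Socket("localhost", 5000);
         // Output stream: write data to the other endpoint.
         PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
         // Input stream: read data sent back by the other endpoint.
         BufferedReader in = new BufferedReader(
             new InputStreamReader(socket.getInputStream()))) {
      out.println("Hello from socket 1");
      System.out.println("Reply: " + in.readLine());
    }
  }
}

With that picture in mind, let’s discuss the advantages and disadvantages of Java sockets.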

Advantages and Disadvantages of Java Sockets


Advantages:

◉ Flexible and powerful.
◉ Very efficient.
◉ Up-to-date information can be sent between devices.
◉ Causes low network traffic if used efficiently.

Disadvantages:

◉ Increased complexity, cost, and high security restrictions.
◉ Socket-based communication allows only packets of raw data to be sent between applications.
◉ Communication can be established only with the machine that was requested, not with any other machine.
◉ Both ends must be able to interpret the raw data that is exchanged.

Source: geeksforgeeks.org

Friday, April 29, 2022

Lazy Java code makes applications elegant and sophisticated

In lazy development, you focus on the “what” and functional libraries handle the “how.”

“The functional style of programming is very charming,” says Venkat Subramaniam. “The code begins to read like the problem statement. We can relate to what the code is doing, and we can quickly understand it.” Not only that, Subramaniam explains, but as implemented in Java and beyond, functional-style code is lazy—and that laziness makes for efficient operations because the runtime isn’t doing unnecessary work.


Subramaniam, president of Agile Developer and an instructional professor at the University of Houston, believes that laziness is the secret to success, both in life and in programming. Pretend that your boss tells you on January 10 that a certain hour-long task must be done before April 15. A go-getter might do that task by January 11.

Aiming to complete the work superquickly is not the best approach, insists Subramaniam. Instead, be lazy. Don’t complete that task until April 14. Why? Because the results of the boss’s task aren’t needed yet, and the requirements may change before the deadline, or the task might be canceled altogether. Or you (or your boss) might even leave the company on April 13.

So, don’t complete the job until you need to do so, because maybe you won’t need to do so after all.

This same mindset should apply to your programming, says Subramaniam: “Efficiency often means not doing unnecessary work.”

Subramaniam often explores how functional-style programming is implemented in the latest versions of Java, and why he’s so enthusiastic about using this style for applications that process lots of data—and where it’s important to create code that is easy to read, easy to modify, and easy to test.

First, a quick comparison of functional-style programming and imperative-style programming.

Functional versus imperative programming

The old mainstream of imperative programming, which has been a part of the Java language from day one, relies upon developers to explicitly code not only what they want the program to do but also how to do it.

Consider software that has a huge amount of data to process. A developer might create a loop that examines each piece of data and, if appropriate, takes specific action on that data with each iteration of the loop. It’s up to the developer to create that loop and manage it—in addition to coding the business logic to be performed on the data.

This classic imperative model, argues Subramaniam, results in what he calls “accidental complexity,” as each line of code might perform multiple functions, which makes it hard to understand, modify, test, and verify. Plus, the developer must do a lot of work to set up and manage the data and iterations. “You get bogged down with the details,” he says. “This not only introduces complexity but makes code hard to change.”

By contrast, when using a functional style of programming, developers can focus almost entirely on what is to be done, while ignoring the how. Why? Because the how is handled by the underlying library of functions, which are defined separately and applied to the data as required.

Subramaniam says that functional-style programming provides highly expressive code, where each line of code does only one thing: “The code becomes easier to work with and easier to write.”

He adds that in functional-style programming, “The code becomes the business logic.”
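As a rough illustration of the contrast (the numbers and the doubling rule are arbitrary), compare an imperative loop with the equivalent stream pipeline:

import java.util.List;

public class StyleComparison {
  public static void main(String[] args) {
    List<Integer> numbers = List.of(3, 8, 12, 5, 20);

    // Imperative style: we manage the loop and the accumulation ourselves.
    int total = 0;
    for (int n : numbers) {
      if (n > 5) {
        total += n * 2;
      }
    }
    System.out.println(total);   // 80

    // Functional style: we declare what should happen; the stream handles the iteration.
    int total2 = numbers.stream()
                        .filter(n -> n > 5)
                        .mapToInt(n -> n * 2)
                        .sum();
    System.out.println(total2);  // 80
  }
}

Each line of the pipeline does only one thing: filter, transform, sum.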

No, this isn’t entirely new

If you’re familiar with programming languages such as Clojure, Python, Erlang, or Haskell, you know that functional-style programming isn’t new. It’s not even close to new: The granddaddy of functional-style programming languages, LISP, first appeared in 1958.

What’s new is that functional-style programming features were added to the Java programming language with the release of Java 8, which introduced lambda expressions, streams, and other features that enable developers to use functional-style programming where appropriate.

For example, streams are pipelines of data that can be transformed and mapped using functions. Conceptually, streams are easier to understand and manipulate than lists, which are a common data structure used in imperative-style programming.

Of course, Java 8 and later versions can still be programmed imperatively, giving developers tremendous flexibility. However, as Subramaniam says, functional-style programming is going mainstream, and that’s where the biggest benefits will be realized—especially when you get lazy.

Lazy programming in Java

Subramaniam explains that the Java language specifications for functional-style programming include laziness—and that’s a benefit at runtime.

Why? Functional-style programming, using streams, is designed to make a single pass through the data, performing all the necessary actions on each element in the stream before moving to the next element. If certain actions aren’t necessary to perform because the results of that action won’t be used, the Java implementation knows not to perform those actions on that element.

By contrast, in some other functional-style programming languages, there might be several iterations through the data. In some cases, actions will be performed on elements in the stream, even when it’s unnecessary or the results won’t be used. This can drastically hurt performance on very large datasets, he says.

In modern Java, says Subramaniam, “A collection of functions is applied on each element only as necessary. Instead of a function being applied on the entire collection, intermediate operations are fused together and run only when necessary.”
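A small sketch makes this laziness visible (the data and the peek() logging are only for demonstration): with findFirst() as the terminal operation, the fused filter and map steps run only for the elements that are actually needed.

import java.util.List;

public class LazyStreams {
  public static void main(String[] args) {
    List<String> names = List.of("Ann", "Bob", "Cheryl", "Dave");

    String first = names.stream()
        .peek(n -> System.out.println("filtering " + n))
        .filter(n -> n.length() > 3)
        .peek(n -> System.out.println("mapping " + n))
        .map(String::toUpperCase)
        .findFirst()
        .orElse("");

    System.out.println(first);
  }
}

Only "Ann", "Bob", and "Cheryl" are examined, and only "Cheryl" is mapped; "Dave" is never touched because the result is already known.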

He adds, “In other languages, you can’t use lambdas and streaming for big data projects if the language doesn’t support lazy evaluation.” Those lazy streams are what got Subramaniam hooked on functional programming in Java: “Laziness is the ultimate sophistication.”

Source: oracle.com

Thursday, April 28, 2022

Bruce Eckel on Java interfaces and sealed classes

With the introduction of default and static methods in interfaces, Java lets you write method code in an interface that you might not want to be public.

With the introduction of default and static methods in interfaces, Java made it possible to write method code in an interface that you might not want to be public. In the code below, interface Old declares fd() and fs() as default and static methods, respectively. These methods are called only by f() and g(), so you can make them private.


// interfaces/PrivateInterfaceMethods.java

// {NewFeature} Since JDK 9

interface Old {

  default void fd() {

    System.out.println("Old::fd()");

  }

  static void fs() {

    System.out.println("Old::fs()");

  }

  default void f() {

    fd();

  }

  static void g() {

    fs();

  }

}

class ImplOld implements Old {}

interface JDK9 {

  private void fd() { // Private instance method: no default keyword needed

    System.out.println("JDK9::fd()");

  }

  private static void fs() {

    System.out.println("JDK9::fs()");

  }

  default void f() {

    fd();

  }

  static void g() {

    fs();

  }

}

class ImplJDK9 implements JDK9 {}

public class PrivateInterfaceMethods {

  public static void main(String[] args) {

    new ImplOld().f();

    Old.g();

    new ImplJDK9().f();

    JDK9.g();

  }

}

/* Output:

Old::fd()

Old::fs()

JDK9::fd()

JDK9::fs()

*/

(Note: The {NewFeature} comment tag excludes this example from the Gradle build that uses JDK 8.)

JDK9 turns fd() and fs() into private methods using the feature finalized in JDK 9. Note that fd() no longer needs the default keyword: a private interface method supplies its body directly, so the default modifier is neither needed nor allowed.

Sealed classes and interfaces

An enumeration creates a class that has only a fixed number of instances. JDK 17 finalizes the introduction of sealed classes and interfaces, so the base class or interface can constrain what classes can be derived from it. This allows you to model a fixed set of kinds of values.

// interfaces/Sealed.java

// {NewFeature} Since JDK 17

sealed class Base permits D1, D2 {}

final class D1 extends Base {}

final class D2 extends Base {}

// Illegal:

// final class D3 extends Base {}

The compiler produces an error if you try to inherit a subclass such as D3 that is not listed in the permits clause. In the code above, there can be no subclasses other than D1 and D2. Thus, you can ensure that any code you write will only ever need to consider D1 and D2.

You can also seal interfaces and abstract classes.

// interfaces/SealedInterface.java

// {NewFeature} Since JDK 17

sealed interface Ifc permits Imp1, Imp2 {}

final class Imp1 implements Ifc {}

final class Imp2 implements Ifc {}

sealed abstract class AC permits X {}

final class X extends AC {}

If all subclasses are defined in the same file, you don’t need the permits clause. In the following, the compiler will prevent any attempt to inherit a Shape outside of SameFile.java:

// interfaces/SameFile.java

// {NewFeature} Since JDK 17

sealed class Shape {}

final class Circle extends Shape {}

final class Triangle extends Shape {}

The permits clause allows you to define the subclasses in separate files, as follows:

// interfaces/SealedPets.java

// {NewFeature} Since JDK 17

sealed class Pet permits Dog, Cat {}

// interfaces/SealedDog.java

// {NewFeature} Since JDK 17

final class Dog extends Pet {}

// interfaces/SealedCat.java

// {NewFeature} Since JDK 17

final class Cat extends Pet {}

Subclasses of a sealed class must be modified by one of the following:

◉ final: No further subclasses are allowed.

◉ sealed: A sealed set of subclasses is allowed.

◉ non-sealed: This is a new keyword that allows inheritance by unknown subclasses.

The sealed subclasses maintain strict control of the hierarchy.

// interfaces/SealedSubclasses.java

// {NewFeature} Since JDK 17

sealed class Bottom permits Level1 {}

sealed class Level1 extends Bottom permits Level2 {}

sealed class Level2 extends Level1 permits Level3 {}

final class Level3 extends Level2 {}

Note that a sealed class must have at least one subclass.

A sealed base class cannot prevent the use of a non-sealed subclass, so you can always open things back up.

// interfaces/NonSealed.java

// {NewFeature} Since JDK 17

sealed class Super permits Sub1, Sub2 {}

final class Sub1 extends Super {}

non-sealed class Sub2 extends Super {}

class Any1 extends Sub2 {}

class Any2 extends Sub2 {}

Sub2 allows any number of subclasses, so it seems like it releases control of the types you can create. However, you strictly limit the immediate subclasses of the sealed class Super. That is, Super still allows only the direct subclasses Sub1 and Sub2.

A JDK 16 record (described in a previous article in this series) can also be used as a sealed implementation of an interface. Because a record is implicitly final, it does not need to be preceded by the final keyword.

// interfaces/SealedRecords.java

// {NewFeature} Since JDK 17

sealed interface Employee

  permits CLevel, Programmer {}

record CLevel(String type)

  implements Employee {}

record Programmer(String experience)

  implements Employee {}

The compiler prevents you from downcasting to illegal types from within a sealed hierarchy.

// interfaces/CheckedDowncast.java

// {NewFeature} Since JDK 17

sealed interface II permits JJ {}

final class JJ implements II {}

class Something {}

public class CheckedDowncast {

  public void f() {

    II i = new JJ();

    JJ j = (JJ)i;

    // Something s = (Something)i;

    // error: incompatible types: II cannot

    // be converted to Something

  }

}

You can discover the permitted subclasses at runtime using the getPermittedSubclasses() call, as follows:

// interfaces/PermittedSubclasses.java

// {NewFeature} Since JDK 17

sealed class Color permits Red, Green, Blue {}

final class Red extends Color {}

final class Green extends Color {}

final class Blue extends Color {}

public class PermittedSubclasses {

  public static void main(String[] args) {

    for(var p: Color.class.getPermittedSubclasses())

      System.out.println(p.getSimpleName());

  }

}

/* Output:

Red

Green

Blue

*/

Source: oracle.com

Wednesday, April 27, 2022

Parallel streams in Java: Benchmarking and performance considerations

The Stream API makes it possible to execute a sequential stream in parallel without rewriting the code.

The Stream API brought a new programming paradigm to Java: a declarative way of processing data using streams—expressing what should be done to the values and not how it should be done. More importantly, the API allows you to harness the power of multicore architectures for the parallel processing of data. There are two kinds of streams.


◉ A sequential stream is one whose elements are processed sequentially (as in a for loop) when the stream pipeline is executed by a single thread.

◉ A parallel stream is split into multiple substreams that are processed in parallel by multiple instances of the stream pipeline being executed by multiple threads, and their intermediate results are combined to create the final result.

A parallel stream can be created directly only on a collection, by invoking the Collection.parallelStream() method.

The sequential or parallel mode of an existing stream can be modified by calling the BaseStream.sequential() and BaseStream.parallel() intermediate operations, respectively. A stream is executed sequentially or in parallel depending on the execution mode of the stream on which the terminal operation is initiated.
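A short sketch of that rule (the data is arbitrary): the mode in effect when the terminal operation starts is the one that counts, so the last sequential()/parallel() call before the terminal operation wins.

import java.util.List;

public class ExecutionMode {
  public static void main(String[] args) {
    List<Integer> data = List.of(1, 2, 3, 4, 5);

    int sum = data.stream()
                  .sequential()
                  .parallel()   // last mode change before the terminal operation
                  .mapToInt(Integer::intValue)
                  .sum();       // executed in parallel

    System.out.println(sum);    // 15
  }
}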

The Stream API makes it possible to execute a sequential stream in parallel without rewriting the code. The primary reason for using parallel streams is to improve performance while at the same time ensuring that the results obtained are the same, or at least compatible, regardless of the mode of execution. Although the API goes a long way toward achieving its aim, it is important to understand the pitfalls to avoid when executing stream pipelines in parallel.

Using parallel streams

Building parallel streams. The execution mode of an existing stream can be set to parallel by calling the parallel() method on the stream. The parallelStream() method of the Collection interface can be used to create a parallel stream with a collection as the datasource. No other code is necessary for parallel execution, as the data partitioning and thread management for a parallel stream are handled by the API and the JVM. As with any stream, the stream is not executed until a terminal operation is invoked on it.

The isParallel() method of the stream interfaces can be used to determine whether the execution mode of a stream is parallel.
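For example, a quick check of the execution mode might look like this (a sketch with arbitrary data):

import java.util.List;

public class ParallelCheck {
  public static void main(String[] args) {
    List<String> words = List.of("alpha", "beta", "gamma");

    System.out.println(words.stream().isParallel());             // false
    System.out.println(words.parallelStream().isParallel());     // true
    System.out.println(words.stream().parallel().isParallel());  // true
  }
}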

Executing parallel streams. The Stream API allows a stream to be executed either sequentially or in parallel—meaning that all stream operations can execute either sequentially or in parallel. A sequential stream is executed in a single thread running on one CPU core. The elements in the stream are processed sequentially in a single pass by the stream operations that are executed in the same thread.

A parallel stream is executed by different threads, running on multiple CPU cores in a computer. The stream elements are split into substreams that are processed by multiple instances of the stream pipeline being executed in multiple threads. The partial results from the processing of each substream are merged (or combined) into a final result.

Parallel streams utilize the fork/join framework for executing parallel tasks. This framework provides support for the thread management necessary to execute the substreams in parallel. The number of threads employed during parallel stream execution is dependent on the CPU cores in the computer.
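You can see the relevant numbers for your own machine with a couple of standard calls (a minimal sketch):

import java.util.concurrent.ForkJoinPool;

public class PoolInfo {
  public static void main(String[] args) {
    // Parallel streams use the common fork/join pool by default.
    System.out.println("CPU cores: "
        + Runtime.getRuntime().availableProcessors());
    System.out.println("Common pool parallelism: "
        + ForkJoinPool.commonPool().getParallelism());
  }
}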

Factors affecting performance

There are no guarantees that executing a stream in parallel will improve performance. This section looks at some factors that can affect performance.

In general, increasing the number of CPU cores and, thereby, the number of threads that can execute in parallel scales performance only up to a threshold for a given size of data, as some threads might become idle if there is no data left for them to process. The number of CPU cores boosts performance to a certain extent, but it is not the only factor that should be considered when deciding whether to execute a stream in parallel.

Inherent in the total cost of parallel processing is the startup cost of setting up the parallel execution. If, at the outset, this cost is already comparable to the cost of sequential execution, not much can be gained by resorting to parallel execution.

A combination of the following three factors can be crucial in deciding whether a stream should be executed in parallel:

◉ Sufficiently large data size. The size of the stream must be sufficiently large to warrant parallel processing; otherwise, sequential processing is preferable. The startup cost can be prohibitive for parallel execution if the stream size is too small.

◉ Computation-intensive stream operations. If the stream operations are small computations, the stream size should be proportionately large to warrant parallel execution. If the stream operations are computation-intensive, the stream size is less significant, and parallel execution can boost performance.

◉ Easily splittable stream. If the cost of splitting the stream into substreams is higher than the cost of processing the substreams, employing parallel execution can be futile. Collections such as an ArrayList, a HashMap, or a simple array are efficiently splittable, whereas a LinkedList or an I/O-based datasource is the least efficient in this regard.

Benchmarking is recommended

Benchmarking—that is, measuring performance—is strongly recommended before deciding whether parallel execution will be beneficial. Listing 1 illustrates a simple program that reads the system clock before and after a stream is executed; it can be used to get a sense of how well a stream performs.

Listing 1. Benchmark the performance of sequential and parallel streams.

import java.util.function.LongFunction;

import java.util.stream.LongStream;

/*

 * Benchmark the execution time to sum numbers from 1 to n values

 * using streams.

 */

public final class StreamBenchmarks {

  public static long seqSumRangeClosed(long n) {                           // (1)

    return LongStream.rangeClosed(1L, n).sum();

  }

  public static long paraSumRangeClosed(long n) {                          // (2)

    return LongStream.rangeClosed(1L, n).parallel().sum();

  }

  public static long seqSumIterate(long n) {                               // (3)

    return LongStream.iterate(1L, i -> i + 1).limit(n).sum();

  }

  public static long paraSumIterate(long n) {                              // (4)

    return LongStream.iterate(1L, i -> i + 1).limit(n).parallel().sum();

  }

  public static long iterSumLoop(long n) {                                 // (5)

    long result = 0;

    for (long i = 1L; i <= n; i++) {

      result += i;

    }

    return result;

  }

  /*

   * Applies the function parameter func, passing n as parameter.

   * Returns the average time (ms.) to execute the function 100 times.

   */

  public static <R> double measurePerf(LongFunction<R> func, long n) {     // (6)

    int numOfExecutions = 100;

    double totTime = 0.0;

    R result = null;

      for (int i = 0; i < numOfExecutions; i++) {                          // (7)

        double start = System.nanoTime();                                  // (8)

        result = func.apply(n);                                            // (9)

        double duration = (System.nanoTime() - start)/1_000_000;           // (10)

        totTime += duration;                                               // (11)

      }

      double avgTime = totTime/numOfExecutions;                            // (12)

    return avgTime;

  }

  /*

   * Executes the functions in the vararg funcs for different stream sizes.

   */

  public static <R> void xqtFunctions(LongFunction<R>... funcs) {          // (13)

    long[] sizes = {1_000L, 10_000L, 100_000L, 1_000_000L};                // (14)

    // For each stream size ...

    for (int i = 0; i < sizes.length; ++i) {

      System.out.printf("%7d", sizes[i]);                                  // (15)

      // ... execute the functions passed in the var-arg funcs.

      for (int j = 0; j < funcs.length; ++j) {                             // (16)

        System.out.printf("%10.5f", measurePerf(funcs[j], sizes[i]));

      }

      System.out.println();

    }

  }

  public static void main(String[] args) {                               // (17)

    System.out.println("Streams created with the rangeClosed() method:");// (18)

    System.out.println(" Size Sequential Parallel");

    xqtFunctions(StreamBenchmarks::seqSumRangeClosed,

      StreamBenchmarks::paraSumRangeClosed);

    System.out.println("Streams created with the iterate() method:");    // (19)

    System.out.println(" Size Sequential Parallel");

    xqtFunctions(StreamBenchmarks::seqSumIterate,

      StreamBenchmarks::paraSumIterate);

    System.out.println("Iterative solution with an explicit loop:");     // (20)

    System.out.println(" Size Iterative");

    xqtFunctions(StreamBenchmarks::iterSumLoop);

  }

}

Possible output from the program would look like Figure 1; these results will be referred to throughout the rest of the article.


Figure 1. Output from the benchmark program

The class StreamBenchmarks in Listing 1 defines five methods, at lines (1) through (5), that compute the sum of values from 1 to n. These methods compute the sum in various ways, and each method is executed with four different values of n—that is, the stream size is the number of values for summation.

The program prints the benchmarks for each method for the different values of n, which of course can vary, as many factors can influence the results—the most significant one being the number of CPU cores on the computer.

Here’s a more detailed look.

Methods at lines (1) and (2). The methods seqSumRangeClosed() at line (1) and paraSumRangeClosed() at line (2) perform the computation on a sequential and a parallel stream, respectively, that are created with the rangeClosed() method.

return LongStream.rangeClosed(1L, n).sum();             // sequential stream
...
return LongStream.rangeClosed(1L, n).parallel().sum();  // parallel stream

Note that the terminal operation sum() is not computation-intensive.

The parallel stream starts to show better performance when the number of values approaches 100,000; only then is the stream size large enough for parallel execution to pay off. Note that the range of values defined by the arguments of the rangeClosed() method can be efficiently split into substreams, because its start and end values are provided.

Methods at lines (3) and (4). The methods seqSumIterate() at line (3) and paraSumIterate() at line (4) return a sequential and a parallel stream, respectively, created with the iterate() method.

return LongStream.iterate(1L, i -> i + 1).limit(n).sum(); // Sequential ...
return LongStream.iterate(1L, i -> i + 1).limit(n).parallel().sum(); // Parallel

Here, the method iterate() creates an infinite stream, and the limit() intermediate operation truncates the stream according to the value of n. The performance of both streams degrades fast when the number of values increases.

However, the parallel stream performs worse than the sequential stream in all cases. The values generated by the iterate() method are not known before the stream is executed, and the limit() operation is also stateful, making the process of splitting the values into substreams inefficient in the case of the parallel stream.

Method at line (5). Method iterSumLoop() at line (5) uses a for loop to compute the sum. Using a for loop to calculate the sum performs best for all values of n compared to the streams, showing that significant overhead is involved in using streams for summing a sequence of numerical values.

The rest of the listing. Here is a description of the other main lines in Listing 1.

The methods measurePerf() at line (6) and xqtFunctions() at line (13) create the benchmarks for functions passed as parameters.

In the measurePerf() method, the system clock is read at line (8) and the function parameter func is applied at line (9). The system clock is read again at line (10) after the function application at line (9) has been completed. The execution time calculated at line (10) reflects the time for executing the function.

Applying the function func evaluates the lambda expression or the method reference implementing the LongFunction interface. In Listing 1, the function parameter func is implemented by method references that call methods at lines (1) through (5) in the StreamBenchmarks class whose execution time is to be measured.

Side effects and other factors


Efficient execution of parallel streams that produces the desired results requires the stream operations (and their behavioral parameters) to avoid certain side effects.

Noninterfering behaviors. The behavioral parameters of stream operations should be noninterfering, both for sequential and parallel streams. Unless the stream datasource is concurrent, the stream operations should not modify it during the execution of the stream pipeline.

Stateless behaviors. The behavioral parameters of stream operations should be stateless, both for sequential and parallel streams. A behavior parameter implemented as a lambda expression should not depend on any state that might change during the execution of the stream pipeline. The results from a stateful behavioral parameter can be nondeterministic or even incorrect. For a stateless behavior parameter, the results are always the same.

Having a shared state that is accessed by the behavior parameters of the stream operations in a pipeline is not a good idea. Why? Executing the pipeline in parallel can lead to race conditions when the global state is accessed; using synchronization code to provide thread safety may defeat the purpose of parallelization. Using the three-argument reduce() or collect() method can be a better solution to encapsulate shared state.
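The following sketch (with an arbitrary range of numbers) shows both the problem and the kind of fix the three-argument collect() method offers:

import java.util.ArrayList;
import java.util.List;
import java.util.stream.IntStream;

public class SharedState {
  public static void main(String[] args) {
    // Problematic: the lambda mutates a shared ArrayList from many threads,
    // which is a race condition in a parallel stream.
    List<Integer> unsafe = new ArrayList<>();
    IntStream.rangeClosed(1, 10_000).parallel().forEach(unsafe::add);
    System.out.println(unsafe.size());   // often less than 10000, and may even throw

    // Better: collect() manages a separate container per thread and combines them safely.
    ArrayList<Integer> safe = IntStream.rangeClosed(1, 10_000).parallel()
        .collect(ArrayList::new, ArrayList::add, ArrayList::addAll);
    System.out.println(safe.size());     // always 10000
  }
}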

The intermediate operations distinct(), skip(), limit(), and sorted() are stateful. They can carry extra performance overhead when executed in a parallel stream, because such an operation can entail multiple passes over the data and may require significant data buffering.

Ordering and terminal operations. An ordered stream processed by operations that preserve the encounter order will produce the same results, regardless of whether it is executed sequentially or in parallel. However, repeated execution of an unordered stream—sequential or parallel—can produce different results.

Preserving the encounter order of elements in an ordered parallel stream can incur a performance penalty. The performance of an ordered parallel stream can be improved if the ordering constraint is removed by calling the unordered() intermediate operation on the stream.
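For instance, removing the ordering constraint before a limit() operation can make the parallel pipeline cheaper (a sketch with an arbitrary range; the actual timing difference depends on your data and hardware):

import java.util.stream.IntStream;

public class UnorderedLimit {
  public static void main(String[] args) {
    // Ordered parallel stream: limit() must keep the first 100 elements.
    long ordered = IntStream.rangeClosed(1, 1_000_000).parallel()
        .limit(100)
        .count();

    // Unordered parallel stream: limit() may keep any 100 elements,
    // which is typically cheaper to compute in parallel.
    long unordered = IntStream.rangeClosed(1, 1_000_000).parallel()
        .unordered()
        .limit(100)
        .count();

    System.out.println(ordered + " " + unordered);  // 100 100
  }
}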

The stateful intermediate operations distinct(), skip(), and limit() can improve performance in a parallel stream that is unordered, as compared to one that is ordered.

◉ Rather than needing to buffer the first occurrence of a duplicate value, the distinct() operation need only buffer any occurrence.

◉ The skip() operation can skip any n elements, rather than skipping the first n elements.

◉ The limit() operation can truncate the stream after any n elements, rather than just after the first n elements.

The terminal operation findAny() is intentionally nondeterministic, and it can return any element in the stream. It is especially suited for parallel streams.

The forEach() terminal operation ignores the encounter order, but the forEachOrdered() terminal operation preserves the order. The sorted() stateful intermediate operation, on the other hand, enforces a specific encounter order, regardless of whether it is executed in a parallel pipeline.

Autoboxing and unboxing of numeric values. Because the Stream API allows both object and numeric streams, and it provides support for conversion between them, choosing a numeric stream, when possible, can offset the overhead of autoboxing and unboxing in object streams.
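As a small sketch of the difference, both of the following compute the same sum, but the first boxes every value into a Long while the second stays entirely in primitives:

import java.util.stream.LongStream;
import java.util.stream.Stream;

public class BoxingOverhead {
  public static void main(String[] args) {
    long n = 1_000_000L;

    // Object stream: every value is wrapped in a Long before summing.
    long boxedSum = Stream.iterate(1L, i -> i + 1)
        .limit(n)
        .reduce(0L, Long::sum);

    // Numeric (primitive) stream: no boxing, same result.
    long primitiveSum = LongStream.rangeClosed(1L, n).sum();

    System.out.println(boxedSum + " " + primitiveSum);
  }
}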

Source: oracle.com

Monday, April 25, 2022

Quiz yourself: Java’s text blocks and variable-length argument lists

When do text blocks contain a leading new line, and when won’t you see a new line?

[Java 17, which is a long-term support release, is a significant step forward from Java 11. Although Java 17 became generally available in September 2021, the matching version of the certification exam has not been released as of this quiz’s publication date. The exam is in the later stages of development, and we’re excited to begin presenting questions based on the objectives for the new exam. —Ed.]


Given the following method code fragment

var f = """

        ◌◌%s

        """.concat("◌◌").stripIndent();

f.concat("""

         ◌◌%s

         """);

System.out.print(f.formatted(new String[]{"A", "B"}));

Assume each dotted circle (◌) denotes a space character.

What is the result? Choose one.

A. A

B. ◌◌A

C. ◌◌A

     ◌◌B

D. A

    ◌◌B

E. [Ljava.lang.String;@41629346

Answer. This question investigates text blocks, new String behaviors, and the handling of variable-length argument lists. The investigation begins with the preparation of a String that starts out as a literal text block; keep in mind that the ◌ character in this quiz denotes a space. The string literal starts out as the following:

◌◌%s<new line>

Notice there’s no leading new line there; the text block rules mandate that the three opening double quote marks must be followed by a new line. The contents of the text block actually start on that second line.

Immediately after the text block, there’s a concat() method that creates a new String containing the following:

◌◌%s<new line>

◌◌

The next operation is the stripIndent() method. This method deletes the indentation that’s common across all the lines of a String. In this code sample, there are two spaces on each line, and these will be removed. Therefore, the String assigned to the variable f is

%s<new line>

The second concat() operation appends a new text block to f and produces a new string—but that String is lost because the result of the operation is not stored. Remember, it’s fundamental to String objects that they are immutable.

Java 15 added a new utility method, s.formatted(...), which works the same way as String.format(s, ...). Notably, both of these methods, and also printf methods, take a variable-length argument list. In Java, a variable-length argument list is always passed as an array. That array can be built by the compiler from comma-separated arguments (which is how the arguments are normally presented), or the array can be provided by the programmer. For example, the following two lines are identical in their behavior:

System.out.print(f.formatted(new String[]{"A", "B"}));

System.out.print(f.formatted("A", "B"));

◉ The array argument passed to the formatted method effectively carries two values: A and B. Option E is incorrect because the array itself will not be presented in the output.

◉ Only one %s placeholder (sometimes called a format specifier) exists in the formatting string. Therefore, only the first of the two values will be used; the second element is simply ignored. Thus, the output contains A but not B.

◉ Because the spaces were deleted, the output will simply be the following:

A<new line>

These findings mean option A is correct, and options B, C, D, and E are incorrect.

Conclusion. The correct answer is option A.

Source: oracle.com

Friday, April 22, 2022

Bruce Eckel on Java records

The amount of boilerplate and errors eliminated by the addition of records to Java is quite significant. Records also make code much more readable.

JDK 16 finalized the addition of the record keyword, which defines classes designed to be data transfer objects (also called data carriers). Records automatically generate


◉ Immutable fields

◉ A canonical constructor

◉ An accessor method for each element

◉ The equals() method

◉ The hashCode() method

◉ The toString() method

The following code shows each feature:

// collections/BasicRecord.java

// {NewFeature} Since JDK 16

import java.util.*;

record Employee(String name, int id) {}

public class BasicRecord {

  public static void main(String[] args) {

    var bob = new Employee("Bob Dobbs", 11);

    var dot = new Employee("Dorothy Gale", 9);

    // bob.id = 12; // Error:

    // id has private access in Employee

    System.out.println(bob.name()); // Accessor

    System.out.println(bob.id()); // Accessor

    System.out.println(bob); // toString()

    // Employee works as the key in a Map:

    var map = Map.of(bob, "A", dot, "B");

    System.out.println(map);

  }

}

/* Output:

Bob Dobbs

11

Employee[name=Bob Dobbs, id=11]

{Employee[name=Dorothy Gale, id=9]=B, Employee[name=Bob Dobbs, id=11]=A}

*/

(Note: The {NewFeature} comment tag excludes this example from the Gradle build that uses JDK 8.)

For most uses of record, you will just give a name and provide the parameters, and you won’t need anything in the body. This automatically creates the canonical constructor that you see called in the first two lines of main(). This usage also creates the internal private final fields name and id; the constructor initializes these fields from its argument list.

You cannot add fields to a record except by defining them in the header. However, static methods, fields, and initializers are allowed.

Each property defined via the argument list for the record automatically gets its own accessor, as seen in the calls to bob.name() and bob.id(). (I appreciate that the designers did not continue the outdated JavaBean practice of accessors called getName() and getId().)

From the output, you can see that a record also creates a nice toString() method. Because a record also creates properly defined hashCode() and equals() methods, Employee can be used as the key in a Map. When that Map is displayed, the toString() method produces readable results.

If you later decide to add, remove, or change one of the fields in your record, Java regenerates the constructor, accessors, equals(), hashCode(), and toString() accordingly, so the result still works properly. This adaptability is one of the things that makes record so valuable.

A record can define methods, but the methods can only read fields, which are automatically final, as in the following:

// collections/FinalFields.java

// {NewFeature} Since JDK 16

import java.util.*;

record FinalFields(int i) {

  int timesTen() { return i * 10; }

  // void tryToChange() { i++; } // Error:

  // cannot assign a value to final variable i

}

Records can be composed of other objects, including other records, as follows:

// collections/ComposedRecord.java

// {NewFeature} Since JDK 16

record Company(Employee[] e) {}

// class Conglomerate extends Company {}

// error: cannot inherit from final Company

You cannot inherit from a record because it is implicitly final (and cannot be abstract). In addition, a record cannot be inherited from another class. However, a record can implement an interface, as follows:

// collections/ImplementingRecord.java

// {NewFeature} Since JDK 16

interface Star {

  double brightness();

  double density();

}

record RedDwarf(double brightness) implements Star {

  @Override public double density() { return 100.0; }

}

The compiler forces you to provide a definition for density(), but it doesn’t complain about brightness(). That’s because the record automatically generates an accessor for its brightness argument, and that accessor fulfills the contract for brightness() in interface Star.

A record can be nested within a class or defined locally within a method, as shown in the following example:

// collections/NestedLocalRecords.java

// {NewFeature} Since JDK 16

public class NestedLocalRecords {

  record Nested(String s) {}

  void method() {

    record Local(String s) {}

  }

}

Both nested and local uses of record are implicitly static.

Although the canonical constructor is automatically created according to the record arguments, you can add constructor behavior using a compact constructor, which looks like a constructor but has no parameter list, as follows:

// collections/CompactConstructor.java

// {NewFeature} Since JDK 16

record Point(int x, int y) {

  void assertPositive(int val) {

    if(val < 0)

      throw new IllegalArgumentException("negative");

  }

  Point { // Compact constructor: No parameter list

    assertPositive(x);

    assertPositive(y);

  }

}

The compact constructor is typically used to validate the arguments. It’s also possible to use the compact constructor to modify the initialization values for the fields.

// collections/PlusTen.java

// {NewFeature} Since JDK 16

record PlusTen(int x) {

  PlusTen {

    x += 10;

  }

  // Adjustment to field can only happen in

  // the constructor. Still not legal:

  // void mutate() { x += 10; }

  public static void main(String[] args) {

    System.out.println(new PlusTen(10));

  }

}

/* Output:

PlusTen[x=20]

*/

Although this seems as if final values are being modified, they are not. Behind the scenes, the compiler is creating an intermediate placeholder for x and then performing a single assignment of the result to this.x at the end of the constructor.

If necessary, you can replace the canonical constructor using normal constructor syntax, as follows:

// collections/NormalConstructor.java

// {NewFeature} Since JDK 16

record Value(int x) {

  Value(int x) { // With the parameter list

    this.x = x; // Must explicitly initialize

  }

}

Yes, this looks a bit strange. The constructor must exactly duplicate the signature of the record including the identifier names; you can’t define it using Value(int initValue). In addition, record Value(int x) produces a final field named x that is not initialized when using a noncompact constructor, so you will get a compile-time error if the constructor does not initialize this.x. Fortunately you’ll only rarely use the normal constructor form with record; if you do write a constructor, it will almost always be the compact form, which takes care of field initialization for you.

To copy a record, you must explicitly pass all fields to the constructor.

// collections/CopyRecord.java

// {NewFeature} Since JDK 16

record R(int a, double b, char c) {}

public class CopyRecord {

  public static void main(String[] args) {

    var r1 = new R(11, 2.2, 'z');

    var r2 = new R(r1.a(), r1.b(), r1.c());

    System.out.println(r1.equals(r2));

  }

}

/* Output:

true

*/

Creating a record generates an equals() method that ensures a copy is equal to its original.

Source: oracle.com

Wednesday, April 20, 2022

Java | How to start learning Java


Java is one of the most popular and widely used programming languages and platforms. A platform is an environment that helps to develop and run programs written in any programming language.

Java is fast, reliable, and secure. From desktop to web applications, scientific supercomputers to gaming consoles, cell phones to the Internet, Java is used in every nook and corner.

About Java

◉ Java is a simple language: Java is easy to learn, and its syntax is clear and concise. It is based on C++ (so it is easier for programmers who know C++), but it removes many confusing and rarely used features such as explicit pointers and operator overloading. Java also takes care of memory management through an automatic garbage collector, which collects unused objects automatically.

◉ Java is a platform-independent language: Programs written in Java are compiled into an intermediate-level language called bytecode, which is part of the Java platform and independent of the machine on which the programs run. This makes Java highly portable, as bytecode can be run on any machine by the Java Virtual Machine (JVM); thus Java provides reusability of code.

◉ Java is an object-oriented programming language: OOP makes a complete program simpler by dividing it into a number of objects. Objects act as a bridge for data to flow from one function to another, and we can easily modify data and functions as per the requirements of the program.

◉ Java is a robust language: Java programs must be reliable because they are used in both consumer and mission-critical applications, ranging from Blu-ray players to navigation systems.

◉ Java is a multithreaded language: Java can perform many tasks at once by defining multiple threads. For example, a program that manages a graphical user interface (GUI) while waiting for input from a network connection can use a separate thread for the waiting instead of using the default GUI thread for both tasks. This keeps the GUI responsive.

◉ Java programs can create applets: Applets are programs that run in web browsers. However, applet support was deprecated in the Java 9 release and removed in the Java 11 release because of waning browser support for the Java plugin.

◉ Java does not require any preprocessor: It does not require inclusion of header files for creating a Java application.

Therefore, Java is a very successful language and it is gaining popularity day by day.

Important tips and links to get you started

1. Understand the basics:

Learning the basics of any programming language is very important, and it is the best way to begin learning something new. Don’t be anxious; start with the core concepts of the language, get familiar with the environment, and you will get used to it in no time.

2. Patience is the key:

Learning Java can feel overwhelming because of the sheer volume of material about the language, but be patient and learn at your own pace; don’t rush. Mastering Java is a process that takes time, and even the best coders started somewhere. Just do as much as you can and keep going. Patience is the key to success.

3. Practice Coding

Once you have understood the basics, the best thing to do is to brush up your skills with regular practice. True knowledge comes only when you implement what you’ve learned; as the saying goes, practice makes perfect. So code more than you read, and your confidence will grow.

You can sharpen your coding skills on any online practice platform. Happy coding!

4. Read about Java regularly

Continuously read about the various topics in Java and try to explore more. This will help to maintain your interest in Java.

5. Study in a group

Group study is a good way to learn something. You get to know new things about a topic as everyone presents their ideas, and you can discuss and solve coding problems on the spot. Find a group of people who are also willing to learn Java.

Get help from a tutor and read as many books about Java as possible; there are many good books that will help you learn Java.

Setting up Java

You can download Java from Oracle’s official website, where you will find different versions of Java. Choose and download the one compatible with your operating system.

After you have set up the Java environment correctly, try running this simple program:

// A Java program to print OracleJavaCertified

public class GFG {

  public static void main(String[] args) {

    System.out.println("OracleJavaCertified");

  }

}

Output:

OracleJavaCertified

If the environment is set up correctly and the code is correctly written, you shall see this output on your console. That is your first Java program!


Source: geeksforgeeks.org

Monday, April 18, 2022

[Fixed] Java lang exceptionininitializererror com sun tools javac code typetags


A quick guide to fixing the java.lang.ExceptionInInitializerError involving com.sun.tools.javac.code.TypeTags when working with Maven.

1. Overview

In this tutorial, we’ll learn how to fix the java.lang.ExceptionInInitializerError involving com.sun.tools.javac.code.TypeTags when working with a Maven build.

2. Fix 1 – Provide the correct Java version in pom.xml

This issue is often fixed by providing the Java version in the correct format.

In the pom.xml file, you might be specifying the Java version as below.

<maven.compiler.source>1.11</maven.compiler.source>

<maven.compiler.target>1.11</maven.compiler.target>

Below is the complete pom.xml file for reference.

<project xmlns="http://maven.apache.org/POM/4.0.0"

         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"

         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

    <modelVersion>4.0.0</modelVersion>

    <groupId>org.example</groupId>

    <artifactId>Deep</artifactId>

    <version>1.0-SNAPSHOT</version>

    <properties>

        <maven.compiler.source>1.11</maven.compiler.source>

        <maven.compiler.target>1.11</maven.compiler.target>

        <dl4j.version>0.9.1</dl4j.version>

    </properties>

    <dependencies>

        <dependency>

            <groupId>org.nd4j</groupId>

            <artifactId>nd4j-native-platform</artifactId>

            <version>${dl4j.version}</version>

        </dependency>

        <dependency>

            <groupId>org.deeplearning4j</groupId>

            <artifactId>deeplearning4j-core</artifactId>

            <version>${dl4j.version}</version>

        </dependency>

        <dependency>

            <groupId>org.datavec</groupId>

            <artifactId>datavec-api</artifactId>

            <version>${dl4j.version}</version>

        </dependency>

        <dependency>

            <groupId>org.nd4j</groupId>

            <artifactId>nd4j-api</artifactId>

            <version>1.0.0-beta3</version>

        </dependency>

        <dependency>

            <groupId>org.deeplearning4j</groupId>

            <artifactId>deeplearning4j-play_2.11</artifactId>

            <version>0.9.1</version>

        </dependency>

    </dependencies>

</project>

This error may appear with JDK versions after 9, where the version has to be given as 10, 11, 12, 14, 17, and so on. Java versions after 9 should not be written with a leading "1.", so if you specify the Java version as 1.xx, you will most likely see the ExceptionInInitializerError.

To fix this error, change the Java version as follows.

<maven.compiler.source>11</maven.compiler.source>

<maven.compiler.target>11</maven.compiler.target>

3. Fix 2 – Update dependencies that require a newer Java version

If the above fix does not work, it means one of the dependencies requires a newer Java version.

You can list all transitive dependencies using the "mvn dependency:tree" command.

For example, if you are using the deeplearning4j-core JAR file, you may need to pull in the latest Lombok JAR to fix the issue.

Add the below jar as a dependency in pom.xml file.

<dependency>

  <groupId>org.projectlombok</groupId>

  <artifactId>lombok</artifactId>

  <version>1.18.2</version>

  <scope>provided</scope>

</dependency>

4. Fix 3 – Change JAVA_HOME or the IDE’s JDK version

If the above two solutions do not work, point JAVA_HOME to the latest JDK or change the JDK version configured in your IDE (for example, Eclipse).

Source: javacodegeeks.com

Friday, April 15, 2022

Java 18’s Simple Web Server: A tool for the command line and beyond


Learn how the new minimal server can make your life easier in the context of ad hoc coding, testing, prototyping, and debugging.

Java 18’s Simple Web Server is a minimal HTTP static file server that was added in JEP 408 to the jdk.httpserver module. It serves a single directory hierarchy, and it serves only static files over HTTP/1.1; dynamic content and other HTTP versions are not supported.

The web server’s specification is informed by the overarching goal of making the JDK more approachable. The server is an out-of-the-box tool with easy setup and minimal functionality that lets you hit the ground running and focus on the task at hand. The simplistic design also avoids any confusion with feature-rich or commercial-grade servers—after all, far better alternatives exist for production environments, and the Simple Web Server is not the right choice in such cases. Instead, this server shines in the context of prototyping, ad hoc coding, and testing.

The server supports only the HEAD and GET request methods; any other requests receive either a 501 - Not Implemented or a 405 - Not Allowed response. HEAD and GET requests are handled as follows:

◉ If the requested resource is a file, its content is served.

◉ If the requested resource is a directory with an index file, the content of the index file is served.

◉ Otherwise, the directory listing is returned.

The jwebserver command-line tool

The jwebserver tool comes with the following usage options:

jwebserver [-b bind address] [-p port] [-d directory]

           [-o none|info|verbose] [-h to show options]

           [-version to show version information]

Each option has a short and a long version, and there are conventional options for printing the help message and the version information. Here are the usage options.

◉ -h or -? or --help: Prints the help message and exits.

◉ -b addr or --bind-address addr: Specifies the address to bind to. The default is 127.0.0.1 or ::1 (loopback). For all interfaces, use -b 0.0.0.0 or -b ::.

◉ -d dir or --directory dir: Specifies the directory to serve. The default is the current directory.

◉ -o level or --output level: Specifies the output format. The levels are none, info, and verbose. The default is info.

◉ -p port or --port port: Specifies the port to listen on. The default is 8000.

◉ -version or --version: Prints the Simple Web Server’s version information and exits.

Starting the server

The following command starts the Simple Web Server:

$ jwebserver

By default, the server binds to the loopback address and port 8000 and serves the current working directory. If startup is successful, the server runs in the foreground and prints a message to System.out listing the local address and the absolute path of the directory being served, such as /cwd. For example

$ jwebserver

Binding to loopback by default. For all interfaces use "-b 0.0.0.0" or "-b ::".

Serving /cwd and subdirectories on 127.0.0.1 port 8000

URL http://127.0.0.1:8000/

Configuring the server

You can change the default configuration by using the respective options. For example, here is how to bind the Simple Web Server to all interfaces.

$ jwebserver -b 0.0.0.0

Serving /cwd and subdirectories on 0.0.0.0 (all interfaces) port 8000

URL http://123.456.7.891:8000/

Warning: This command makes the server accessible to all hosts on the network. Do not do this unless you are sure the server cannot leak any sensitive information.

As another example, here is how to run the server on port 9000.

$ jwebserver -p 9000

By default, every request is logged on the console. The output looks like the following:

127.0.0.1 - - [10/Feb/2021:14:34:11 +0000] "GET /some/subdirectory/ HTTP/1.1" 200 –

You can change the logging output with the -o option. The default setting is info. The verbose setting additionally includes the request and response headers as well as the absolute path of the requested resource.

Stopping the server

Once it is started successfully, the Simple Web Server runs until it is stopped. On UNIX platforms, the server can be stopped by sending it a SIGINT signal, which is done by pressing Ctrl+C in a terminal window.

The descriptions above about how to start, configure, and stop the server capture the full extent of the functionality of jwebserver. The server is minimal yet configurable enough to cover common use cases in web development and web services testing, as well as for file sharing or browsing across systems.

While the jwebserver tool certainly comes in handy in many scenarios, what if you want to use the components of the Simple Web Server with existing code or further customize them? That’s where a set of new API points come in.

The new com.sun.net.httpserver API points

To bridge the gap between the simplicity of the command-line tool and the write-everything-yourself approach of the com.sun.net.httpserver API, JEP 408 introduces a new set of API points for server creation and customization. (The com.sun.net.httpserver package has been included in the JDK since 2006.)

The new class SimpleFileServer offers the key components of the server via three static methods. These methods allow you to retrieve a server instance, a file handler, or an output filter in a straightforward fashion and then custom tailor or combine those functions with existing code as needed.

Retrieving a server instance. The createFileServer method returns a static file server that’s configured with a bind address and port, a root directory to be served, and an output level. The returned server can be started or configured further, as follows.

Note: The source code examples in this article use jshell, Java’s convenient read-eval-print loop (REPL) shell.

jshell> import com.sun.net.httpserver.*;

jshell> var server = SimpleFileServer.createFileServer(new InetSocketAddress(8080), 

   ...> Path.of("/some/path"), SimpleFileServer.OutputLevel.VERBOSE);

jshell> server.start()

Retrieving a file handler instance. The createFileHandler method returns a file handler that serves a given root directory that can be added to a new or existing server. Note the overloaded HttpServer::create method, which is a nice addition to the API that allows you to initialize a server with a handler in one call.

jshell> var handler = SimpleFileServer.createFileHandler(Path.of("/some/path"));

jshell> var server = HttpServer.create(new InetSocketAddress(8080), 

   ...> 10, "/somecontext/", handler); 

jshell> server.start();

Retrieving an output filter. The createOutputFilter method takes an output stream and an output level and returns a logging filter that can be added to an existing server.

jshell> var filter = SimpleFileServer.createOutputFilter(System.out, 

   ...> SimpleFileServer.OutputLevel.INFO);

jshell> var server = HttpServer.create(new InetSocketAddress(8080), 

   ...> 10, "/somecontext/", new SomeHandler(), filter); 

jshell> server.start();

Doubling down on handlers

Now that the server components can easily be retrieved, the Simple Web Server team wanted to enhance the composability of the existing com.sun.net.httpserver API. In particular the team doubled down on the creation and combination of handlers, which are at the core of the request handling logic.

For this, the team introduced the HttpHandlers class, which comes with two new methods.

◉ HttpHandlers::of returns a canned response handler with fixed state, namely a status code, a set of headers, and a response body.

◉ HttpHandlers::handleOrElse combines two handlers on a condition.

Here’s an example of how they can be used.

jshell> Predicate<Request> IS_GET = r -> r.getRequestMethod().equals("GET");

jshell> var jsonHandler = HttpHandlers.of(200, 

   ...> Headers.of("Content-Type", "application/json"),

   ...> Files.readString(Path.of("some.json")));

jshell> var notAllowedHandler = HttpHandlers.of(405, Headers.of("Allow", "GET"), "");

jshell> var handler = HttpHandlers.handleOrElse(IS_GET, jsonHandler, notAllowedHandler);

In the example above, the jsonHandler is a canned response handler that always returns a status code of 200, the given header, and the content of a given JSON file as the response body. The notAllowedHandler, on the other hand, always returns a 405 response. Based on the predicate, the combined handler checks the request method of an incoming request and forwards the request to the appropriate handler.

Taken together, these two new methods make a neat pair for writing custom-tailored request-handling logic, and thus they can help you tackle various testing and debugging scenarios.

Adapting request state

Speaking of testing and debugging, there are other times when you might want to inspect and potentially adapt certain properties of a request before handling it.

To support this, you can use Filter.adaptRequest, a method that returns a preprocessing filter that can read and optionally change the URI, the method, or the headers of a request.

jshell> var filter = Filter.adaptRequest("Add Foo header", 

   ...> request -> request.with("Foo", List.of("Bar"))); 

jshell> var server = HttpServer.create(new InetSocketAddress(8080), 10, "/", someHandler, 

   ...> filter); 

jshell> server.start();

In the example above, the filter adds a Foo header to each incoming request before the someHandler gets a go at it. This functionality enables the straightforward adaptation and extension of an existing handler.

Going even further

With these new API points at hand, other less-obvious yet interesting applications for the Simple Web Server are within reach. For example, the Simple File Server API can be used for creating an in-memory file server, which serves a .zip file system or a Java runtime directory. For details and code snippets of these and other examples, see my recent article, "Working with the Simple Web Server."

Source: oracle.com

Wednesday, April 13, 2022

5 Best Java Frameworks For Microservices

Microservices are widely used to build complex, multi-functional applications by composing many small pieces into a single system. What many of us may not realize is that microservices is an approach to crafting a single application as a set of small services, where each service runs in its own process.


In other words, microservices follow a service-oriented architecture that assembles an application from small pieces rather than building it as one monolithic unit. Many organizations and developers still love working this way because it lets teams work independently. A primary reason is that the dependency on a single programming language effectively ends here: each service can be written in whatever language fits it best. This keeps costs under control and improves efficiency.

So, let’s get started with the 5 Best Java Frameworks For Microservices.

1. Spring Boot


Spring Boot is possibly one of the finest and easiest-to-use Java frameworks for developing microservices. It is open source and loaded with features and functionality, and it can be deployed on many platforms (such as Docker). It is backed by a vast community of developers, so you can be sure of getting your questions answered. It also provides useful built-in functionality such as security, auto-configuration, and starter dependencies (which speed up application development), along with a list of other services. Here are some key features of the framework, followed by a minimal sketch of a Spring Boot application:

◉ Spring Boot helps in monitoring multiple components simultaneously.
◉ It supports high throughput and efficiency through load balancing, which distributes traffic across service instances.
◉ It also integrates with distributed messaging systems that follow the Pub-Sub (publish-subscribe) model.
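
The following is a minimal sketch of a Spring Boot REST service, not a full application: the class name and endpoint are hypothetical, and the spring-boot-starter-web dependency is assumed to be on the classpath.

// Minimal Spring Boot sketch (hypothetical names; assumes spring-boot-starter-web).
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
@RestController
public class HelloApplication {

    public static void main(String[] args) {
        // Starts the embedded server and auto-configures the application.
        SpringApplication.run(HelloApplication.class, args);
    }

    @GetMapping("/hello")
    public String hello() {
        // Handles GET /hello and returns a plain-text response.
        return "Hello from Spring Boot";
    }
}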

2. Quarkus


Quarkus was introduced to build modern, high-performance Java applications that meet the expectations of a cloud-native environment. It is a full-stack, Kubernetes-native platform tailored for the JVM (Java Virtual Machine) and for containers, so applications can thrive in cloud or serverless environments. It was designed around familiar Java libraries and frameworks such as Eclipse MicroProfile, Kafka, and Spring (through compatibility extensions), and it provides the right contextual information to GraalVM (a high-performance JDK distribution) to support native compilation of Java applications. Working with Quarkus can be real fun, and it offers some other key features, which include the following (a minimal sketch follows the list):

◉ It is designed for environments with tight resource budgets, offering first-class support for GraalVM native images and moving much of its metadata processing to build time.

◉ Quarkus can adapt to the development pattern your project already follows, which suits developers who don't like switching their way of working, and this also makes it a good fit for today's serverless architectures.

◉ Quarkus also offers a single, unified configuration system: with one configuration file, a Quarkus application and every one of its extensions can be configured.
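
As a rough illustration, here is a minimal sketch of a Quarkus REST resource. Quarkus builds its REST layer on JAX-RS, so the annotations below come from jakarta.ws.rs in current Quarkus versions (javax.ws.rs in older ones); the class and path names are hypothetical.

// Minimal Quarkus REST resource sketch (hypothetical names; standard JAX-RS annotations).
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;

@Path("/hello")
public class HelloResource {

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String hello() {
        // Quarkus discovers this resource at build time and serves it at /hello.
        return "Hello from Quarkus";
    }
}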

3. Micronaut


If you're planning to build serverless applications, for example on AWS, then Micronaut is a strong answer: it is a full-stack, JVM-based framework purely designed for creating serverless microservice applications. The best part of using Micronaut is that you don't need to worry much about startup time or memory consumption; it stays fast and lean regardless of how large the codebase grows. It's not wrong to say that Micronaut is a truly modern developer toolkit, designed for today's developers, that helps with dependency injection, AOP, configuration management, and much more, and that is what makes it a simple yet elegant Java framework. Below are a few more important points that may help you understand it (with a minimal sketch after the list):

◉ It offers both an HTTP client and an HTTP server built on Netty (a client-server networking framework), along with an extensive range of tools suited to cloud environments.

◉ It also provides AOT (ahead-of-time) compilation, that is, compiling code before the program executes rather than at runtime, which promotes low memory usage and suits IoT and serverless apps, among others.

◉ Micronaut supports building applications in Java, Groovy, and Kotlin.
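
For illustration, here is a minimal sketch of a Micronaut HTTP controller; the class name and route are hypothetical, and the micronaut-http-server-netty dependency is assumed.

// Minimal Micronaut controller sketch (hypothetical names; assumes micronaut-http-server-netty).
import io.micronaut.http.MediaType;
import io.micronaut.http.annotation.Controller;
import io.micronaut.http.annotation.Get;

@Controller("/hello")
public class HelloController {

    @Get(produces = MediaType.TEXT_PLAIN)
    public String hello() {
        // Routing and dependency injection are resolved ahead of time, at compile time.
        return "Hello from Micronaut";
    }
}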

4. Eclipse Vert.x


Formed under the Eclipse Foundation, Eclipse Vert.x is a great fit for building reactive applications on the JVM (Java Virtual Machine). Vert.x also works well in all kinds of constrained environments, such as virtual machines and containers. Rather than a framework, Vert.x is a toolkit: it provides highly flexible building blocks that you can combine with your usual libraries in whatever way suits your components, which is what makes it interesting to work with in a project. There are, however, certain key points to understand beforehand (a minimal sketch follows the list):

◉ Developers can use multiple languages in one project, because Vert.x exposes its core APIs for writing asynchronous networked applications in a polyglot fashion.

◉ Vert.x uses an event-loop I/O threading model, so a developer can write code in a single-threaded style while Vert.x takes care of the concurrency.

◉ It scales well on small or medium-sized hardware, handling large numbers of concurrent connections with a small number of kernel threads.
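
As a rough sketch, here is a minimal Vert.x verticle that starts an HTTP server; the class name and port are hypothetical, and the vertx-core dependency is assumed to be on the classpath.

// Minimal Vert.x verticle sketch (hypothetical names; assumes vertx-core).
import io.vertx.core.AbstractVerticle;
import io.vertx.core.Vertx;

public class HelloVerticle extends AbstractVerticle {

    @Override
    public void start() {
        // The request handler runs on the event loop, so it must not block.
        vertx.createHttpServer()
             .requestHandler(req -> req.response().end("Hello from Vert.x"))
             .listen(8080);
    }

    public static void main(String[] args) {
        // Deploy the verticle on a Vert.x instance.
        Vertx.vertx().deployVerticle(new HelloVerticle());
    }
}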

5. Ballerina


To be specific, Ballerina is not a framework but a programming language designed for distributed computing: it is used to code distributed applications and lets programmers develop custom network applications with an open-source language. Ballerina is also a cloud-native language that fits comfortably alongside JVM frameworks, and it includes annotations for Kubernetes and Docker that help developers build applications in a low-code fashion. Some other features of Ballerina are as follows:

◉ It enables language-integrated queries with the help of declarative processing of JSON, tabular data, and XML.

◉ Ballerina is highly reliable: its readable syntax makes error handling and safe concurrency straightforward.

◉ It also offers both a textual and a graphical syntax, the latter based on sequence diagrams.

Frameworks exist to extend what you can build and to deliver a richer user experience than ever. The idea is simple: look at your requirements and the features you need, pick the best fit, and start using it in your project.

Source: geeksforgeeks.org

Monday, April 11, 2022

Top 10 Java Frameworks To Learn in 2022


From basic "Hello World" code to enterprise application-level code, the common requirement is choosing the most suitable language, and Java stands on that list with its head held high because it is robust, platform-independent, and secure. Tech giants such as Google, Netflix, and Amazon use Java as one of their development languages. Writing Java code from scratch to build enterprise-level applications, be it in finance, banking, e-commerce, or IT, can cause a lot of manual overhead, because such an application is made up of many modules.

Here's where Java frameworks come to the rescue. A Java framework is Java-specific pre-written code, a template of predefined classes and methods used to perform common input, processing, and output tasks. Developers can reuse this template and fill it in as suitable for the application they are building.

Let’s look into the top Java frameworks that are a good choice to learn in 2022 if you are a Java developer or aspire to be one.  

1. Spring

Spring is an open-source framework that can be used by any Java application. It eliminates the problem of tight coupling among modules by promoting loose coupling, so a modification in one class doesn't force changes in other classes. Spring is a package in itself, supporting features such as configuration and security, and it has an active community, so a lot of information about it can be found online.

The following features make Spring unique and leave developers impressed (a minimal sketch follows the list):

◉ Inversion of control (IoC) – Object coupling is done at runtime, rather than compile time.

◉ Dependency Injection – It works together with IoC and enables loose coupling by supplying an object's dependencies from the outside instead of having the object create them itself.
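
To make this concrete, here is a minimal sketch of constructor-based dependency injection with Spring; the package and class names are hypothetical, and spring-context is assumed to be on the classpath.

// Minimal constructor-injection sketch (hypothetical names; assumes spring-context).
package com.example.demo;

import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.stereotype.Component;
import org.springframework.stereotype.Service;

@Service
class GreetingService {
    String greet() {
        return "Hello from Spring";
    }
}

@Component
class GreetingPrinter {
    private final GreetingService service;

    // The IoC container supplies the GreetingService instance at runtime.
    GreetingPrinter(GreetingService service) {
        this.service = service;
    }

    void print() {
        System.out.println(service.greet());
    }
}

@Configuration
@ComponentScan
public class SpringDemo {
    public static void main(String[] args) {
        // Scans the package, wires the beans, and resolves GreetingPrinter's dependency.
        AnnotationConfigApplicationContext context = new AnnotationConfigApplicationContext(SpringDemo.class);
        context.getBean(GreetingPrinter.class).print();
        context.close();
    }
}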

2. Hibernate

Hibernate is an Object-Relational Mapping (ORM) framework that makes handling databases with Java easier by eliminating the problems faced with plain JDBC, which does not support object-level relationships, a key concept when developing applications. Queries in Hibernate are written in HQL (Hibernate Query Language). Hibernate maps database tables to corresponding Java classes directly, creating an abstraction layer that keeps the code loosely coupled. With Hibernate, developers do not have to think about establishing database connections or performing low-level operations; Hibernate takes care of that itself. The hibernate.cfg.xml file contains the database configuration and mapping information.
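
As a rough illustration, here is a minimal sketch of a Hibernate entity and a save operation. The entity and property names are hypothetical, a valid hibernate.cfg.xml is assumed to be on the classpath, and the annotations come from jakarta.persistence in Hibernate 6 (javax.persistence in Hibernate 5 and earlier).

// Minimal Hibernate sketch (hypothetical names; assumes hibernate.cfg.xml is configured).
import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.Id;
import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;
import org.hibernate.cfg.Configuration;

@Entity
class Book {
    @Id
    @GeneratedValue
    Long id;
    String title;
}

public class HibernateDemo {
    public static void main(String[] args) {
        // Reads hibernate.cfg.xml and registers the mapped entity class.
        SessionFactory factory = new Configuration().configure()
                .addAnnotatedClass(Book.class)
                .buildSessionFactory();

        Session session = factory.openSession();
        Transaction tx = session.beginTransaction();

        Book book = new Book();
        book.title = "Hibernate in Practice";
        session.persist(book);   // Hibernate generates and runs the INSERT statement.

        tx.commit();
        session.close();
        factory.close();
    }
}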

3. Struts

Apache Struts is an open-source framework for web applications. It follows MVC (Model-View-Controller), which keeps applications tidy by separating model, view, and controller and binding the three together through a configuration file (struts-config.xml in Struts 1, struts.xml in Struts 2). It has two versions, Struts 1 and Struts 2; Struts 2 is the upgraded version and the one companies generally prefer. It supports AJAX, REST, and JSON through its various plugins, provides an easy setup, and is more flexible than a traditional MVC architecture, which makes it a good choice for web developers.
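
For illustration, here is a minimal sketch of a Struts 2 action class; the class name and message are hypothetical, and the result returned by execute() is assumed to be mapped to a view in struts.xml.

// Minimal Struts 2 action sketch (hypothetical names; result mapping lives in struts.xml).
import com.opensymphony.xwork2.ActionSupport;

public class HelloAction extends ActionSupport {

    private String message;

    @Override
    public String execute() {
        message = "Hello from Struts 2";
        return SUCCESS;  // The framework forwards to the view mapped to "success".
    }

    public String getMessage() {
        // The view (for example, a JSP) reads this property via the value stack.
        return message;
    }
}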

4. Google Web Toolkit (GWT)

Do we really have to guess who developed the Google Web Toolkit? It's Google, as the name suggests. Google products such as AdSense, Blogger, and Google Wallet have been developed using GWT. The key reason GWT is considered developer-friendly is that it makes working with Google APIs easier, and its ability to compile Java code into browser-specific JavaScript makes it stand out among Java frameworks. Beautiful and elegant internet applications can be built with it in pure Java.
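
As a rough sketch, here is a minimal GWT entry point; the class name is hypothetical, and the module XML and host HTML page that GWT also requires are omitted.

// Minimal GWT entry-point sketch (hypothetical names; module XML and host page omitted).
import com.google.gwt.core.client.EntryPoint;
import com.google.gwt.user.client.Window;
import com.google.gwt.user.client.ui.Button;
import com.google.gwt.user.client.ui.RootPanel;

public class HelloGwt implements EntryPoint {

    @Override
    public void onModuleLoad() {
        // This Java code is compiled by GWT into JavaScript that runs in the browser.
        Button button = new Button("Click me", event -> Window.alert("Hello from GWT"));
        RootPanel.get().add(button);
    }
}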

5. JavaServer Faces (JSF)

JSF was developed by Oracle. It is a part of Java Platform, Enterprise Edition, and is therefore a standard defined through the Java Community Process. JSF is based on the MVC architecture and is a component-based UI framework that lets developers drag and drop UI components without deep knowledge of technologies such as JavaScript. It removes the need to introduce a whole new framework, because existing backend Java code can be extended with a web interface. Facelets is the templating system in JSF and allows integration with AJAX-enabled components.

6. Dropwizard

"Does exactly what it says on the tin" fits Dropwizard: it is a wizard. It gets applications running very fast thanks to its extensive support for logging, application metrics, and advanced configuration. It bundles many libraries from the Java ecosystem into a single framework. The rapid development of RESTful APIs and support for rapid prototyping make Dropwizard feel magical when its features are seen all together.
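
As a rough sketch, here is a minimal Dropwizard application with one REST resource; the class names are hypothetical, and the Dropwizard 2.x package layout (io.dropwizard.*, javax.ws.rs) is assumed, since newer versions relocate some of these packages.

// Minimal Dropwizard sketch (hypothetical names; assumes Dropwizard 2.x packages).
import io.dropwizard.Application;
import io.dropwizard.Configuration;
import io.dropwizard.setup.Environment;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/hello")
@Produces(MediaType.TEXT_PLAIN)
class HelloResource {
    @GET
    public String hello() {
        return "Hello from Dropwizard";
    }
}

public class HelloApplication extends Application<Configuration> {

    public static void main(String[] args) throws Exception {
        // Typically started with the "server" command and a YAML configuration file.
        new HelloApplication().run(args);
    }

    @Override
    public void run(Configuration configuration, Environment environment) {
        // Register the JAX-RS resource with the embedded Jersey container.
        environment.jersey().register(new HelloResource());
    }
}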

7. Grails

Grails is beginner-friendly. It is written in the Groovy programming language, which is similar to Java but has additional features. Grails is based on the MVC pattern, runs on the Java platform, and is compatible with Java syntax. GSP (Groovy Server Pages) is the rendering element in Grails, and GORM is its ORM implementation. What if we mix Groovy and Java code? It will run fine; the ability to integrate Grails and Java code is one of its highlights.

8. Vaadin

Vaadin has a whole new way of working: it eliminates the worry of client-server communication and routing so that you can concentrate on the presentation layer alone. It is open source, and its components can be customized to create highly versatile code. You can write your application completely in Java, because the UI components of Vaadin take care of the browser activities. One of the key features of Vaadin is that it is a cross-platform framework, so you can migrate your code to a different platform.
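
As a rough sketch, here is a minimal Vaadin Flow view; the class name and route are hypothetical, and a standard Vaadin Flow project setup is assumed.

// Minimal Vaadin Flow view sketch (hypothetical names; assumes a standard Vaadin project setup).
import com.vaadin.flow.component.button.Button;
import com.vaadin.flow.component.notification.Notification;
import com.vaadin.flow.component.orderedlayout.VerticalLayout;
import com.vaadin.flow.router.Route;

@Route("hello")
public class HelloView extends VerticalLayout {

    public HelloView() {
        // The UI is written entirely in Java; Vaadin renders it in the browser
        // and handles the client-server communication.
        add(new Button("Say hello", click -> Notification.show("Hello from Vaadin")));
    }
}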

9. Blade

Looking for the simplest Java framework? Say Hi to Blade!  

Blade is a lightweight, full-stack web framework based on the MVC architecture. It works on a module basis, which helps with debugging because the application is divided into modules. Blade works with JSON and is used for full-stack web development.

10. Play

The Play framework follows convention over configuration. One of its unique traits is that you can write Play applications not only in Java but in Scala as well. Play does not necessarily follow J2EE web standards. It makes integration with Maven projects easier, creates simple JAR files, offers support for both IntelliJ IDEA and Eclipse, and provides built-in relational database access libraries. It includes many features that make it a favorite of many Java developers.
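
For illustration, here is a minimal sketch of a Play controller action written in Java; the class name is hypothetical, and a matching route entry in the conf/routes file is assumed.

// Minimal Play controller sketch (hypothetical names; a matching entry in conf/routes is assumed).
import play.mvc.Controller;
import play.mvc.Result;

public class HelloController extends Controller {

    public Result hello() {
        // ok(...) builds an HTTP 200 response; Play routes requests here via conf/routes.
        return ok("Hello from Play");
    }
}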

With most sectors and businesses having gone virtual, web applications can be developed seamlessly with the help of any of the above Java frameworks in 2022. Each one has its own strengths, so you can choose as per your requirements. Frameworks have made development with languages easy, and the Java frameworks above have made it even easier with Java.

Source: geeksforgeeks.org