Monday, February 27, 2023

Curly Braces #9: Was Fred Brooks wrong about late software projects?


After more than 30 years of professional software development, I’ve learned that building software takes not only a lot of code but also a lot of communication. This is what the late Fred Brooks described as a problem in his famous book, The Mythical Man-Month, especially in regard to adding more software developers to an already late software project. The number of communication paths between people grows rapidly (with n people, there are n(n-1)/2 possible channels) to the point where it impedes progress, and the project becomes increasingly late with each person added.


That’s where Brooks’ Law comes in, which clearly states: “Adding manpower to a late software project makes it later.”

Things are rarely this straightforward. In my opinion, it may have been inaccurate for Brooks to talk about this human-resource paradox as a generalization. It was a problem specific to his project, his team’s architecture, and his team’s choice of languages and tools. He also assumed that all developers are equal and that tasks cannot be easily worked on independently. Even so, over time, studies on many large software projects have proven him correct—enough that Brooks’ Law is, well, Brooks’ Law.

If you buy into the argument that adding people to an already late software project delays it even more, then what can you do to speed things up? It turns out there are some things you can do. A few of the following suggestions are from Brooks himself and a few are my own (for what they’re worth).

Use the Bermuda Plan


To speed up a software project, the Bermuda Plan, often discussed alongside Brooks’ Law, may sound cryptic but it’s very simple: Send most of your developers on a nice vacation and let your top people do all the work unabated. That’s not a formula, but it’s a guideline that makes sense if communication and distraction are the main impediments to progress. It may not be practical, however.

Want a more practical version? Well, you can move developers to critical nondevelopment tasks. This reduces communication overhead and puts those developers on tasks that help the remaining developers become more productive. For example, developers can be assigned to

◉ Improve deployment processes (DevOps).
◉ Improve system architecture to support parallel programming teams.
◉ Implement or enhance automated testing.
◉ Build out lab resources to reduce hardware bottlenecks.
◉ Build or identify tools to help coding and debugging.
◉ Improve documentation to help get other developers up to speed.

You might demoralize developers who are removed from mainline development tasks, and you might also create more costs related to additional release cycles, but these downsides can be managed and controlled.

Design the system with proper segmentation


To minimize intercommunication and interdependencies between software teams, carefully segment your system design to allow teams to work independently—that is, in parallel.

For example, a single team working on a client/server application will require a lot of coordination as they work on the message-by-message communication between the two components of the application.

However, if they first decide to use an independent communication protocol (such as HTTP), the client and server teams can work almost completely independently, as long as each adheres to the communication specification. I would suggest with confidence that you could develop a new web browser today without speaking to a single web server developer.
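To make this concrete, here is a minimal sketch of a client written purely against the HTTP specification, using the built-in java.net.http API available since Java 11; the URL is only a placeholder. The client needs no knowledge of how any particular server is implemented, which is exactly the independence described above.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ProtocolOnlyClient {
  public static void main(String[] args) throws Exception {
    HttpClient client = HttpClient.newHttpClient();
    // The request is described entirely in terms of the HTTP spec.
    HttpRequest request = HttpRequest.newBuilder(URI.create("https://example.com/"))
        .GET()
        .build();
    HttpResponse<String> response =
        client.send(request, HttpResponse.BodyHandlers.ofString());
    System.out.println(response.statusCode());   // any spec-compliant server will do
  }
}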

Leverage pair programming


Pair programming, where two developers work side by side on a single task, can reduce communication needs by half (or more). Instead of each person working on different tasks and needing to communicate across the organization, developers are paired, reducing cross talk. Pair programming helps to further reduce communication problems because knowledge sharing occurs organically, especially when you pair a newer developer with a more experienced one.

The benefits from pair programming often improve productivity for each developer, and for the team as a whole, for the following reasons:

◉ Individual programmers can focus on their strengths as part of a pair.
◉ The organization is often more resilient against employee turnover.
◉ There is less schedule impact when people need time off.
◉ Having multiple people working on the same problem and code tends to result in fewer defects to fix later. This is in the spirit of Linus’ Law (“given enough eyeballs, all bugs are shallow”), named after Linus Torvalds of Linux fame.
◉ With additional people participating in the same conversations, misunderstandings are reduced and less overall communication is needed because there is less rehashing.
◉ Best practices and time-saving techniques are easily shared and spread throughout the team.
◉ With rubber ducking, as described in The Pragmatic Programmer by Andrew Hunt and David Thomas, debugging is improved, mainly due to human nature: One person explaining something to another helps to uncover issues very quickly.
◉ When people work together, they tend to stay more focused, reinforce each other’s confidence and strengths, and generally desire to be more productive so as to not let the other person down.

Add more people


Yes, you read that right. I’m suggesting adding even more people to a late project to help speed it up. That completely violates Brooks’ Law. But it can help in situations where the added people need little training and overhead: for example, if they are technology specialists, proven consultants with exceptional skills and expertise, nondevelopers who have exceptional communication skills, or developers who have experience building similar systems.

In the extreme, the use of competition between internal groups can lead to seemingly miraculous results. You can see examples in Tracy Kidder’s The Soul of a New Machine or in other legendary large-scale development efforts written about in books or online articles.

In my experience, an acquisition can make a difference as well. I’ve witnessed multiple examples where a project’s success was so critical that a decision was made to acquire a company with a similar product or technology to enable progress. An acquisition can have several side effects that genuinely help:

◉ An acquisition can serve as a catalyst for renewed hope and energy that reinvigorates the original team.
◉ There’s an infusion of fresh talent that was not hired by the original team.
◉ Unintended but healthy competition can result.
◉ A new camaraderie between developers can improve teamwork.
◉ New managers who are unafraid to ask questions and suggest changes are infused into the project.
◉ New design patterns and ways of thinking can unlock unrealized time savings.
◉ Additional thought leadership, instead of added developers, can help increase a project’s velocity.

Extend the schedule, if you can


That suggestion may sound like a snarky comment, but in reality, the original schedule may simply be unattainable, as Brooks’ Law also points out. Scheduling mistakes often account for late projects, which is an issue that extends beyond software development. Just ask any homeowner who’s remodeling, and they’re sure to agree.

Whether or not you can extend the schedule, progress can often be improved by performing tasks more often and working out the bottlenecks. This is in line with the Agile development process, as well as with DevOps, where you release smaller increments more often and get better at doing that. If you can define an incremental release plan where each phase has its own schedule and then divide groups of developers across the phases, you may be able to achieve a higher degree of independence and parallelism.

Use Java!


I would never suggest one language is more productive than another, but Java is a complete platform. Java has a robust virtual machine that abstracts the hardware details and a mature set of tools that help in every facet of development and debugging. In addition, there’s a rich set of commercial and open source software to add value to your projects, powerful IDEs, and a vast community to turn to for help.

Source: oracle.com

Friday, February 24, 2023

Quiz yourself: How does a Java finally block handle an exception?


You’ll want to know the difference between abrupt completion and normal completion.


Given these two exception classes

class BatteryException extends Exception { }
class FuelException extends Exception { }

And the following Car class

public class Car {
  public String checkBattery() throws BatteryException {
    // implementation
  }
  public String checkFuel() throws FuelException {
    // implementation
  }
  public String start() {
    try {
      checkBattery();
      checkFuel();
    } catch (BatteryException be) {
      return "BadBattery";
    } finally {
      return "";
    }
  }
}

Which statement is correct about the start() method? Choose one.

A. It may return BadBattery or an empty string.
B. It can only return an empty string.
C. It may throw FuelException.
D. It will cause a compilation error because FuelException is not handled.

Answer. This question investigates a less-frequently used behavior of a finally block.

Looking at the code, notice that there are two domain-specific exceptions related to the battery and the fuel. They’re direct subtypes of Exception, which means that they are checked exceptions. If such an exception might be thrown by a method, it must be declared in the throws clause of that method.

The usual way to handle an exception inside a method is to use a try-catch structure and provide a catch block that names the exception, or a parent type of that exception. In this code, there’s a catch block for the BatteryException but not for the FuelException. Given that the method does not declare throws FuelException, you might expect the compiler to refuse to compile the code.

However, if you think a bit deeper on this, you should realize that the finally block executes return "". This does exactly what it says: No matter how the code reaches the finally block, the result is that the method will return an empty string. No exceptions will be thrown; indeed, any FuelException that might arise will simply be abandoned. In other words, because FuelException is never thrown by the method, it’s not necessary to declare it in a throws clause.
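Here is a minimal standalone sketch (separate from the quiz code) of the same rule: an unconditional return in a finally block means the checked exception can never propagate, so it needs neither a catch clause nor a throws declaration, and it is silently discarded.

public class FinallyDemo {
  static void mightFail() throws Exception {
    throw new Exception("boom");
  }
  static String attempt() {
    try {
      mightFail();           // checked exception, yet neither caught nor declared...
      return "ok";
    } finally {
      return "recovered";    // ...because this return always supersedes it
    }
  }
  public static void main(String[] args) {
    System.out.println(attempt());   // prints "recovered"; no exception escapes
  }
}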

The above examination shows that options C and D are both incorrect, because no FuelException is possible, and the code does not fail to compile due to a missing throws clause.

Digging into this logic a little further, if the code executes return "BadBattery" from inside the catch block, it must execute the finally block before control is ultimately passed to the caller. This too causes the method to return the empty string from inside the finally block. This tells you that option A is also incorrect. Further, when you combine this with the earlier discussion, you should see that the code always returns an empty string; therefore, option B is correct.

It might be of interest to follow up on the qualitative descriptions above with some details from the Java Language Specification. First, you need to understand the meaning of the phrase abrupt completion, which is the topic of section 14.1. This section might be paraphrased as follows: If a region of code runs to its end, it completes normally. If, by contrast, it jumps out of that region without completing all the steps, it completes abruptly.

Note that this doesn’t imply (nor does it exclude) exceptions. In particular, a return statement constitutes an abrupt completion of a method, where running off the end—that is, reaching the closing curly brace of the method—is normal completion. (You should read the specification for a formal, and more complete, description.)

With that in mind, let’s continue looking in the specification and now focus on the behavior of catch and finally. In section 14.20.2 you’ll find the following text:

If the catch block completes normally, then the finally block is executed. Then there is a choice:

◉ If the finally block completes normally, then the try statement completes normally.
◉ If the finally block completes abruptly for any reason, then the try statement completes abruptly for the same reason.

If the catch block completes abruptly for reason R, then the finally block is executed. Then there is a choice:

◉ If the finally block completes normally, then the try statement completes abruptly for reason R.
◉ If the finally block completes abruptly for reason S, then the try statement completes abruptly for reason S (and reason R is discarded).

These paragraphs explain that abrupt completion of the finally block means that the try construct completes abruptly for the same reason. In effect, this supersedes any previous mode of completion and any previous reason for abrupt completion.

Note that execution of a return statement constitutes abrupt completion, as explained in the following sentence from section 14.17:

It can be seen, then, that a return statement always completes abruptly.

You know that the return "" in the finally block is always executed because there are no paths through the method that do not enter the try construct. Putting this together with the specification excerpts above, you can see that the only possible result of executing the method is abrupt completion, which returns an empty string.

Conclusion. The correct answer is option B.

Source: oracle.com

Wednesday, February 22, 2023

Embedded Java: Then and now


Oracle Java SE 8 Embedded is the final major release of the Oracle Java SE Embedded product. Starting with JDK 9, Oracle doesn’t plan to offer a separate Java SE Embedded product download.


Mainstream Java is heading towards version 20, so what’s going on with Java in the embedded space?

First, here’s some history. Java (then called Oak) was initially developed by engineers at Sun Microsystems more than 30 years ago. Originally, Oak was developed to provide an object-oriented programming and runtime environment for embedded systems—independent of the underlying hardware and operating system.

The idea was to create a uniform and portable programming platform with automatic memory management and robust execution of code in a virtual environment. By using a virtual machine (VM), programmers would automatically avoid otherwise common problems such as crashes caused by buffer overflows and faulty pointer arithmetic.

Object-oriented programming for embedded systems was a relatively new idea in the 1990s, and the embedded community was skeptical. The Java programming language was designed to be interpreted during runtime and as such, it was initially intrinsically slow and resource hungry. And even though the first Java release had only eight packages and about 200 classes, it was considered too “heavy” by many hardcore C/C++ and assembly language programmers.

Given the limited processing power and very limited memory availability back then, it was understandable that the advantages of object-oriented programming, portability, and secure execution did not convince many software engineers.

Set-top box manufacturers were among the earliest adopters of Java in the embedded space even before the Java plugin for web browsers opened the door for an entirely new programming model on the desktop.

Applets—which were small, and usually visual, programs written in Java—could be downloaded into a browser and create an interactive user experience that was previously hard to accomplish with HTML and early scripting languages.

Because the Java code was executed in a VM invoked by the browser, applet authors didn’t have to care much about the variety of target operating systems and underlying CPU architectures. Java grew bigger and stronger and conquered the desktop world. Even back then, PCs had serious computing power and memory, and some technology advances such as just-in-time (JIT) compilers helped increase the acceptance of Java as a mainstream programming language for client applications.

Enter J2ME


Even decades ago, programmers and project managers realized the advantages of object-oriented programming and the benefits of the fast-growing class libraries and functionality included in the Java language. But most embedded systems still weren’t powerful enough to host a full desktop Java runtime environment (JRE). The Java stakeholders (Sun Microsystems, IBM, Nokia, RIM, Philips, Siemens, Motorola, and others), organized in the Java Community Process, approved a Java Specification Request, JSR 68, to specify a Java variant specifically designed for embedded use: Java 2 Micro Edition, also known as J2ME.

Subsetted class libraries and small-footprint JVMs opened the door for widespread use of Java in embedded systems. In particular, mobile phones made use of Java with the Mobile Information Device Profile (MIDP), a profile targeted at handheld phones, with elementary graphics capabilities provided by the Limited Connected Device User Interface (LCDUI).

In 2001 MicroDoc (the company we work for) was one of the first companies in Europe to begin working on embedded Java. Initial work was done on the infamous PowerPC Red Box with UNIX, followed by a JVM port to Sun’s ChorusOS microkernel operating system for credit card payment terminals. Many of those terminals are still in operation today with their original VM infrastructure.

More and more embedded systems were integrated into communication networks, and implementing complex networking protocols in C or even assembly language turned out to be complicated and error prone. Java offered an integrated network stack and an automatic software distribution mechanism locally and over the network. And embedded JVMs became available for many operating systems and CPU architectures such as SH-4, PowerPC, ARM, MIPS, and x86.

The adoption of Java in the embedded space was still limited by frequent complaints about poor runtime performance and high memory requirements. But new technologies such as tiered garbage collectors and ahead-of-time (AOT) compilation made the execution of Java code more predictable and faster than ever. And the advent of stronger 32-bit processors and affordable memory opened the way for many high-tech use cases such as automotive head units and Global System for Mobile Communications (GSM) network stations and controllers.

Java takes the lead


Java became the most used programming language according to the TIOBE Index for the first time in 2001, and it stayed on top until 2019. During that period, embedded Java achieved widespread adoption in devices such as telematics units, Blu-ray players, internet routers, and integrated internet edge devices. MicroDoc ported a VM to Windows CE on AMD’s Geode chipset as part of AMD’s 50x15 initiative. The initiative was founded to accelerate access to the internet with very low-cost devices to enable educational and commercial applications online, even in less-developed countries.

That goal was eventually reached with the advent of feature phones and, of course, smartphones. Beyond that, MicroDoc had its first high-volume deployment in the auto industry in 2009. The company’s engineers cooperated closely with a well-known German tier-one supplier, and they created one of the first aftermarket onboard telematics devices for the trucking industry. The platform, based on 32-bit ARM/Linux, was designed to enable track-and-trace services and to enable third-party applications to be deployed.

What followed was a series of automotive engagements with tier-one suppliers and OEMs. MicroDoc provided an advanced runtime platform for automotive head units on a variety of hardware and software architectures. Starting with 32-bit SH-4 on Windows Automotive, the team ported and optimized VMs for Linux on PowerPC, ARM32, and ARM64, which enabled MicroDoc’s customers to deploy their Java-based applications on whatever hardware generation they chose to deploy.

As a kind of niche market supplier for customized Java VMs, MicroDoc was able to work with customers from a variety of industries, including network infrastructure companies, logistics companies, smart-home device manufacturers, and companies in the healthcare sector.

Besides porting JVMs to numerous target devices, MicroDoc also added valuable reusable components to the standard class libraries. These include Java stacks for the use of many variants of OpenGL, libraries for the open standard communication protocol MQTT, device management protocols, and hardened stacks for the Transport Layer Security (TLS) protocol.

Modern embedded Java


As a typed language, Java is still among the most popular languages today, and embedded applications benefit from the integrated security measures in current Java systems. But it looked like the idea of a truly embedded Java came to an end with Java 8.

To maintain the integrity of the Java language and still allow for customizing Java runtimes for embedded systems, Oracle released Oracle Java SE Embedded, which defined three so-called compact profiles that were strict subsets of the desktop class libraries.

This move was needed since Java 8 had become fairly large and contained many features rarely used in embedded systems. The smallest compact profile class libraries have a footprint of less than 14 MB compared to the desktop version libraries, which go above 50 MB.

A configurable JIT compiler and a choice of garbage collectors complemented the embedded version and made it a suitable embedded platform for many industries; for example, some of the world’s largest automakers rely on Java 8 technology for their infotainment and telematics systems.

Because the current release of the platform is Java 19, and Java 20 will be released in March 2023, it is fair to ask why there isn’t a more recent Java embedded version. There are several reasons.

The embedded systems market is very complex. There are hundreds, if not thousands, of different CPU variants and operating system dialects on the potential target devices, and it is extremely expensive to maintain a complex codebase such as a VM on so many platforms. The few profitable high-volume embedded applications, such as smartphones, have turned to open source offerings or focus on different languages.

Therefore, it is hard to justify big investments in many VM platforms. Oracle has reduced the number of embedded platforms it supports. What remains is a few smaller software vendors (such as MicroDoc) who specialize in the customization and optimization of JVMs for niche markets.

Java has a new module system. Java 9 introduced a new module system (known as Project Jigsaw). Java’s monolithic class library was rearchitected to allow separation into functional components that can be added, as needed, for a runtime system. Unused components can be left out—thus, the footprint is reduced. Whether this technology lives up to its promises remains to be decided by embedded systems engineers. Some claim that the unbundling was not done thoroughly enough and the essential core modules needed for every application are still too big.
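As a hedged illustration of the idea (the module name below is hypothetical), a module descriptor for an application that needs only java.base and java.logging could look like this, and jlink could then assemble a runtime image containing only those platform modules.

// A trimmed runtime could be produced with, for example:
//   jlink --add-modules java.base,java.logging --output slim-runtime
module com.example.embedded {
    requires java.logging;   // java.base is always required implicitly
}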

Some people complain about performance issues, in particular startup time. When you launch a VM, you load a big piece of software and classes before any user code can be executed. And then the JIT compiler monitors the application and decides when to interrupt the execution of frequently used methods to compile them into machine code. That helps at runtime later, but it also increases the system’s total startup time.

Help comes from the cloud


Cloud computing has become a mainstream business in recent years. Giant server farms host applications for millions of users. Many services offered in the cloud are based on microservice architectures. Microservices are small functional entities that are invoked and immediately suspended after use. The requirements for cloud computing are fairly in line with what’s needed in the embedded space: a small footprint and fast startup.

Even though today’s servers have abundant horsepower and virtually unlimited memory, when millions of users are served at a time, the resources need to be shared among all users, and the fraction available for a single user can become fairly small. Also, users don’t want to wait for many services to start up; they want an immediate response.

Oracle is one of the major cloud providers and has launched a game-changing project to solve these problems: GraalVM.

GraalVM is a portable VM that can be used to execute a variety of programming languages: Java and also Python, R, Ruby, and JavaScript. And GraalVM offers a unique technology called GraalVM Native Image that can be used to compile Java applications directly into a standalone executable file, called a native image, for a target platform.

Using a native image is different from having a JVM execute AOT code. A native image contains only the ready-to-run machine code of the application and a lightweight memory manager for garbage collection. Most of the other heavyweight JVM components are stripped: No interpreter, no JIT compiler, and no class libraries are part of the native image. This leads to a superslim footprint and blindingly fast startup times.

Does that sound like an embedded platform? It does.

GraalVM Native Image is well suited for bringing new and existing Java applications to embedded devices. It offers the full universe of Java advantages and at the same time saves memory and CPU cycles.
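As a minimal sketch, assuming the GraalVM native-image tool is installed, an ordinary Java program such as the one below can be compiled ahead of time into a standalone executable (for example, javac Hello.java followed by native-image Hello); no JVM is needed on the target at run time.

public class Hello {
    public static void main(String[] args) {
        System.out.println("Hello from a native image");   // runs as plain machine code
    }
}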

Oracle is focusing on server-based cloud computing with GraalVM, but there is a growing community working on implementations for embedded use. MicroDoc has entered into a contract with Oracle to bring a commercial license offering for GraalVM to the embedded market.

MicroDoc has already implemented a cross-compiler for GraalVM Native Image compilation that can create executables for previously unsupported platforms that are commonly used in the embedded space (such as 32-bit Linux running on ARM). With 30 years of experience in the field of embedded VMs, MicroDoc can bring GraalVM-based solutions to legacy systems and future architectures as well.

In other words, the story of embedded Java is not over. In fact, it has only just begun.

Source: oracle.com

Monday, February 20, 2023

Announcing OCI File Storage replication

The Oracle Cloud Infrastructure (OCI) File Storage service now supports cloud native asynchronous replication as a feature of our highly available, elastic file system. With the launch of this feature, file system replication is available as a fully managed solution for your enterprise workloads.

What is File Storage replication?


File Storage replication allows you to replicate your source file systems to target file systems in different availability domains. These targets can exist across multiple availability domains within a region or across different regions in your tenancy. For example, an Oracle E-Business Suite (EBS) customer with primary operations in availability domain 1 in Phoenix, AZ, can choose to have their backup or recovery site be in availability domain 2 of Ashburn, VA. This functionality is critical for many customers who need disaster recovery solutions to protect critical business data and adhere to compliance requirements. File Storage replication uses snapshots and clones as some of the building blocks for its replication and disaster recovery architecture.

File Storage replication gives you consistent file system replicas, so your application can use the target file system with full confidence in its consistency. This file-system-level consistency is an industry first for OCI: the underlying replication technology doesn’t rely on block-level replication, unlike the offerings of other hyperscale cloud providers, which don’t provide file system consistency.

With File Storage replication, the source file system can be replicated to multiple target regions simultaneously. You can select a replication interval that meets your business needs. This flexibility helps you meet your compliance and information life cycle requirements with the following use cases:

◉ Geographically dispersed disaster recovery: Failover and failback
◉ Data migration and data mobility: General data movement (copy and backup), snapshots, and read-write file system clones in other availability domains or regions

Understanding File Storage replication concepts


A replication relationship is established between a file system in the primary (source) region and a file system in the secondary (target) or recovery region. The replication relationship is represented by replication resources, which are tracked by unique Oracle Cloud identifiers (OCIDs) in the source and target regions. Source and target file systems don’t necessarily have to be in different regions. They can be in different availability domains within the same region.

The initial data transfer from the source to the target file system is called the base copy. When the base copy is complete, periodic system-driven snapshots are taken on the source, and the incremental data are securely transferred over to the target file system. These increments are called delta copies. The base copy, snapshots, and delta copies all happen without any intervention from the user.

The frequency of the delta copy is controlled by the replication interval specified by you. For convenience, the replication feature assesses your file system and recommends an appropriate replication interval. You can monitor the health, progress, and performance of the replication by using metrics, alarms, and notifications.

File Storage replication is asynchronous in nature. The source and target file systems play active and passive roles, respectively. You can actively use the source file system during replication. The data on the target file system is accessible only when the replication relationship is ended. Alternatively, you can create a clone from a snapshot in the target file system and use that clone with your application.


Get started


With two clicks, you can get replication going! Head over to the Oracle Cloud Console and select the file system that you want to replicate. In the Resources panel, click the Replication link. Then, click the Create Replication button, fill out a few fields, and you’re on your way.


Like any other File Storage feature, you can also use the OCI command line interface (CLI), application programming interface (API), or the software development kit (SDK) to create and manage replications. You can also set up replication using Terraform (resource manager).

When you put together your disaster recovery solution using File Storage replication, you need to understand the starting sizes, the rates of change to your file systems, and the network bandwidth between the source and target regions. For large or rapidly changing file systems, you might find that the replication interval currently supportable is beyond your recovery point objectives.

Replication has a built-in estimator tool that considers these factors and helps you arrive at the recommended replication interval for the target region. It also estimates the completion time for the base copy. With replication metrics and OCI alarms, the replication estimator feature enables you to plan and monitor your recovery point objectives.

Like other File Storage features, using replication has no extra cost. However, as a file system user, you’re billed for the storage that you use. So, when you replicate a file system, you pay for the storage used for the source and the target file systems. If you’re replicating between regions, a common use case for disaster recovery considerations, you’re also charged for the outbound data transfer between regions.

Source: oracle.com

Saturday, February 18, 2023

Ace the 1Z0-819 Java SE 11 Developer Exam: Your Step-by-Step Guide

Java is a powerful and popular programming language used by developers worldwide. As the language evolves, so do the certifications and exams that certify developers' skills and knowledge. The 1Z0-819 exam is a new Java SE 11 Developer exam that tests your proficiency in Java SE 11. Passing this exam can open up new career opportunities and demonstrate your expertise to employers. This article will provide a step-by-step guide to clearing the 1Z0-819 exam, including essential concepts, study resources, and tips.

Understanding the 1Z0-819 Exam

Before diving into how to pass the 1Z0-819 exam, it's essential to understand what this exam tests and how it's structured. The 1Z0-819 exam tests your proficiency in Java SE 11, including core language features, APIs, and libraries. The exam consists of 80 multiple-choice questions and lasts for 180 minutes. To pass the exam, you must score 63% or higher.

Step 1: Review the Exam Objectives

The first step to passing the 1Z0-819 exam is to review the exam objectives. These objectives provide a detailed breakdown of the topics covered on the exam and can help you focus your studying. The exam objectives for the 1Z0-819 exam can be found on the Oracle website.

Step 2: Use Study Resources

To pass the 1Z0-819 exam, you must have a strong understanding of Java SE 11 and be familiar with the exam's structure and format. The following study resources can help you prepare for the exam:
  • Oracle Certified Professional - Java SE 11 Developer Certification Study Guide: This guide covers all the topics on the exam and includes practice questions to test your knowledge.
  • Java SE 11 Documentation: This is an essential resource for understanding the core language features, APIs, and libraries covered on the exam.
  • Practice tests: Taking practice tests can help you get a feel for the format and difficulty of the actual exam. You can find several practice tests online.
  • Study groups: Joining a study group can help you prepare for the 1Z0-819 exam. You can find study groups online or form one with other developers in your area. A study group gives you a place to discuss concepts, ask questions, and share study resources.

Step 3: Study and Practice Coding

Once you have familiarized yourself with the exam objectives and study resources, the next step is to start studying and practicing coding. To improve your coding skills, try to solve coding challenges and exercises, and review the Java SE 11 documentation to ensure that you deeply understand the language.

Step 4: Take 1Z0-819 Practice Tests

Taking practice tests is an essential part of preparing for the 1Z0-819 exam. Practice tests can help you get a feel for the format and difficulty of the actual exam, and they can also help you identify areas where you need to improve your knowledge and skills.

Step 5: Take the Exam

Once you have completed your studying and practice tests, it's time to take the actual exam. Make sure to arrive at the testing center early and bring all the necessary materials, including a valid ID. Read the instructions carefully, and take your time answering the questions. If you get stuck on a question, skip it and return to it later.

It's essential to manage your time wisely during the exam. You have 180 minutes to answer 80 multiple-choice questions, which gives you slightly more than 2 minutes per question. Try not to spend too much time on any one question; if you're unsure of an answer, make an educated guess and move on.

Conclusion

Passing the 1Z0-819 exam can open up new career opportunities and demonstrate your expertise to employers. However, it requires hard work, dedication, and preparation. By understanding the exam objectives, using the right study resources, practicing coding, and taking practice tests, you can greatly increase your chances of passing the exam. Remember to manage your time wisely during the exam and follow the instructions carefully. With the right mindset and approach, you can become a certified Java SE 11 Developer. Good luck!

Friday, February 17, 2023

Java Management Service introduces new Advanced Features for customers and makes Basic Discovery available to everyone

With the latest release of Java Management Service (JMS), Oracle introduces several new advanced features to help administrators gain additional insights into Java workloads. JMS administrators can now use Java Management Service - Fleet Management to:

◉ Analyze the usage of application servers
◉ Identify potential vulnerabilities associated with the Java libraries used by applications
◉ Assess the impact of Oracle JRE and JDK Cryptographic Roadmap changes on their applications
◉ Use Java Flight Recorder to gather application insights
◉ Download and install Oracle Java versions
◉ Remove Oracle Java versions

on Desktops, Servers, or Cloud deployments covered by an Oracle Java SE Subscription or when running on an Oracle Cloud Infrastructure service that permits access to the underlying operating system.

As announced during the JavaOne 2022 Keynote, the Basic Java Management Service Discovery Features that identify Java Runtimes and Oracle JDK usage are now available to everyone, even to users who do not have a Java SE Subscription and are not running on Oracle Cloud Infrastructure.

New Advanced Features


In addition to Java Runtime Lifecycle Management Operations, JMS has introduced more advanced features - Advanced Usage Tracking, Crypto event analysis, and JDK Flight Recording. These new advanced features are currently supported on Linux platforms.

Advanced usage tracking

Basic usage tracking, which relies on the Java usage tracker and file scanning capabilities, helps JMS administrators identify Oracle JDK usage and report OpenJDK distributions. Advanced usage tracking helps identify usage of Java servers and Java libraries.

Scan for Java servers

JMS administrators can use the "Scan for Java servers" operation in Java Management Service - Fleet Management to detect and report usage of application and HTTP servers such as Oracle WebLogic Server, Apache Tomcat, and JBoss. In addition to the versioning info, JMS administrators can also see the applications deployed on these servers and the managed instances on which the servers are deployed.

Java Application Servers


Applications running in each Java Application Server

Scan for Java libraries

The "Scan for Java libraries" operation creates a list of Java libraries used by Java applications (both standalone and those deployed in Java servers) in the fleet. JMS also compares the libraries and versions found against the National Vulnerability Database to help administrators identify applications that should be updated to use newer versions or different libraries.

Scans for advanced usage tracking must be initiated by the JMS administrator and are not performed by default by the JMS agents.

Java libraries detected by JMS in the fleet

Crypto event analysis

Oracle's plans for changes to the security algorithms and associated policies/settings in the Oracle Java Runtime Environment (JRE) and Java SE Development Kit (JDK) are published periodically in the Oracle JRE and JDK Cryptographic Roadmap. To make good use of that information, however, administrators need to know whether any of their Java applications use the algorithms, key lengths, or default values that will change. Some of that information can be hard to know, especially when applications rely on configurations on the servers they connect to.

Using Crypto Event Analysis, administrators get detailed information on which cryptographic algorithms from the Java security libraries are being used. JMS compares the algorithms in use with the planned changes and highlights applications that might be impacted by future changes or by certificates that are about to expire. When applicable, JMS provides recommendations to avoid disruptions.

Results of Crypto event analysis run on a managed instance in the fleet

Please be aware that JMS can only identify cryptographic usage within the JDK libraries. JMS can identify usage of most third-party cryptographic providers but cannot provide details of which algorithms or certificates are being used when relying on third-party cryptographic providers.
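As a purely local, hedged illustration of the kind of inventory such analysis builds on (this is not how JMS works internally), the standard java.security API can list the providers and algorithms available in the running JDK:

import java.security.Provider;
import java.security.Security;

public class CryptoInventory {
  public static void main(String[] args) {
    // Enumerate every installed security provider and the services it offers.
    for (Provider provider : Security.getProviders()) {
      System.out.println(provider.getName() + " " + provider.getVersionStr());
      provider.getServices().forEach(service ->
          System.out.println("  " + service.getType() + ": " + service.getAlgorithm()));
    }
  }
}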

JDK Flight Recording

Administrators can initiate Java Flight Recording on applications reported by JMS using the Run JDK Flight Recorder (JFR) operation in Java Management Service - Fleet Management. JDK Flight Recorder collects diagnostic and profiling data about a running Java application. JMS will initiate the recording and upload the resulting JFR file to the customer’s tenancy, enabling administrators to do their own analysis of the recordings.
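For context, the following hedged sketch shows what a recording looks like when started locally with the standard jdk.jfr API; JMS performs the equivalent remotely and uploads the resulting file to your tenancy. The file name is an arbitrary choice.

import java.nio.file.Path;
import jdk.jfr.Configuration;
import jdk.jfr.Recording;

public class FlightRecorderDemo {
  public static void main(String[] args) throws Exception {
    // Use the JDK's built-in "default" event configuration.
    Configuration config = Configuration.getConfiguration("default");
    try (Recording recording = new Recording(config)) {
      recording.start();
      Thread.sleep(2_000);                    // ...the workload you want to profile...
      recording.stop();
      recording.dump(Path.of("demo.jfr"));    // inspect the file in JDK Mission Control
    }
  }
}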

Initiating Java Flight Recording for an application

Basic Java discovery available for all!


We are excited to announce that Basic Java discovery in JMS is now available to all Java users, whether or not they have a Java SE Subscription or are running on OCI. Basic Discovery allows you to:

◉ View the versions and vendor information of all Java runtimes in your systems
◉ Identify which Oracle Java installations are up to date, and which ones should be updated or upgraded
◉ View which applications run on each Oracle Java runtime

To take advantage of JMS Basic Discovery, administrators need to create an OCI account, go to Java Management Service, and create one or more fleets (to group the managed instances). Once you have created your fleet(s), you install the Java Management Service agent on each system you would like to monitor. The JMS agent scans your systems to find all Java installations and configures usage logging on all Oracle runtimes to start collecting information on which Java applications are using them. All information collected by JMS is stored in your tenancy. Although there is no charge for using JMS, you are responsible for the storage costs for the information collected by the agent (starting at $0.01 per MB per month).

Source: oracle.com

Wednesday, February 15, 2023

Differences Between Oracle JDK and OpenJDK


Java has been one of the most popular programming languages in the world for many years, and for good reason. It is versatile, reliable, and scalable, making it an excellent choice for developing everything from small mobile apps to large enterprise systems. However, when it comes to choosing a Java Development Kit (JDK) for your project, you may be wondering what the differences are between Oracle JDK and OpenJDK. In this article, we will explore the key differences between the two JDKs and help you make an informed decision on which one is right for your project.

What is Oracle JDK?


Oracle JDK is the official implementation of Java Standard Edition (Java SE), developed and maintained by Oracle Corporation. It is the original implementation of Java, and it includes all the features and components required to develop, run, and debug Java applications. Oracle JDK is available under a commercial license, which means that if you want to use it for commercial purposes, you will need to purchase a license from Oracle.

What is OpenJDK?


OpenJDK, on the other hand, is an open-source implementation of Java SE, developed and maintained by the Java community. It is an alternative to Oracle JDK, and it includes all the features and components required to develop, run, and debug Java applications. OpenJDK is available under the GNU General Public License, version 2, with the Classpath Exception, which means that it is free to use for commercial and non-commercial purposes.

Key Differences between Oracle JDK and OpenJDK


1. Licensing

One of the key differences between Oracle JDK and OpenJDK is the licensing. Oracle JDK is available under a commercial license, which means that if you want to use it for commercial purposes, you will need to purchase a license from Oracle. OpenJDK, on the other hand, is available under the GNU General Public License, which means that it is free to use for commercial and non-commercial purposes.

2. Support

Another important difference between Oracle JDK and OpenJDK is the support. Oracle provides commercial support for Oracle JDK, which includes bug fixes, security updates, and technical support. OpenJDK, on the other hand, is community-supported, which means that there is no formal support from any organization. However, many companies and individuals provide community support for OpenJDK, which includes bug fixes, security updates, and technical support.

3. Release Schedule

Oracle JDK and OpenJDK also have different release schedules. Oracle releases a new version of Oracle JDK every six months; long-term support (LTS) releases receive extended support from Oracle, while non-LTS releases are supported only until the next release. OpenJDK, on the other hand, is released by different vendors, each with its own release schedule. Some vendors release a new version of OpenJDK every six months, while others release it every few years. The length of support for each version of OpenJDK also varies depending on the vendor.

4. Features

While both Oracle JDK and OpenJDK include all the features and components required to develop, run, and debug Java applications, there are some differences in the implementation. Oracle JDK historically included proprietary features, such as Java Flight Recorder and Java Mission Control, that were not part of OpenJDK. These features have since been open sourced and are available in OpenJDK builds that include them.
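If you are unsure which runtime a given system is actually using, a small sketch like the following prints the vendor and runtime properties, which typically differ between Oracle JDK and OpenJDK builds:

public class RuntimeInfo {
  public static void main(String[] args) {
    // These standard system properties identify the installed runtime.
    System.out.println("Vendor:  " + System.getProperty("java.vendor"));
    System.out.println("Runtime: " + System.getProperty("java.runtime.name"));
    System.out.println("Version: " + Runtime.version());
  }
}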

Which one should you choose?


Choosing between Oracle JDK and OpenJDK depends on your specific needs and requirements. If you require commercial support, then Oracle JDK may be the better choice for you. If you are looking for a free and open-source alternative, then OpenJDK may be the better choice. It is also worth noting that some third-party vendors provide commercial support for OpenJDK, so you may be able to get the support you need without purchasing a license from Oracle.

Monday, February 13, 2023

Quiz yourself: Splitting Java streams and using escape characters


Test your knowledge of the Pattern class and splitAsStream method


Given the following class

import java.util.Arrays;
import java.util.regex.Pattern;
public class FooBaz {
  static final Pattern PIPE_SPLITTER = Pattern.compile("\\|");
  public static void main(String[] args) {
    System.out.print(doIt("12|11|30"));
  }
  public int doIt(String s) {
    var a = PIPE_SPLITTER.splitAsStream(s)
       .mapToInt(v -> Integer.valueOf(v))
       .mapToObj(v -> new Integer[]{v % 3 == 0 ? 1 : 0, v % 5 == 0 ? 2 : 0, v})
       .reduce(new Integer[]{0, 0, 0}, (i, is) -> new Integer[]{i[0] + is[0], i[1] + is[1], i[2] + is[2]});
    return Arrays.stream(a).mapToInt(Integer::intValue).sum();
  }
}

What is the result? Choose one.

A. 56 is the output.
B. 57 is the output.
C. 58 is the output.
D. 59 is the output.
E. Compilation fails.

Answer. When you see an exam question that has unreasonably complex code, be sure to check for simple things first. You won’t always find the answer there, but if checking the hard stuff is going to take a long time, it’s smart to check the easy stuff first. In this example, the code does not, in fact, compile. The reason is simple: The static main method attempts to call the doIt method without any explicit prefix, and such an invocation can work only for a static doIt() method. However, doIt() is an instance method and in the absence of an explicit prefix, such an invocation will fail. From this you can quickly determine that option E is correct, mark that as your answer, and move on to the next question.

Now that you know the correct answer, let’s make this discussion more interesting by pretending that the doIt() method was static or that an explicit instance prefix was provided for the invocation of the doIt() method.

First, notice that the splitAsStream method splits the string argument "12|11|30" into three text chunks—"12", "11", and "30"—which are then converted to a stream of equivalent primitive int values by the mapToInt operation.

As side notes, observe three things: the use of the Pattern class’s splitAsStream method, the precompilation of the pattern, and the escaping of the vertical bar character in the regular expression pattern.

◉ The splitAsStream method is more direct than the more common approach of extracting items from the source text to an intermediate array using the simple split method and then making a stream from the elements of the array as a second step.

◉ The precompilation of the regular expression pattern makes no difference here, but notice that the pattern is declared as a static final, rather than being embedded in the body of the method. Turning a textual regular expression into the representation that actually performs pattern matching is a fairly CPU-intensive task, so it’s generally a good idea to arrange that a pattern is precompiled in this way just once, rather than referring to it in the string literal form in a way that might involve it being compiled each time a loop executes that code.

◉ Note the nature of the regular expression literal. The simple vertical bar (or pipe) character represents an OR operation and must be escaped. However, a single backslash would be an attempt to escape the vertical bar in the parsing of the string literal, which is probably not what you want. You need to make a literal containing the character sequence “backslash, vertical bar.” Because backslash is itself the escape character, it must be escaped, so two backslashes in the source code make one in the binary code, which is what’s desired.

Going back to the operation of this stream, the three int values are mapped to a stream of Integer arrays, containing the following data:

[1, 0, 12]
[0, 0, 11]
[1, 2, 30]

Notice that the conditional operators in the mapToObj argument will put 1 in the first array element if the int in the stream is exactly divisible by 3 and 0 otherwise. The second element of the array will be 2 if the int is exactly divisible by 5 and 0 otherwise. The third element is simply the int value from the stream.

Next the stream is reduced to a single Integer[] by summing values with the same indices to produce the following result in the variable a:

[2, 2, 53]

In the final step, the array noted above is converted to a stream of Integer objects, which are then converted to primitives and then reduced to the sum of all elements, producing 57 as the output. Thus, if the code had actually compiled, option B would have been correct.
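For reference, here is a corrected sketch of the quiz code, with doIt made static so that it compiles; running it confirms the walkthrough above by printing 57.

import java.util.Arrays;
import java.util.regex.Pattern;

public class FooBazFixed {
  static final Pattern PIPE_SPLITTER = Pattern.compile("\\|");
  public static void main(String[] args) {
    System.out.print(doIt("12|11|30"));   // prints 57
  }
  static int doIt(String s) {   // static, so the call from main now compiles
    var a = PIPE_SPLITTER.splitAsStream(s)
       .mapToInt(v -> Integer.valueOf(v))
       .mapToObj(v -> new Integer[]{v % 3 == 0 ? 1 : 0, v % 5 == 0 ? 2 : 0, v})
       .reduce(new Integer[]{0, 0, 0}, (i, is) -> new Integer[]{i[0] + is[0], i[1] + is[1], i[2] + is[2]});
    return Arrays.stream(a).mapToInt(Integer::intValue).sum();
  }
}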

Conclusion. The correct answer is option E.

Source: oracle.com

Friday, February 10, 2023

Quiz yourself: Handling side effects in Java


This question exemplifies a style that’s popular with test creators. It’s less popular with candidates.


Imagine that your colleague is prototyping new business logic that must work in a multithreaded application and has created the following class:

class MyRunnable implements Runnable {
    public void run() {
        synchronized (MyRunnable.class) {
            System.out.print("hello ");
            System.out.print("bye ");
        }
    }
}

To test the class, your colleague wrote the following method and then invoked the method, passing a Stream object containing two MyRunnable instances:

public static void testMyRunnable(Stream<Runnable> s) {
    s.map(
        i -> {
            new Thread(new MyRunnable()).start();
            return i;
        }
    ).count();
}

Which statement is correct? Choose one.

A. The output will be exactly hello bye hello bye.
B. The output will always start with hello followed by either hello or bye.
C. No output will be produced.
D. None of the above.


Answer. This question exemplifies a style that’s popular with test creators, but perhaps it’s less popular with candidates. The setup makes the question appear to be on one topic, when in fact it’s really about something else. In this case, the question probably appears to be about threading and mutual exclusion using synchronization. It’s really about the Stream API.

Look at the method and its invocation. The test method receives a Stream as an argument, calls a map() operation on that stream, and then executes the count() terminal operation on the resulting stream. You know from the question that the Stream argument has two items in it, so the count() method must return 2.

Here is the detail that matters most: If the Stream object is one for which the size is known without having to draw elements to exhaustion, the count() method might actually return that size without ever processing the body of the stream. Indeed, the documentation for the count() method states the following:

An implementation may choose to not execute the stream pipeline (either sequentially or in parallel) if it is capable of computing the count directly from the stream source. In such cases no source elements will be traversed and no intermediate operations will be evaluated. Behavioral parameters with side-effects, which are strongly discouraged except for harmless cases such as debugging, may be affected.

In other words, if the argument stream has a known size, there will be no output at all. If, however, the argument stream has a size that is not known until it runs, some output will be produced.

The side effects of printing “hello ” and “bye ” are therefore not impossible but are also not guaranteed. Options A, B, and C are therefore incorrect, and option D must be the correct answer.

To dig deeper, let’s investigate this idea of a stream having a known or unknown element count. The following streams have exactly two elements:

List.of(1, 3).stream()
Stream.of(1, 3)

However, because some of the elements might be removed, the following stream has an element count that must be determined dynamically:

List.of(1, 3).stream().filter(x -> x > 3 * Math.random())
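
To make that difference concrete, here is a small, hypothetical demonstration (not part of the question). Whether the first pipeline’s map() side effect is skipped depends on the JDK’s stream implementation, so treat the behavior as typical for recent JDKs rather than guaranteed.

import java.util.List;

public class CountElisionDemo {
    public static void main(String[] args) {
        // The source size is known and map() does not change it, so count()
        // is free to skip the pipeline; this print may never appear.
        long sized = List.of(1, 3).stream()
            .map(x -> { System.out.println("sized: mapped " + x); return x; })
            .count();
        System.out.println("sized count = " + sized);

        // filter() makes the size unknowable without running the pipeline,
        // so the map() side effect must run for every surviving element.
        long filtered = List.of(1, 3).stream()
            .filter(x -> x > 0)
            .map(x -> { System.out.println("filtered: mapped " + x); return x; })
            .count();
        System.out.println("filtered count = " + filtered);
    }
}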

Given that this kind of side effect can be ignored—the documentation calls it elided—how should you write code intended to be used in the map method and related methods? The guidance is that the operations passed as arguments to a stream’s methods should generally be pure functions. A key (but not the only) feature of a pure function in programming (as distinct from mathematical theory) is that it does not have observable side effects. (Printing a message is typically considered a visible side effect, though logging messages might not be. It’s complicated, and what counts as visible depends a bit on perspective.)

On this topic, the documentation has more to offer.

The eliding of side-effects may also be surprising. With the exception of terminal operations forEach and forEachOrdered, side-effects of behavioral parameters may not always be executed when the stream implementation can optimize away the execution of behavioral parameters without affecting the result of the computation.
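
In practical terms, if starting the threads is the whole point of the colleague’s test, a safer sketch (my rewrite, not the question’s code) puts the side effect in a terminal operation that is guaranteed to run, instead of smuggling it into map() and relying on count():

import java.util.stream.Stream;

public class RunnableTest {
    // forEach is a terminal operation, so its side effects are not subject to
    // the count() optimization quoted above.
    public static void testMyRunnable(Stream<Runnable> s) {
        s.forEach(r -> new Thread(r).start());
    }
}

Note that this version also starts the Runnable instances that are actually in the stream, whereas the original map() lambda created fresh MyRunnable objects and ignored the elements.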

As mentioned earlier, this question looks as if it’s about synchronization. So, in the interest of completeness, consider how this aspect will behave if the map method’s argument is invoked with each element of the stream.

The body of the run() method is synchronized on the java.lang.Class object that describes MyRunnable in the running VM (that is, MyRunnable.class). This is, in effect, a static element and, therefore, no matter how many instances of this particular MyRunnable class might exist, only one thread can be in the process of executing the sequence of print statements. That tells you that if any thread manages to print “hello ” it must continue to print “bye ” before any other thread can print anything. This would mean that, if the stream actually processed its elements through the map operation, the output would be as shown in option A.
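
One way to internalize this is that synchronizing on the Class object is the same locking strategy a static synchronized method uses. The following rephrasing (for illustration only; it is not the question’s code) behaves identically with respect to the lock:

class MyRunnableEquivalent implements Runnable {
    public void run() {
        printGreetings();
    }

    // A static synchronized method locks MyRunnableEquivalent.class, so at most
    // one thread at a time can be printing this pair of messages.
    static synchronized void printGreetings() {
        System.out.print("hello ");
        System.out.print("bye ");
    }
}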

Conclusion. The correct answer is option D.

Source: oracle.com

Monday, February 6, 2023

Curly Braces #8: REST peacefully with GraphQL and Java

GraphQL can be a very efficient way of transferring data via API calls.


I’ve been RESTing happily since the early 2000s after Roy Fielding’s doctoral dissertation, “Architectural styles and the design of network-based software architectures,” caused many in the software world to move to representational state transfer (REST) to solve their API needs.

Prior to that, I was building web-enabled software services, called service-oriented architecture (SOA) or web services. REST helped to formalize API definitions, but SOA and web services were essentially equivalent to traditional approaches in two key ways: The API developer predetermines both the endpoints and the data returned for each API.

Over the past few years, many have come to consider REST the de facto standard for API usage, even for noninternet applications. It’s easy to embed a web server to serve up a REST API, and there are plenty of frameworks available to enable it. Additionally, REST APIs are language- and platform-neutral, and those APIs are often used as a facade to enable legacy applications in a modern web or mobile application architecture. In this article, I’ll talk about both REST and another architecture, GraphQL.

REST has drawbacks


Although REST solves many API-related problems, it’s not perfect. The architecture’s deficiencies include the following.

Overfetching. REST APIs are defined to return data as a predefined structure, usually in XML or JSON. If a caller wants only some fields of data returned, too bad: They get all the data anyway. This doesn’t seem like a big deal, but this inefficiency adds up when an API returns multiple records.

Underfetching. You may need to make multiple REST calls to aggregate all the data you need for one user or back-end operation. The associated round trips are inefficient and can lead to multiple database transactions.

Overfetching and underfetching. Ironically, underfetching often leads to overfetching, because one or more of the REST calls required to satisfy a single user operation likely contain data that’s not needed or that’s duplicated.

Implicit intent. REST is built upon HTTP, and it leverages GET and PUT/POST calls to indicate read or write operations. With REST, it’s frowned upon to name API calls explicitly, for example, GetUser or CreateUser. Instead, you are encouraged to name the API after the resource (for example, User) and then rely on the HTTP operation to imply the intent, as in the sketch that follows this list. For example, an HTTP GET is equivalent to GetUser, PUT is equivalent to either UpdateUser or CreateUser, a POST is usually equivalent to CreateUser but sometimes to UpdateUser, and DELETE is equivalent to DeleteUser. Because of this, the API’s intent can be hidden behind the communication protocol, so it isn’t always obvious. It’s also not a precise match; hence the confusion between POST, PUT, and PATCH.

Lack of agility. Each REST API call exists and returns the prescribed data only because its creator decided it should. Even if the API is well designed, it’s unlikely to serve every client’s needs precisely, and changing needs will render it less of a fit over time. Additionally, once APIs are used, it’s difficult or impossible to change them without impacting external applications. Building dependencies between applications is less than agile.
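
To make the implicit-intent point concrete, here is a minimal, hypothetical JAX-RS resource (the class, paths, and names are my own invention, and I’m assuming a Jakarta REST 3.x runtime with a JSON provider configured). Nothing in the path says read, create, or delete; only the verb annotations do.

import jakarta.ws.rs.DELETE;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.PUT;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.PathParam;

// Hypothetical resource class, for illustration only.
@Path("/users/{id}")
public class UserResource {

    public record User(String id, String name) { }

    @GET    // "read user" intent comes from the verb, not the path
    public User read(@PathParam("id") String id) {
        return new User(id, "example");
    }

    @PUT    // "create or replace user" intent, again from the verb
    public User createOrReplace(@PathParam("id") String id, User user) {
        return new User(id, user.name());
    }

    @DELETE // "delete user" intent
    public void delete(@PathParam("id") String id) {
        // deletion logic would go here
    }
}

This works, but the API’s vocabulary lives in the protocol rather than in the API itself, which is exactly the hidden intent described above.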

Introducing GraphQL


In 2012, developers at Facebook created an improvement on REST, which was later released as an open source data query language called GraphQL.

GraphQL is similar to REST except that it’s data oriented: The caller precisely defines the data to be returned, and the server complies by returning that data and nothing else. For instance, if a user wants to know the balance for a bank account, the front-end code will make a call to a GraphQL web interface using a JSON-like request such as the one shown in Listing 1. (The ssn field is for a nine-digit identifier issued by the US government called a Social Security Number.)

Listing 1. A sample GraphQL query

{
    account {
        id(id: "987654321")
        name
        type
        customer {
            firstName
            lastName
            ssn
        }
        availableBalance
        totalBalance
    }
}

The server will fulfill the query with a JSON-compliant response, as shown in Listing 2.

Listing 2. A sample GraphQL query response

{
  "data": {
    "account": {
      "id": "987654321",
      "name": "Personal Checking",
      "type": "Basic Checking",
      "customer": {
        "firstName": "Eric",
        "lastName": "Bruno",
        "ssn": "123-45-6789"
      },
      "availableBalance": "1234.56",
      "totalBalance": "1234.56"
    }
  }
}

In this case, the identification of the bank account is provided as an input key for lookup. So far, this is a straightforward query. However, consider that this single GraphQL call combines data from multiple resources: the user as well as basic account and balance information from the bank.

By contrast, common REST APIs often break this into multiple endpoints and calls: one for the balance of the given account number, another for account data, and yet another for user data. Additionally, there’s likely a lot more data about the account and the user than what was returned here.

Individual REST calls to get user and account information would likely have resulted in overfetching, which is inefficient and may even be a security risk in a financial application.
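
As a rough sketch of what those round trips look like in code (the endpoints here are hypothetical and error handling is omitted), a REST client might do something like the following with Java’s built-in HTTP client, whereas the GraphQL version is a single request:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RestRoundTrips {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        String base = "https://bank.example.com/api"; // hypothetical base URL

        // Three separate round trips to gather what Listing 1 returns in one query.
        String[] paths = {
            "/accounts/987654321",          // account data
            "/accounts/987654321/balance",  // balance data
            "/customers/987654321"          // customer data
        };
        for (String path : paths) {
            HttpRequest request = HttpRequest.newBuilder(URI.create(base + path)).GET().build();
            HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(path + " -> HTTP " + response.statusCode());
        }
    }
}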

Looking inside GraphQL


Although GraphQL’s name contains the word graph, the architecture doesn’t supply true graph operations. However, GraphQL does provide a type system with introspection, a defined query language, and execution semantics with explicit indication of reads and writes. A single request, called a query, can return data for more than one resource, as shown in the previous example, by following references between them.

In other words, GraphQL queries allow you to express relationships in the call itself, dynamically, offering efficiency and flexibility.

Unlike REST APIs, which use endpoints to describe and group operations, GraphQL organizes them by schemas, data types, and associated fields. Types are used to constrain requests to only what is feasible, and they indicate how data is to be used. Using the query in Listing 1, related GraphQL types might look like Listing 3.

Listing 3. GraphQL types for the query in Listing 1

type Query {
    account: Account
}

type Account {
    id: Int
    name: String
    type: [
        "Basic Checking"
        "Advanced Checking"
        "Business Checking"
    ]
    owner: Customer
    availableBalance: Balance
    totalBalance: Balance
}

type Customer {
    firstName: String
    lastName: String
    ssn: String
    address: Address
    phone: Phone
    email: String
    active: Boolean
}

type Address {
    street: String
    city: String
    state: [
      "Alabama"
      "Alaska"
      ...
    ]
    zip: String
}

type Phone {
    ...
}

type Balance {
    amount: Float
    asOf: Date
    ...
}

As shown in this example, the GraphQL type system is expressive and comprehensive.

GraphQL mutations


Notice that the GraphQL type description in Listing 3 begins with the keyword Query, which indicates a read schema. To mark an API as writable, GraphQL provides the mutation schema. Every GraphQL API must have a query type, but a mutation type is optional; a mutation is similar to a query in that you specify nested fields and a return type. The following is an example of a mutation:

mutation CreateAccount($account: Account) {
    createAccount(account: $account) {
        id
        name
    }
}

The createAccount mutation creates a new account and returns the id and name of that account. The matching request, which supplies the $account variable as an input object, would look like the following:

{
  "account": {
    "name": "Personal Checking",
    "type": "Basic Checking",
    "customer": {
      "firstName": "Eric",
      "lastName": "Bruno",
      "ssn": "...",
      "address": "...",
      "phone": "...",
      "email": "eric@ericbruno.com",
      "active": true
    }
  }
}

The result would include the new account id and name, as shown below.

{
  "data": {
    "createAccount": {
      "id": "987654321",
      "name": "Personal Checking",
      "Customer:" {
        "ssn": "..."
      }
    }
  }
}

The mutation in this example can create a new customer along with the account or return an existing customer if the record is located with the ssn provided; GraphQL is flexible this way.

The GraphQL schema includes more advanced features, such as interfaces, lists, the ability to specify bounds on fields, enumerations, unions, inputs, operations, and more. There’s also a sophisticated validation schema based on the GraphQL type system.

Java and GraphQL


The GraphQL ecosystem includes open source helper libraries in many languages, including Java, that make it easy to create and consume GraphQL APIs.

On GitHub, you’ll find Java classes to help generate queries, define schemas, execute queries, and parse the results. Other GraphQL Java libraries are available and also integrate with other tools and server frameworks such as Spring.
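
For instance, here is a minimal sketch using the graphql-java library (my choice for illustration; other GraphQL Java libraries follow a similar shape, and Java 17 or later is assumed for the text block). It parses a tiny SDL schema, wires a data fetcher for the account field, and executes a query that asks only for the fields the caller wants.

import java.util.Map;

import graphql.ExecutionResult;
import graphql.GraphQL;
import graphql.schema.GraphQLSchema;
import graphql.schema.idl.RuntimeWiring;
import graphql.schema.idl.SchemaGenerator;
import graphql.schema.idl.SchemaParser;
import graphql.schema.idl.TypeDefinitionRegistry;

public class GraphQLJavaSketch {
    public static void main(String[] args) {
        // A deliberately tiny schema; a real one would mirror Listing 3.
        String sdl = """
                type Query {
                  account(id: ID): Account
                }
                type Account {
                  id: ID
                  name: String
                }
                """;

        // Parse the SDL into a type registry.
        TypeDefinitionRegistry typeRegistry = new SchemaParser().parse(sdl);

        // Wire the account field to a data fetcher. Here it returns a hardcoded
        // map; in practice it would call a service or a database.
        RuntimeWiring wiring = RuntimeWiring.newRuntimeWiring()
            .type("Query", builder -> builder.dataFetcher("account",
                env -> Map.of("id", env.getArgument("id"), "name", "Personal Checking")))
            .build();

        GraphQLSchema schema = new SchemaGenerator().makeExecutableSchema(typeRegistry, wiring);
        GraphQL graphQL = GraphQL.newGraphQL(schema).build();

        // Execute a query that requests only the fields the caller cares about.
        ExecutionResult result = graphQL.execute("{ account(id: \"987654321\") { id name } }");
        System.out.println(result.getData().toString());
    }
}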

Source: oracle.com