Friday, December 29, 2023

Introducing GraalOS

At Oracle Cloud World 2023, Oracle announced GraalOS, an innovative new application deployment technology that will be first made available through Oracle Cloud Infrastructure Functions.

What is GraalOS?

GraalOS is a high performance serverless Java-based application deployment technology. It uses Oracle GraalVM Native Image to run your application as a native machine executable—taking full advantage of the latest x64 and AArch64 processor features available on Oracle Cloud Infrastructure (OCI). An application powered by GraalOS, referred to here as a GraalOS application, should be less expensive to operate, helping to reduce your cloud costs.

Fast Start

A GraalOS application starts fast with virtually no “cold start” cost. Unlike container-based platforms that suffer from significant cold start costs, a GraalOS application is a small native Linux executable that starts in tens of milliseconds.

Reduced Memory

A GraalOS application requires significantly less memory thanks to GraalVM Native Image ahead-of-time (AOT) compilation. In turn, lower memory usage has a direct impact on your operating costs: pricing structures for most cloud services, including OCI Functions, have a significant memory usage element.

Run On Demand

A GraalOS application is automatically suspended and resumed when called—with no idle cost. Applications and functions that are not receiving requests are terminated on most serverless platforms after a timeout period has been exceeded. An application that is subsequently invoked is subject to a cold start cost. GraalOS’s ability to suspend and rapidly resume your idle applications means no cold start.

Applications, not Containers

GraalOS runs native Linux executables directly—taking advantage of the latest advances in hardware-enforced application isolation. This approach removes the need to package your application into a container, which eliminates challenges such as selecting a secure container image and ensuring that the latest security patches are in place and updated regularly.

Cloud Native

With support for stateful and stateless microservices and functions, GraalOS is ideal for cloud native applications. Both short-lived functions and long-running microservices will benefit from GraalOS features such as virtually no cold start, transparent suspend and resume, and no cost idle.

OCI Functions Powered by GraalOS

The first use of GraalOS technology is in OCI Functions: it will add a new “Graal Function” type that will start much faster and require less memory than existing OCI functions. Thanks to the built-in OCI Functions triggers provided by OCI services such as Events, Connector Hub, Data Integration, API Gateway, and Notifications, all these services will be able to take advantage of GraalOS-powered functions with no changes.

The Road Ahead

GraalOS is an exciting new application deployment technology. Its first use will be new GraalOS-powered functions that bring significant benefits to OCI Functions users. We’ll follow that up with a full application deployment platform planned for next year. Stay tuned for more GraalOS developments!

Source: oracle.com

Wednesday, December 27, 2023

Oracle NetSuite on GraalVM

Oracle has completely moved its integrated application suite of cloud business software, known as Oracle NetSuite (hereafter, “NetSuite”), to Oracle GraalVM for JDK 17. This recent migration has increased the overall performance of the application suite and reduced its consumption of resources. Keep reading to learn more about the migration, performance gains, and future plans.

The Technology Behind NetSuite


At its heart, NetSuite is a large-scale Java application that runs on more than 10,000 application servers worldwide. Each instance of the application consists of about 100,000 loaded Java classes.

Oracle achieved the seamless transition to Oracle GraalVM without any changes to the application's source code or modifications to its deployment configuration. There were no classpath changes and no dependency conflicts with the libraries used by the application.

Why Oracle GraalVM?


Oracle GraalVM measurably reduced the CPU time needed by NetSuite. Because NetSuite runs on tens of thousands of application servers, any reduction in CPU time has the potential to reduce the number of servers. Switching to the Graal JIT compiler was simple: the team just replaced the existing JDK installation, with no additional configuration.

Java Performance Results


The team tested the performance of NetSuite running with the Graal JIT and the performance with their existing JDK installation. Both were based on the same version of JDK: 17.0.6.

The results were divided into two categories based on the type of NetSuite server: Request Servers and Background Job Servers.

Using Graal JIT reduced the CPU time on both server types, achieving a 1.08x speedup on average on the request serving servers (7.39% CPU time reduction) and a 1.07x speedup on average on the background job processing servers (6.37% CPU time reduction). On some workloads, the average CPU time was reduced by as much as 13%, a 1.15x speedup.
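The speedup factors and CPU-time reductions are two views of the same measurement: a fractional CPU-time reduction $r$ implies a speedup of $1/(1-r)$:

```latex
\text{speedup} = \frac{t_{\text{before}}}{t_{\text{after}}} = \frac{1}{1-r},
\qquad
\frac{1}{1-0.0739}\approx 1.08,\quad
\frac{1}{1-0.0637}\approx 1.07,\quad
\frac{1}{1-0.13}\approx 1.15.
```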

Executing JavaScript on the JVM


In addition to the acceleration GraalVM provides for Java, NetSuite also leverages Graal.js in its SuiteScript extension language. Built on JavaScript, SuiteScript enables complete customisation and automation of business processes. Using the SuiteScript APIs, core business records and user information can be accessed and manipulated via scripts that are executed at pre-defined events, such as field change or form submission. NetSuite chose Graal.js as their JavaScript runtime because of its support for the latest ECMAScript standards, the ease of migration from Rhino, and its outstanding performance. Graal.js is written using the GraalVM Truffle framework which enables compilation of guest language code (JavaScript in this case) into optimized machine code.

Conclusion and Future Plans


Compared to the previous installation, GraalVM reduced the CPU time consumed by the NetSuite application in the production environment by 6.4–7.4% on average, and by as much as 13% on some workloads. The transition to GraalVM was as simple as upgrading to the new GraalVM release.

The design of GraalVM enables maintenance as well as the addition of incremental improvements. Therefore, we expect NetSuite to achieve even better performance characteristics in the future thanks to new compiler optimizations and Truffle framework improvements.

Source: oracle.com

Friday, December 22, 2023

Unleashing the Power of JDK 20: A Comprehensive Guide

Introduction


In the dynamic landscape of Java Development, staying ahead is not just an option; it's a necessity. In this article, we delve deep into the intricacies of JDK 20, exploring its features, enhancements, and how it stands as a game-changer in the realm of Java programming.

Understanding JDK 20


Java Development Kit (JDK) 20, the latest iteration of the JDK series, brings forth a myriad of improvements aimed at optimizing development workflows and enhancing the overall user experience. Let's unravel the key facets that make JDK 20 a standout in the Java development ecosystem.

1. Feature-rich Modules

JDK 20 introduces a set of feature-rich modules, each designed to address specific aspects of Java development. From enhanced security protocols to streamlined compilation processes, these modules empower developers to navigate the complexities of coding with ease.

2. Performance Boosts

One of the noteworthy aspects of JDK 20 is its focus on performance enhancements. The compiler optimizations and runtime improvements integrated into this version ensure that Java applications run faster and more efficiently, marking a significant stride in the evolution of Java programming.

Benefits of Upgrading to JDK 20


Upgrading to JDK 20 is not merely an option but a strategic move for developers aiming to elevate their projects. Let's explore the tangible benefits that come with making the leap to this advanced version.

1. Enhanced Security Measures

Security is paramount in the digital age, and JDK 20 doesn't disappoint. With reinforced security measures, including upgraded cryptographic algorithms and secure coding practices, developers can build robust, secure applications that stand resilient against evolving cyber threats.

2. Improved Developer Productivity

JDK 20 streamlines the development process, offering tools and utilities that enhance developer productivity. From advanced debugging capabilities to optimized build tools, the JDK 20 ecosystem empowers developers to write cleaner code in less time.

3. Compatibility with Modern Technologies

Staying relevant in the ever-evolving tech landscape is crucial. JDK 20 ensures compatibility with the latest technologies, providing developers with the flexibility to integrate cutting-edge features into their applications seamlessly.

How to Upgrade to JDK 20


Now that we've established the compelling reasons to embrace JDK 20, let's delve into the practicalities of upgrading your development environment.

1. Assessing Compatibility

Before making the transition, it's imperative to assess the compatibility of your existing codebase with JDK 20. Conduct a thorough analysis using tools like jdeps to identify potential issues and ensure a smooth migration.

2. Backup and Version Control

Prior to upgrading, create a robust backup of your projects. Utilize version control systems such as Git to track changes and have a failsafe mechanism in place to revert to previous versions if needed.
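As a sketch (branch and tag names here are hypothetical, and the scratch repository exists only to make the commands self-contained), the failsafe can be a dedicated migration branch plus an annotated tag marking the last known-good state on the old JDK:

```shell
# Scratch repository so the demo is self-contained; in a real project you
# would run only the branch and tag steps.
mkdir -p /tmp/jdk20-migration-demo && cd /tmp/jdk20-migration-demo
git init -q .
git config user.email you@example.com
git config user.name "You"
echo "17" > .java-version
git add . && git commit -qm "state before JDK 20 upgrade"
# The actual checkpoint: a branch for migration work, a tag to fall back to.
git checkout -q -b jdk20-migration
git tag -a pre-jdk20 -m "last known-good build on the previous JDK"
git tag -l pre-jdk20   # prints "pre-jdk20"
```

If the upgrade goes wrong, `git checkout pre-jdk20` restores the last known-good state.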

3. Gradual Implementation

To minimize disruptions, consider a gradual implementation strategy. Start by upgrading non-production environments, allowing developers to acclimate to the new features and address any unforeseen challenges before rolling out the changes to production.

Conclusion

In conclusion, JDK 20 is more than just an update; it's a strategic move towards a future-proof and efficient Java development ecosystem. By understanding its features, benefits, and the seamless upgrade process, developers can harness the full potential of JDK 20, ensuring their projects thrive in the ever-evolving world of Java programming.

Wednesday, December 20, 2023

Autoscale Oracle WebLogic Server for Oracle Container Engine for Kubernetes

Prometheus can do more than monitor Oracle WebLogic Server. It can automatically scale clusters, too.


Elasticity (scaling up or scaling down) of an Oracle WebLogic Server cluster lets you manage resources based on demand and enhances the reliability of your applications while managing resource costs.

This article follows on from Monitoring Oracle WebLogic Server for Oracle Container Engine for Kubernetes, which showed how to use two open source tools—Grafana and Prometheus—to monitor an Oracle WebLogic Server domain deployed in Oracle WebLogic Server for Oracle Cloud Infrastructure (OCI) Container Engine for Kubernetes.

In this article, you’ll learn the steps required to automatically scale an Oracle WebLogic Server cluster provisioned on WebLogic Server for OCI Container Engine for Kubernetes through an Oracle Cloud Marketplace stack when a monitored metric (the total number of open sessions for an application) goes over a threshold. The Oracle WebLogic Monitoring Exporter application scrapes the runtime metrics for specific WebLogic Server instances and feeds them to Prometheus.

Because Prometheus has access to all available WebLogic Server metrics data, you can use any of the metrics data to specify rules for scaling. Based on collected metrics data and configured alert rule conditions, Prometheus’ Alertmanager will send an alert to trigger the desired scaling action and change the number of running managed servers in the WebLogic Server cluster.

You’ll see how to implement a custom notification integration using the webhook receiver, a user-defined REST service that is triggered when a scaling alert event occurs. After the alert rule matches the specified conditions, the Alertmanager sends an HTTP request to the URL specified as a webhook to request the scaling action. (For more information about the webhook used in this sample demo, see this GitHub repository.)
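The Alertmanager side of that wiring is a small piece of configuration. A minimal sketch (the receiver name and URL are assumptions for illustration; the service name and hook id must match the webhook deployment built later in this article):

```yaml
# Alertmanager fragment: route alerts to a user-defined REST endpoint.
# "/hooks/scaleup" is the hook id exposed by the webhook server.
route:
  receiver: weblogic-scaling
receivers:
  - name: weblogic-scaling
    webhook_configs:
      - url: http://webhook.monitoring.svc.cluster.local:9000/hooks/scaleup
```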

WebLogic Server for OCI Container Engine for Kubernetes is available as a set of applications in Oracle Cloud Marketplace. You use WebLogic Server for OCI Container Engine for Kubernetes to provision a WebLogic Server domain with the WebLogic Server administration server and each managed server running in different pods in the cluster.

WebLogic Server for OCI Container Engine for Kubernetes uses Jenkins to automate the creation of custom images for a WebLogic Server domain and the deployment of these images to the cluster. WebLogic Server for OCI Container Engine for Kubernetes also creates a shared file system and mounts it to WebLogic Server pods, a Jenkins controller pod, and an admin host instance.

The application provisions a public load balancer to distribute traffic across the managed servers in your domain and a private load balancer to provide access to the WebLogic Server administration console and the Jenkins console.

WebLogic Server for OCI Container Engine for Kubernetes also creates an NGINX ingress controller in the cluster. NGINX is an open-source reverse proxy that controls the flow of traffic to pods within the cluster. The ingress controller is used to expose services of type LoadBalancer using the load balancing capabilities of Oracle Cloud Infrastructure Load Balancing and Oracle Container Engine for Kubernetes.

Overview: How it all works


Figure 1 shows the WebLogic Server for OCI Container Engine for Kubernetes cluster components. The WebLogic Server domain consists of one administration server (AS) pod, one or more managed server (MS) pods, and the Oracle WebLogic Monitoring Exporter application deployed on the WebLogic Server cluster.

Figure 1. The cluster components

Additional components are the Prometheus pods, the Alertmanager pods, the webhook server pod, and the Oracle WebLogic Server Kubernetes Operator.

The six steps shown in Figure 1 are as follows:

  1. Scrape metrics from managed servers: The Prometheus server scrapes the metrics from the WebLogic Server pods. The pods are annotated to let the Prometheus server know the metrics endpoint available for scraping the WebLogic Server metrics.
  2. Create alert: Alertmanager evaluates the alert rules and creates an alert when an alert condition is met.
  3. Send alert: The alert is sent to the webhook application hook endpoint defined in the Alertmanager configuration.
  4. Invoke scaling action: When the webhook application’s hook is triggered, it initiates the scaling action, which is to scale up or down the WebLogic Server cluster.
  5. Request scale up/down: The Oracle WebLogic Server Kubernetes Operator invokes Kubernetes APIs to perform the scaling.
  6. Scale up/down the cluster: The cluster is scaled up or down by setting the value for the domain custom resource’s spec.clusters[].replicas parameter.
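Step 6 boils down to a one-field change on the domain custom resource. A sketch of the relevant fragment (the API version, domain name, and cluster name are placeholders for illustration):

```yaml
# Domain custom resource fragment: the operator reconciles the WebLogic
# cluster to match spec.clusters[].replicas.
apiVersion: weblogic.oracle/v8
kind: Domain
metadata:
  name: wlsdomain
spec:
  clusters:
    - clusterName: wlscluster
      replicas: 3   # desired number of running managed servers
```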

Figure 2 is a network diagram showing all the components of the WebLogic Server for OCI Container Engine for Kubernetes stack, including the admin host and bastion host provisioned with the stack. The figure also depicts the placement of different pods in the following two node pools:

◉ The WebLogic Server pods go on the WebLogic Server node pool.
◉ The Prometheus, Alertmanager, and other non-WebLogic Server pods are placed on the non-WebLogic Server node pool of the WebLogic Server for OCI Container Engine for Kubernetes stack.


Figure 2. Network diagram for the stack

There are three things you should be aware of.

  • The setup for the Prometheus deployment and autoscaling will be carried out from the admin host.
  • The WebLogic Server for OCI Container Engine for Kubernetes stack comes with Jenkins-based continuous integration and continuous deployment (CI/CD) to automate the creation of updated domain Docker images. The testwebapp application deployment, described below, uses Jenkins to update the domain Docker image with the testwebapp application.
  • All images referenced by pods in the cluster are stored in Oracle Cloud Infrastructure Registry. The webhook application’s Docker image will be pushed to Oracle Cloud Infrastructure Registry, and the deployment of the webhook will use the image tagged as webhook:latest from Oracle Cloud Infrastructure Registry.

Prerequisites for this project


The article Monitoring Oracle WebLogic Server for Oracle Container Engine for Kubernetes describes how to install Prometheus community Helm charts on WebLogic Server for OCI Container Engine for Kubernetes. The Prometheus Helm charts will also deploy Alertmanager. The article also describes deploying the Oracle WebLogic Monitoring Exporter application using a WebLogic Server for OCI Container Engine for Kubernetes CI/CD pipeline.

This article builds on what was covered earlier and assumes you have already provisioned a WebLogic Server for OCI Container Engine for Kubernetes cluster through an Oracle Cloud Marketplace stack and set up Prometheus and Alertmanager. You should verify the following prerequisites are met:

  • The Prometheus and Alertmanager pods are up and running in the monitoring namespace.
  • The Prometheus and Alertmanager consoles are accessible via the internal load balancer’s IP address.
  • The WebLogic Server metrics exposed by the Oracle WebLogic Monitoring Exporter application are available and seen from the Prometheus console.

You should also be able to access the testwebapp application that will be used for this project.

Demonstrating WebLogic Server cluster scaling in five steps


The goal here is to demonstrate the scaling action on the WebLogic Server cluster based on one of several metrics exposed by the Oracle WebLogic Monitoring Exporter application. This is done by scaling up the WebLogic Server cluster when the total open-sessions count for an application exceeds a threshold. There are several other metrics that the Oracle WebLogic Monitoring Exporter application exposes, which can be used for defining the alert rule for scaling up or scaling down the WebLogic Server cluster.

For the sake of brevity, the webhook pod is deployed on any of the node pools; ideally, however, it should be deployed to the non-WebLogic Server node pool (identified by the label Jenkins) using a nodeSelector in the pod deployment.

There are five steps required to automatically scale a WebLogic Server cluster provisioned on WebLogic Server for OCI Container Engine for Kubernetes through Oracle Cloud Marketplace when a monitored metric goes over a configured threshold. For this article, the triggering metric will be the number of open sessions for an application.

The steps are as follows:

  1. Deploy a testwebapp application to the WebLogic Server cluster.
  2. Create the Docker image for a webhook application.
  3. Deploy the webhook application in the cluster.
  4. Set up Alertmanager to send out alerts to the webhook application endpoint.
  5. Configure an Alertmanager rule to send out the alert when the total number of open sessions for the testwebapp across all servers in the WebLogic Server cluster exceeds a configured threshold value.

Once this work is complete, you can trigger the alert condition and then observe that the WebLogic Server cluster is properly scaled as a result.
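For step 5, the alert rule is ordinary Prometheus configuration. A sketch, assuming the exporter publishes the open-session count as webapp_config_open_sessions_current_count (the rule group name, metric name, and threshold of 15 sessions are illustrative):

```yaml
# Prometheus alert rule: fire when testwebapp's total open sessions,
# summed across all managed servers, stay above the threshold.
groups:
  - name: weblogic-scaling
    rules:
      - alert: scaleup
        expr: sum(webapp_config_open_sessions_current_count{app="testwebapp"}) > 15
        for: 1m
```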

Step 1: Deploy the testwebapp


You will start by deploying the testwebapp application to the WebLogic Server cluster.

First, create the testwebapp archive zip file that bundles the testwebapp.war file with the Oracle WebLogic Server Deploy Toolkit deployment model YAML file. Execute the following steps from the WebLogic Server for OCI Container Engine for Kubernetes admin host instance.

cd /u01/shared
mkdir -p wlsdeploy/applications
rm -f wlsdeploy/applications/wls-exporter.war
mkdir -p model
wget https://github.com/bhabermaas/kubernetes-projects/raw/master/apps/testwebapp.war -P wlsdeploy/applications

DOMAIN_CLUSTER_NAME=$(curl -s -H "Authorization: Bearer Oracle" http://169.254.169.254/opc/v2/instance/ | jq -r '.metadata.wls_cluster_name')
cat > model/deploy-testwebapp.yaml << EOF
appDeployments:
  Application:
    'testwebapp' :
      SourcePath: 'wlsdeploy/applications/testwebapp.war'
      Target: '$DOMAIN_CLUSTER_NAME'
      ModuleType: war
      StagingMode: nostage
EOF

zip -r testwebapp_archive.zip wlsdeploy model

The contents of the testwebapp_archive.zip file are shown below.

[opc@wrtest12-admin shared]$ unzip -l testwebapp_archive.zip
Archive:  testwebapp_archive.zip
  Length      Date    Time    Name
---------  ---------- -----   ----
        0  07-27-2021 04:47   wlsdeploy/
        0  08-20-2021 23:21   wlsdeploy/applications/
     3550  08-20-2021 23:22   wlsdeploy/applications/testwebapp.war
        0  08-20-2021 23:20   model/
      190  08-20-2021 23:22   model/deploy-testwebapp.yaml
---------                     -------
     3740                     5 files

The script above creates a testwebapp_archive.zip file that can be used with WebLogic Server for OCI Container Engine for Kubernetes in an update-domain CI/CD pipeline job. Open the Jenkins console and browse to the update-domain job. Click Build with Parameters. For Archive Source, select Shared File System, and set Archive File Location to /u01/shared/testwebapp_archive.zip, as shown in Figure 3.

Figure 3. The parameters for the Jenkins pipeline update-domain build

Once the job is complete, the testwebapp application should be deployed. You can verify that it’s accessible by executing the following script on the admin host. The return code should be 200, indicating that the testwebapp is accessible via the external load balancer IP address.

INGRESS_NS=$(curl -s -H "Authorization: Bearer Oracle" http://169.254.169.254/opc/v2/instance/ | jq -r '.metadata.ingress_namespace')
SERVICE_NAME=$(curl -s -H "Authorization: Bearer Oracle" http://169.254.169.254/opc/v2/instance/ | jq -r '.metadata.service_name')
EXTERNAL_LB_IP=$(kubectl get svc "$SERVICE_NAME-external" -n $INGRESS_NS -ojsonpath="{.status.loadBalancer.ingress[0].ip}")
curl -kLs -o /dev/null -I -w "%{http_code}" https://$EXTERNAL_LB_IP/testwebapp

Step 2: Create the webhook application Docker image


The example in this article uses the webhook application, which is third-party open source code available on GitHub. I am using webhook version 2.6.4. Here is how to create the Docker image.

Create the apps, scripts, and webhooks directories, as follows:

cd /u01/shared
mkdir -p webhook/apps
mkdir -p webhook/scripts
mkdir -p webhook/webhooks

Copy the webhook executable to the apps directory.

wget -O webhook/apps/webhook https://github.com/bhabermaas/kubernetes-projects/raw/master/apps/webhook
chmod +x webhook/apps/webhook

Download the scalingAction.sh script file into the scripts directory, as follows:

wget https://raw.githubusercontent.com/oracle/weblogic-kubernetes-operator/main/operator/scripts/scaling/scalingAction.sh -P webhook/scripts

Create scaleUpAction.sh in the scripts directory.

DOMAIN_NS=$(curl -s -H "Authorization: Bearer Oracle" http://169.254.169.254/opc/v2/instance/ | jq -r '.metadata.wls_domain_namespace')
DOMAIN_UID=$(curl -s -H "Authorization: Bearer Oracle" http://169.254.169.254/opc/v2/instance/ | jq -r '.metadata.wls_domain_uid')
OPERATOR_NS=$(curl -s -H "Authorization: Bearer Oracle" http://169.254.169.254/opc/v2/instance/ | jq -r '.metadata.wls_operator_namespace')
SERVICE_NAME=$(curl -s -H "Authorization: Bearer Oracle" http://169.254.169.254/opc/v2/instance/ | jq -r '.metadata.service_name')
CLUSTER_NAME=$(curl -s -H "Authorization: Bearer Oracle" http://169.254.169.254/opc/v2/instance/ | jq -r '.metadata.wls_cluster_name')

cat > webhook/scripts/scaleUpAction.sh << EOF
#!/bin/bash

echo scale up action >> scaleup.log

MASTER=https://\$KUBERNETES_SERVICE_HOST:\$KUBERNETES_PORT_443_TCP_PORT

echo Kubernetes master is \$MASTER

source /var/scripts/scalingAction.sh --action=scaleUp --domain_uid=$DOMAIN_UID --cluster_name=$CLUSTER_NAME --kubernetes_master=\$MASTER --wls_domain_namespace=$DOMAIN_NS --operator_namespace=$OPERATOR_NS --operator_service_name=internal-weblogic-operator-svc --operator_service_account=$SERVICE_NAME-operator-sa

EOF
chmod +x webhook/scripts/*

Similar to the script listed above, you can create a scaleDownAction.sh script by passing the --action=scaleDown parameter to the scalingAction.sh script.
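A minimal sketch of that scale-down script, assuming the same metadata variables (DOMAIN_UID, CLUSTER_NAME, DOMAIN_NS, OPERATOR_NS, SERVICE_NAME) have already been populated by the instance-metadata lookups shown above:

```shell
# Scale-down counterpart to scaleUpAction.sh; only --action differs.
mkdir -p webhook/scripts
cat > webhook/scripts/scaleDownAction.sh << EOF
#!/bin/bash

echo scale down action >> scaledown.log

MASTER=https://\$KUBERNETES_SERVICE_HOST:\$KUBERNETES_PORT_443_TCP_PORT

source /var/scripts/scalingAction.sh --action=scaleDown --domain_uid=$DOMAIN_UID --cluster_name=$CLUSTER_NAME --kubernetes_master=\$MASTER --wls_domain_namespace=$DOMAIN_NS --operator_namespace=$OPERATOR_NS --operator_service_name=internal-weblogic-operator-svc --operator_service_account=$SERVICE_NAME-operator-sa
EOF
chmod +x webhook/scripts/scaleDownAction.sh
```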

Create Dockerfile.webhook for the webhook application, as follows. Note that the code below uses an HTTP endpoint for the webhook application. To have the application serve hooks over HTTPS (which would be more secure in a real deployment but is more complex for this example), see the webhook app documentation.

cat > webhook/Dockerfile.webhook << EOF
FROM store/oracle/serverjre:8

COPY apps/webhook /bin/webhook

COPY webhooks/hooks.json /etc/webhook/

COPY scripts/scaleUpAction.sh /var/scripts/

COPY scripts/scalingAction.sh /var/scripts/

RUN chmod +x /var/scripts/*.sh

CMD ["-verbose", "-hooks=/etc/webhook/hooks.json", "-hotreload"]

ENTRYPOINT ["/bin/webhook"]
EOF

Create the hooks.json file in the webhooks directory. Similar to scaleup, you can define a hook for scaledown that invokes the scaleDownAction.sh script.

cat > webhook/webhooks/hooks.json << EOF
[
  {
    "id": "scaleup",
    "execute-command": "/var/scripts/scaleUpAction.sh",
    "command-working-directory": "/var/scripts",
    "response-message": "scale-up call ok\n"
  }
]
EOF
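If you also created a scaleDownAction.sh script, the scaledown hook becomes a second element of the same JSON array (the hook id "scaledown" here is illustrative):

```json
[
  {
    "id": "scaleup",
    "execute-command": "/var/scripts/scaleUpAction.sh",
    "command-working-directory": "/var/scripts",
    "response-message": "scale-up call ok\n"
  },
  {
    "id": "scaledown",
    "execute-command": "/var/scripts/scaleDownAction.sh",
    "command-working-directory": "/var/scripts",
    "response-message": "scale-down call ok\n"
  }
]
```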

Build the webhook Docker image tagged as webhook:latest and push it to the Oracle Cloud Infrastructure Registry repository. Use Docker Hub credentials to log in to Docker Hub before doing a Docker build.

cd webhook
OCIR_URL=$(curl -s -H "Authorization: Bearer Oracle" http://169.254.169.254/opc/v2/instance/ | jq -r '.metadata.ocir_url')
OCIR_USER=$(curl -s -H "Authorization: Bearer Oracle" http://169.254.169.254/opc/v2/instance/ | jq -r '.metadata.ocir_user')
OCIR_PASSWORD_OCID=$(curl -s -H "Authorization: Bearer Oracle" http://169.254.169.254/opc/v2/instance/ | jq -r '.metadata.ocir_password')
OCIR_PASSWORD=$(python /u01/scripts/utils/oci_api_utils.py get_secret $OCIR_PASSWORD_OCID)
OCIR_NS=$(curl -s -H "Authorization: Bearer Oracle" http://169.254.169.254/opc/v2/instance/ | jq -r '.metadata.ocir_namespace')

docker image rm webhook:latest
docker login -u <Docker Hub user> -p <Docker Hub password>
docker build -t webhook:latest -f Dockerfile.webhook .
docker image tag webhook:latest $OCIR_URL/$OCIR_NS/webhook
docker login $OCIR_URL -u $OCIR_USER -p "$OCIR_PASSWORD"
docker image push $OCIR_URL/$OCIR_NS/webhook

Step 3: Deploy the webhook application into the WebLogic Server OCI Container Engine for Kubernetes cluster


Now that the webhook application Docker image has been pushed to Oracle Cloud Infrastructure Registry, you can update the webhook-deployment.yaml file to use that image.

Start by creating the ocirsecrets registry secret in the monitoring namespace so that the webhook deployment can pull the webhook:latest image from Oracle Cloud Infrastructure Registry.

OCIR_URL=$(curl -s -H "Authorization: Bearer Oracle" http://169.254.169.254/opc/v2/instance/ | jq -r '.metadata.ocir_url')
OCIR_USER=$(curl -s -H "Authorization: Bearer Oracle" http://169.254.169.254/opc/v2/instance/ | jq -r '.metadata.ocir_user')
OCIR_PASSWORD_OCID=$(curl -s -H "Authorization: Bearer Oracle" http://169.254.169.254/opc/v2/instance/ | jq -r '.metadata.ocir_password')
OCIR_PASSWORD=$(python /u01/scripts/utils/oci_api_utils.py get_secret $OCIR_PASSWORD_OCID)
kubectl create secret docker-registry ocirsecrets  --docker-server="$OCIR_URL" --docker-username="$OCIR_USER" --docker-password="$OCIR_PASSWORD" -n monitoring

Run the following script, which downloads the webhook-deployment.yaml file and updates it with imagePullSecrets.

cd /u01/shared
wget https://raw.githubusercontent.com/bhabermaas/kubernetes-projects/master/kubernetes/webhook-deployment.yaml -P prometheus

OCIR_URL=$(curl -s -H "Authorization: Bearer Oracle" http://169.254.169.254/opc/v2/instance/ | jq -r '.metadata.ocir_url')
OCIR_NS=$(curl -s -H "Authorization: Bearer Oracle" http://169.254.169.254/opc/v2/instance/ | jq -r '.metadata.ocir_namespace')
OPERATOR_NS=$(curl -s -H "Authorization: Bearer Oracle" http://169.254.169.254/opc/v2/instance/ | jq -r '.metadata.wls_operator_namespace')
INTERNAL_OPERATOR_CERT_VAL=$(kubectl get configmap weblogic-operator-cm -n $OPERATOR_NS -o jsonpath='{.data.internalOperatorCert}')
sed -i "s/extensions\/v1beta1/apps\/v1/g" prometheus/webhook-deployment.yaml
sed -i "s/verbs: \[\"get\", \"list\", \"watch\", \"update\"\]/verbs: \[\"get\", \"list\", \"watch\", \"update\", \"patch\"\]/g" prometheus/webhook-deployment.yaml
sed -i "s/image: webhook:latest/image: $OCIR_URL\/$OCIR_NS\/webhook:latest/g" prometheus/webhook-deployment.yaml
sed -i "s/imagePullPolicy: .*$/imagePullPolicy: Always/g" prometheus/webhook-deployment.yaml
sed -i "s/value: LS0t.*$/value: $INTERNAL_OPERATOR_CERT_VAL/g" prometheus/webhook-deployment.yaml
sed -i "85 i \ \ \ \ \ \ imagePullSecrets:" prometheus/webhook-deployment.yaml
sed -i "86 i  \ \ \ \ \ \ \ \ - name: ocirsecrets" prometheus/webhook-deployment.yaml

To help you see what the script above did, here’s a sample updated webhook-deployment.yaml file.

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: webhook
rules:
- apiGroups: [""]
  resources:
  - nodes
  - namespaces
  - nodes/proxy
  - services
  - endpoints
  - pods
  - services/status
  verbs: ["get", "list", "watch"]
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs: ["get", "list", "watch"]
- apiGroups: ["weblogic.oracle"]
  resources: ["domains"]
  verbs: ["get", "list", "watch", "update", "patch"]
---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
  namespace: monitoring
---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: webhook
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: webhook
subjects:
- kind: ServiceAccount
  name: default
  namespace: monitoring
---

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    name: webhook
  name: webhook
  namespace: monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      name: webhook
  template:
    metadata:
      creationTimestamp: null
      labels:
        name: webhook
    spec:
      containers:
      - image: iad.ocir.io/idiaaaawa6h/webhook:latest
        imagePullPolicy: Always
        name: webhook
        env:
        - name: INTERNAL_OPERATOR_CERT
          value: LS0tLS1CRUdJTiBDRVJUSUZJQ0FUR...NBVEUtLS0tLQo=
        ports:
        - containerPort: 9000
          protocol: TCP
        resources:
          limits:
            cpu: 500m
            memory: 2500Mi
          requests:
            cpu: 100m
            memory: 100Mi
      restartPolicy: Always
      securityContext: {}
      terminationGracePeriodSeconds: 30
      imagePullSecrets:
        - name: ocirsecrets
---

apiVersion: v1
kind: Service
metadata:
  name: webhook
  namespace: monitoring
spec:
  selector:
    name: webhook
  type: ClusterIP
  ports:
  - port: 9000
Now you can deploy the webhook.

kubectl apply -f prometheus/webhook-deployment.yaml
while [[ $(kubectl get pods -n monitoring -l name=webhook -o 'jsonpath={..status.conditions[?(@.type=="Ready")].status}') != "True" ]]; do echo "waiting for pod" && sleep 1; done

Verify that all resources are running in the monitoring namespace.

[opc@wrtest12-admin shared]$ kubectl get all -n monitoring
NAME                                                 READY   STATUS    RESTARTS   AGE
pod/grafana-5b475466f7-hfr4v                         1/1     Running   0          17d
pod/prometheus-alertmanager-5c8db4466d-trpnt         2/2     Running   0          46h
pod/prometheus-kube-state-metrics-86dc6bb59f-6cfnd   1/1     Running   0          46h
pod/prometheus-node-exporter-cgtnd                   1/1     Running   0          46h
pod/prometheus-node-exporter-fg9m8                   1/1     Running   0          46h
pod/prometheus-server-649d869bd4-swxmk               2/2     Running   0          46h
pod/Webhook-858cb4b794-mwfqp                         1/1     Running   0          8h

NAME                                    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/grafana                         ClusterIP   10.96.106.93    <none>        80/TCP         17d
service/prometheus-alertmanager         NodePort    10.96.172.217   <none>        80:32000/TCP   46h
service/prometheus-kube-state-metrics   ClusterIP   10.96.193.169   <none>        8080/TCP       46h
service/prometheus-node-exporter        ClusterIP   None            <none>        9100/TCP       46h
service/prometheus-server               NodePort    10.96.214.99    <none>        80:30000/TCP   46h
service/Webhook                         ClusterIP   10.96.238.121   <none>        9000/TCP       8h

NAME                                      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/prometheus-node-exporter   2         2         2       2            2           <none>          46h

NAME                                            READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/grafana                         1/1     1            1           17d
deployment.apps/prometheus-alertmanager         1/1     1            1           46h
deployment.apps/prometheus-kube-state-metrics   1/1     1            1           46h
deployment.apps/prometheus-server               1/1     1            1           46h
deployment.apps/Webhook                         1/1     1            1           8h

NAME                                                       DESIRED   CURRENT   READY   AGE
replicaset.apps/grafana-5b475466f7                         1         1         1       17d
replicaset.apps/prometheus-alertmanager-5c8db4466d         1         1         1       46h
replicaset.apps/prometheus-kube-state-metrics-86dc6bb59f   1         1         1       46h
replicaset.apps/prometheus-server-649d869bd4               1         1         1       46h
replicaset.apps/Webhook-858cb4b794                         1         1         1       8h

Step 4: Configuring Alertmanager


It’s time to set up Alertmanager to send out alerts to the webhook application endpoint. The trigger rules for those alerts will be configured in Step 5.

Verify that the metric wls_webapp_config_open_sessions_current_count, which the alert rule will use, is available in Prometheus. Browse to the Prometheus console and click Open Metrics Explorer to confirm the metric appears in the list, as shown in Figure 4.

Autoscale Oracle WebLogic Server for Oracle Container Engine for Kubernetes
Figure 4. Verifying the metric is listed in the Prometheus console

The expression sum(wls_webapp_config_open_sessions_current_count{app="testwebapp"}) > 15 checks for open sessions for the testwebapp across all managed servers. For example, if the value of the open-sessions count on managed server 1 for testwebapp is 10 and on managed server 2 it is 8, the total open-sessions count will be 18. In that case, the alert will be fired.

Verify the expression and the current value of the metric from the Prometheus console, as shown in Figure 5.

Autoscale Oracle WebLogic Server for Oracle Container Engine for Kubernetes
Figure 5. Verifying the expression and current value of the trigger metric

Configure Alertmanager to invoke the webhook when an alert is received. To do this, edit the prometheus-alertmanager configmap, as shown below. The commands change the URL from http://Webhook.Webhook.svc.cluster.local:8080/log to http://Webhook:9000/hooks/scaleup, and they change the receiver name from logging-Webhook to web.hook.

kubectl get configmap prometheus-alertmanager -n monitoring -o yaml > prometheus/prometheus-alertmanager-cm.yaml
sed -i "s/name: logging-Webhook/name: web.hook/g" prometheus/prometheus-alertmanager-cm.yaml
sed -i "s/receiver: logging-Webhook/receiver: web.hook/g" prometheus/prometheus-alertmanager-cm.yaml
sed -i "s/url: .*$/url: http:\/\/Webhook:9000\/hooks\/scaleup/g" prometheus/prometheus-alertmanager-cm.yaml
kubectl apply -f prometheus/prometheus-alertmanager-cm.yaml

Step 5: Update the Prometheus alert rule


Here is how to update the Prometheus alert rule (stored in the prometheus-server configmap and delivered through Alertmanager) to send the alert when the total number of open sessions for the testwebapp across all servers in the WebLogic Server cluster exceeds the configured threshold of 15.

kubectl get configmap prometheus-server -n monitoring -o yaml > prometheus/prometheus-server-cm.yaml
sed -i "18 i \ \ \ \ \ \ - alert: ScaleUp" prometheus/prometheus-server-cm.yaml
sed -i "19 i \ \ \ \ \ \ \ \ annotations:" prometheus/prometheus-server-cm.yaml
sed -i "20 i \ \ \ \ \ \ \ \ \ \ description: Firing when total sessions active greater than 15" prometheus/prometheus-server-cm.yaml
sed -i "21 i \ \ \ \ \ \ \ \ \ \ summary: Scale up when current sessions is greater than 15" prometheus/prometheus-server-cm.yaml
sed -i "22 i \ \ \ \ \ \ \ \ expr: sum(wls_webapp_config_open_sessions_current_count{app=\"testwebapp\"}) > 15" prometheus/prometheus-server-cm.yaml
sed -i "23 i \ \ \ \ \ \ \ \ for: 1m" prometheus/prometheus-server-cm.yaml
kubectl apply -f prometheus/prometheus-server-cm.yaml

Verify the alert rule shows up in the Prometheus alerts screen, as shown in Figure 6. Be patient: It may take a couple of minutes for the rule to show up.

Autoscale Oracle WebLogic Server for Oracle Container Engine for Kubernetes
Figure 6. Verifying that the alert rule appears in Prometheus

See if everything works


All the pieces are in place to autoscale the WebLogic Server cluster. To see if autoscaling works, create an alert by opening more than the configured threshold of 15 sessions of the testwebapp application. You can do that with the following script, which uses curl from the WebLogic Server for OCI Container Engine for Kubernetes admin host to create 17 sessions.

cat > max_sessions.sh << EOF
#!/bin/bash
COUNTER=0

MAXCURL=17

while [ \$COUNTER -lt \$MAXCURL ]; do
   curl -kLs -o /dev/null -I -w "%{http_code}" https://\$1/testwebapp
   let COUNTER=COUNTER+1
   sleep 1
done
EOF

chmod +x max_sessions.sh

Next, run this script from the admin host.

INGRESS_NS=$(curl -s -H "Authorization: Bearer Oracle" http://169.254.169.254/opc/v2/instance/ | jq -r '.metadata.ingress_namespace')
SERVICE_NAME=$(curl -s -H "Authorization: Bearer Oracle" http://169.254.169.254/opc/v2/instance/ | jq -r '.metadata.service_name')
EXTERNAL_LB_IP=$(kubectl get svc "$SERVICE_NAME-external" -n $INGRESS_NS -ojsonpath="{.status.loadBalancer.ingress[0].ip}")
echo $EXTERNAL_LB_IP
./max_sessions.sh $EXTERNAL_LB_IP

You can verify the current value for the open-sessions metric again in the Prometheus console. The sum of the values across all managed servers should show at least 17, as shown in Figure 7.

Autoscale Oracle WebLogic Server for Oracle Container Engine for Kubernetes
Figure 7. Verifying the number of open testwebapp sessions

After that, verify that the ScaleUp alert is in the firing state (the state will change from inactive to pending to firing), as shown in Figure 8. It may take a couple of moments for the state to change.

Autoscale Oracle WebLogic Server for Oracle Container Engine for Kubernetes
Figure 8. Confirming the ScaleUp alert is firing

Once the alert has fired, the automatic scaling of the WebLogic Server cluster should be triggered via the webhook. You can verify that scaling has been triggered by looking into the webhook pod’s logs and by looking at the number of managed server pods in the domain namespace.

First, verify the webhook pod’s log, as shown below, to see if the scaling was triggered and handled without issue.

[opc@wrtest12-admin shared]$ kubectl logs Webhook-858cb4b794-mwfqp -n monitoring
[Webhook] 2021/08/23 18:41:46 version 2.6.4 starting
[Webhook] 2021/08/23 18:41:46 setting up os signal watcher
[Webhook] 2021/08/23 18:41:46 attempting to load hooks from /etc/Webhook/hooks.json
[Webhook] 2021/08/23 18:41:46 os signal watcher ready
[Webhook] 2021/08/23 18:41:46 found 1 hook(s) in file
[Webhook] 2021/08/23 18:41:46   loaded: scaleup
[Webhook] 2021/08/23 18:41:46 setting up file watcher for /etc/Webhook/hooks.json
[Webhook] 2021/08/23 18:41:46 serving hooks on http://0.0.0.0:9000/hooks/{id}
[Webhook] 2021/08/23 18:53:29 incoming HTTP request from 10.244.0.68:49284
[Webhook] 2021/08/23 18:53:29 scaleup got matched
[Webhook] 2021/08/23 18:53:29 scaleup hook triggered successfully
[Webhook] 2021/08/23 18:53:29 2021-08-23T18:53:29Z | 200 |    725.833µs | Webhook:9000 | POST /hooks/scaleup
[Webhook] 2021/08/23 18:53:29 executing /var/scripts/scaleUpAction.sh (/var/scripts/scaleUpAction.sh) with arguments ["/var/scripts/scaleUpAction.sh"] and environment [] using /var/scripts as cwd
[Webhook] 2021/08/23 18:53:30 command output: Kubernetes master is https://10.96.0.1:443

[Webhook] 2021/08/23 18:53:30 finished handling scaleup
[Webhook] 2021/08/23 19:49:19 incoming HTTP request from 10.244.0.68:60516
[Webhook] 2021/08/23 19:49:19 scaleup got matched
[Webhook] 2021/08/23 19:49:19 scaleup hook triggered successfully
[Webhook] 2021/08/23 19:49:19 2021-08-23T19:49:19Z | 200 |    607.125µs | Webhook:9000 | POST /hooks/scaleup
[Webhook] 2021/08/23 19:49:19 executing /var/scripts/scaleUpAction.sh (/var/scripts/scaleUpAction.sh) with arguments ["/var/scripts/scaleUpAction.sh"] and environment [] using /var/scripts as cwd
[Webhook] 2021/08/23 19:49:21 command output: Kubernetes master is https://10.96.0.1:443

[Webhook] 2021/08/23 19:49:21 finished handling scaleup

And verify that the managed server count has changed from 2 to 3.

[opc@wrtest12-admin shared]$ kubectl get po -n wrtest12-domain-ns
NAME                                      READY   STATUS    RESTARTS   AGE
wrtest12domain-wrtest12-adminserver       1/1     Running   0          29h
wrtest12domain-wrtest12-managed-server1   1/1     Running   0          29h
wrtest12domain-wrtest12-managed-server2   1/1     Running   0          5m22s
wrtest12domain-wrtest12-managed-server3   1/1     Running   0          92s

Source: oracle.com

Monday, December 18, 2023

Eyes on the horizon of evolving customer expectations

Eyes on the horizon of evolving customer expectations

Over the next few weeks, I look forward to sharing with you – in a five-part series – our point of view on the future of customer experience. Let’s get started. There is a lot to talk about.

When we raise our eyes to the CX horizon, what do we see?

It’s clear the future of CX is exciting. It’s a place where customers act and react in the blink of an eye. They expect to be part of immersive, digitally native, physically enhanced shared moments; they expect to be remembered, valued, and rewarded for the length and steadiness of their ongoing relationship with your business. Being a customer now has never been more empowering – or more frustrating. Being a CX professional now – business or technical – has never been more exciting – or more difficult.

Some very clear expectations and pressures have changed how customers engage, how businesses deliver value, and how enterprise IT teams deliver on CX innovation. That future is certainly promising – and much closer than many may realize.

Let’s take a look.

We, customers, have perpetually evolving expectations.

We all communicate differently. Not just in real life, but also in real work. As consumers we have long used asynchronous channels socially. As customers we will pay more just for a unified experience anywhere, anytime, all the time. And small, continuous gestures mean a lot to our experience of a brand – like the best forward-thinking brands that thank frequent customers with small gifts such as extra points or a recognition of loyalty for a long trip or big purchase. Every little engagement matters. They all add up. And accustomed to the convenience of always-on experiences that ‘just work’ across all channels in private life, business professionals now expect the same convenience when engaging with brands, whether at work or at play.

We are embracing outcomes over ownership. Subscriptions help us spread costs and payments over time rather than pay upfront, with more predictable finances. Every business today – from retail and consumer goods to industrial manufacturing – is changing how it offers new digital experiences for existing products and services. The success of the sharing economy and of manufactured goods delivered ‘as a service’ has changed expectations for how we consume at home and at work: from tacos (“Taco Bell brings back the $10 Taco Subscription”) to how builders pay for the use of heavy machinery, such as pay-by-the-use construction cranes. Subscription-based products and services offer countless advantages, with mutual benefits for both the customer and the business. For customers, subscription business models grant the right to access products, services, or experiences in a recurring fashion and meet our desire to become asset-light. For businesses, subscription models provide new opportunities for upselling and cross-selling to increase share of wallet within the existing customer base.

We expect privacy. We certainly will pay more – engage more – with personalized, convenient experiences and services. But there is an important caveat. We expect the utmost discretion for the privilege of accessing any part of our digital identity. When used for good, sharing our data can earn us quick compensation for a flight delay, or empathetic outreach from an insurance company when we are having trouble resolving a problem. When our data is not shielded, businesses risk losing customer trust forever.

And we’re not making it easy on businesses. Because it’s not enough to just use our data for customer analysis or targeting. Today we expect brands to proactively lead us along the best journey for me (me, me, me, me). Recommendations, next best offers, next best purchases, and any other predictive engagements must directly benefit me, as if I were your only and most important customer. Use my data to personalize the experience – but also preempt problems or add value throughout my journey.

We – as customers – are clearly forcing a lot of change and raising the bar every day for what we expect from customer experience delivery. The bar for businesses has never been higher. And so, in the second blog in this series, I look forward to sharing my thoughts on the way businesses need to deliver value and constant customer experience innovation.

Source: oracle.com

Friday, December 15, 2023

Unleashing the Power of Java SE: A Comprehensive Guide

Unleashing the Power of Java SE: A Comprehensive Guide

Introduction


Welcome to our comprehensive guide on Java SE, where we delve into the intricacies of this powerful programming language. In the ever-evolving landscape of technology, understanding Java SE is not just an advantage but a necessity for developers aiming to stay at the forefront. Join us as we explore the nuances, capabilities, and applications of Java SE that make it a cornerstone in the world of software development.

What is Java SE?


Java SE, or Java Standard Edition, is a robust and versatile programming platform that forms the foundation for developing and deploying Java applications. It provides a comprehensive set of APIs, tools, and libraries, empowering developers to create diverse software solutions. Whether you're a seasoned developer or a novice, Java SE's flexibility makes it an ideal choice for a wide range of projects.

Key Features of Java SE


Object-Oriented Paradigm

At the heart of Java SE is its object-oriented programming paradigm. This approach enhances code reusability, scalability, and maintainability. Developers can encapsulate data and functionalities within objects, fostering a modular and efficient coding structure.
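As a minimal sketch of this idea (the Account class and its fields are hypothetical, invented here purely for illustration), encapsulation keeps state private and funnels all changes through methods that enforce the invariants:

```java
// Encapsulation sketch: the balance is private, so it can only change
// through methods that validate the operation first.
public class Account {
    private long balanceCents;  // hidden state; callers cannot corrupt it directly

    public Account(long initialCents) {
        if (initialCents < 0) throw new IllegalArgumentException("negative balance");
        this.balanceCents = initialCents;
    }

    public void deposit(long cents) {
        if (cents <= 0) throw new IllegalArgumentException("deposit must be positive");
        balanceCents += cents;
    }

    public long balanceCents() {
        return balanceCents;
    }

    public static void main(String[] args) {
        Account a = new Account(1_000);
        a.deposit(250);
        System.out.println(a.balanceCents()); // prints 1250
    }
}
```

Because the field is private, any future change to the validation rules happens in one place rather than at every call site.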

Platform Independence

One of Java SE's standout features is its platform independence, often referred to as "write once, run anywhere" (WORA). This means that Java code can be written on one platform and executed on any other with Java Virtual Machine (JVM) support, providing unparalleled versatility.
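As a tiny sketch of what WORA means in practice, the program below uses only platform-neutral APIs, so the class file produced by javac on, say, Windows runs unmodified on Linux or macOS; only the runtime values it reports differ:

```java
// The same compiled bytecode runs on any OS with a JVM.
public class Portable {
    public static void main(String[] args) {
        System.out.println("JVM: " + System.getProperty("java.version"));
        System.out.println("OS:  " + System.getProperty("os.name"));
        // Platform details are exposed through APIs, not baked into the binary.
        System.out.println("Path separator: " + java.io.File.separator);
    }
}
```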

Rich Standard Library

Java SE boasts an extensive standard library, offering a plethora of pre-built classes and methods. This not only accelerates development but also ensures that developers have access to a wide range of tools, reducing the need for reinventing the wheel.
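For instance, purely as an illustrative sketch, the collections framework together with the Stream API (both part of the standard library since Java 8) can group strings by length with no hand-written loops and no third-party dependencies:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// The standard library does the heavy lifting: grouping with one expression.
public class LibraryDemo {
    public static Map<Integer, List<String>> byLength(List<String> words) {
        return words.stream().collect(Collectors.groupingBy(String::length));
    }

    public static void main(String[] args) {
        Map<Integer, List<String>> grouped =
                byLength(List.of("jvm", "java", "jdk", "jshell"));
        // e.g. {3=[jvm, jdk], 4=[java], 6=[jshell]} (map iteration order not guaranteed)
        System.out.println(grouped);
    }
}
```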

Security Mechanisms

Security is paramount in today's digital landscape, and Java SE excels in this aspect. Its java.security and javax.net.ssl APIs cover cryptography, authentication, and secure communication, helping developers build secure applications and mitigate potential vulnerabilities. (Note that the long-standing Java Security Manager has been deprecated for removal since JDK 17 and should not be relied on in new code.)
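As one small example of these built-in security APIs, java.security.MessageDigest computes cryptographic hashes with no external dependencies (the HexFormat helper used below requires JDK 17 or later):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.HexFormat;

// Hash a string with SHA-256 using only JDK classes.
public class DigestDemo {
    public static String sha256Hex(String s) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        byte[] digest = md.digest(s.getBytes(StandardCharsets.UTF_8));
        return HexFormat.of().formatHex(digest);
    }

    public static void main(String[] args) throws Exception {
        // Well-known SHA-256 of "hello":
        System.out.println(sha256Hex("hello"));
        // prints 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
    }
}
```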

Java SE in Action


Enterprise Applications

Java SE finds widespread use in developing enterprise-level applications. Its scalability and reliability make it an ideal choice for crafting applications that can seamlessly handle large datasets and concurrent user interactions.
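A hedged sketch of that concurrency support: java.util.concurrent ships thread pools out of the box, so a fixed pool can fan out independent tasks (the trivial tasks below stand in for real request handlers) and collect their results:

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Run independent tasks on a bounded pool and aggregate their results.
public class ConcurrentDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<Callable<Integer>> tasks = List.of(() -> 1, () -> 2, () -> 3);
        int sum = 0;
        for (Future<Integer> f : pool.invokeAll(tasks)) {
            sum += f.get();  // blocks until that task completes
        }
        System.out.println(sum); // prints 6
        pool.shutdown();
    }
}
```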

Web Development

Java SE is not limited to server-side applications; it plays a crucial role in web development. Technologies like JavaServer Pages (JSP) and Servlets empower developers to create dynamic and interactive web applications.
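Note that JSP and Servlets themselves require a servlet container such as Tomcat and today live in Jakarta EE rather than Java SE; the JDK does, however, ship a simple built-in HTTP server (com.sun.net.httpserver) that is enough for a sketch like the hypothetical one below:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Minimal HTTP endpoint using only the JDK's built-in server.
public class HelloHttp {
    public static void main(String[] args) throws Exception {
        // Port 8000 is an arbitrary choice; backlog 0 uses the system default.
        HttpServer server = HttpServer.create(new InetSocketAddress(8000), 0);
        server.createContext("/hello", exchange -> {
            byte[] body = "Hello from Java SE".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start(); // try: curl http://localhost:8000/hello
    }
}
```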

Mobile Development

In the realm of mobile development, the Java language has left an indelible mark. Android, the world's most popular mobile operating system, relies heavily on Java for building robust and feature-rich applications, although it uses its own runtime and supports only a subset of the Java SE APIs.

Java SE Best Practices


To harness the full potential of Java SE, adopting best practices is essential.

Code Optimization

Writing efficient and optimized code ensures that applications run smoothly and consume minimal system resources. Use a profiler, such as Java Flight Recorder (bundled with the JDK) or VisualVM, to identify and rectify performance bottlenecks.

Version Control

Implementing a robust version control system, such as Git, is crucial for collaborative development. This ensures code integrity, facilitates collaboration, and simplifies the debugging process.

Continuous Integration

Embrace the concept of continuous integration with tools like Jenkins or Travis CI. This practice enhances code quality, detects issues early, and facilitates a streamlined development workflow.

Conclusion

In conclusion, Java SE stands as a stalwart in the realm of programming languages, offering a powerful and flexible platform for developers. Whether you are embarking on a new project or enhancing an existing one, the capabilities of Java SE are indispensable.

Source: oracle.com